The sparklyr package from RStudio provides a high-level interface to Spark from R. This means you can create R objects that point to data frames stored in the Spark cluster and apply familiar R paradigms (like dplyr) to the data, all while leveraging Spark’s distributed architecture and without having to worry about memory limitations in R. You can also call the distributed machine-learning algorithms included in Spark directly from R.
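A minimal sketch of that workflow, assuming a local Spark installation (via `sparklyr::spark_install()`) and the nycflights13 package for sample data — the connection target and data are illustrative, not from the guide:

```r
library(sparklyr)
library(dplyr)

# Connect to Spark (here a local instance; for a cluster you would
# point master at the cluster's Spark master URL)
sc <- spark_connect(master = "local")

# Copy an R data frame into Spark; flights_tbl is a remote reference
# to the Spark DataFrame, not an in-memory R object
flights_tbl <- copy_to(sc, nycflights13::flights, "flights")

# Familiar dplyr verbs are translated to Spark SQL and executed
# on the cluster, not in R
flights_tbl %>%
  group_by(carrier) %>%
  summarise(mean_delay = mean(dep_delay, na.rm = TRUE)) %>%
  arrange(desc(mean_delay))

spark_disconnect(sc)
```

The key point is that the dplyr pipeline above runs inside Spark; only the (small) summarised result is collected back to R.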

If you don’t happen to have a cluster of Spark-enabled machines up in a nearby well-ventilated closet, you can easily spin one up in your favorite cloud service. For Azure, one option is to launch a Spark cluster in HDInsight, which also includes the extensions of Microsoft ML Server. While this service recently had a significant price reduction, it’s still more expensive than running a “vanilla” Spark-and-R cluster. If you’d like to take the vanilla route, a new guide details how to set up a Spark cluster on Azure for use with sparklyr.

AZTK

All of the details are provided in the link below, but the guide essentially walks through the Azure Distributed Data Engineering Toolkit (AZTK) shell commands to provision a Spark cluster, connect to it, and then interact with it via RStudio Server. This includes the ability to launch the cluster with pre-emptable low-priority VMs, a cost-effective option (up to 80% cheaper!) for non-critical workloads.
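The provisioning steps look roughly like the following. This is a sketch, not the guide's exact commands: the cluster name and sizes are illustrative, these commands require Azure Batch/Storage credentials, and flag names such as `--size-low-pri` should be checked against the current AZTK documentation.

```shell
# Install the Azure Distributed Data Engineering Toolkit CLI
pip install aztk

# Create a configuration directory, then fill in your Azure
# credentials in .aztk/secrets.yaml
aztk spark init

# Provision a cluster; low-priority VMs (flag name illustrative)
# can cut compute costs by up to 80% for non-critical workloads
aztk spark cluster create --id sparklyr-demo --size-low-pri 4

# SSH into the master node with port forwarding; with an R-enabled
# Docker image, RStudio Server is then reachable in your browser
aztk spark cluster ssh --id sparklyr-demo
```

From RStudio Server on the cluster, you would then connect with `sparklyr::spark_connect()` as usual.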

Github (Azure): How to use SparklyR on Azure with AZTK


