Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster

This article demonstrates how to develop Apache Spark applications on Azure HDInsight using the Azure Toolkit plug-in for the IntelliJ IDE. Azure HDInsight is a managed, open-source analytics service in the cloud. The service allows you to use open-source frameworks like Hadoop, Apache Spark, Apache Hive, and Apache Kafka.

You can use the Azure Toolkit plug-in in a few ways:

  • Develop and submit a Scala Spark application to an HDInsight Spark cluster.
  • Access your Azure HDInsight Spark cluster resources.
  • Develop and run a Scala Spark application locally.

In this article, you learn how to:

  • Use the Azure Toolkit for IntelliJ plug-in
  • Develop Apache Spark applications
  • Submit an application to an Azure HDInsight cluster

Prerequisites

Install Scala plugin for IntelliJ IDEA

Steps to install the Scala plugin:

  1. Open IntelliJ IDEA.

  2. On the welcome screen, navigate to Configure>Plugins to open the Plugins window.

    IntelliJ IDEA enables the Scala plugin.

  3. Select Install for the Scala plugin that is featured in the new window.

    IntelliJ IDEA installs the Scala plugin.

  4. After the plugin installs successfully, you must restart the IDE.

Create a Spark Scala application for an HDInsight Spark cluster

  1. Start IntelliJ IDEA, and select Create New Project to open the New Project window.

  2. Select Azure Spark/HDInsight from the left pane.

  3. Select Spark Project (Scala) from the main window.

  4. From the Build tool drop-down list, select one of the following options:

    • Maven for Scala project-creation wizard support.

    • SBT for managing the dependencies and building for the Scala project.

      IntelliJ IDEA New Project dialog box.

  5. Select Next.

  6. In the New Project window, provide the following information:

    Project name: Enter a name. This article uses myApp.
    Project location: Enter the location to save your project.
    Project SDK: This field might be blank on your first use of IDEA. Select New... and navigate to your JDK.
    Spark Version: The creation wizard integrates the proper version for the Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select Spark 1.x. Otherwise, select Spark 2.x. This example uses Spark 2.3.0 (Scala 2.11.8).

    Selecting the Apache Spark SDK.

  7. Select Finish. It may take a few minutes before the project becomes available.

  8. The Spark project automatically creates an artifact for you. To view the artifact, do the following steps:

    a. From the menu bar, navigate to File>Project Structure....

    b. From the Project Structure window, select Artifacts.

    c. Select Cancel after viewing the artifact.

    Artifact info in the dialog box.

  9. Add your application source code by doing the following steps:

    a. From Project, navigate to myApp>src>main>scala.

    b. Right-click scala, and then navigate to New>Scala Class.

    Commands for creating a Scala class from Project.

    c. In the Create New Scala Class dialog box, provide a name, select Object in the Kind drop-down list, and then select OK.

    Create New Scala Class dialog box.

    d. The myApp.scala file then opens in the main view. Replace the default code with the code found below:

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    object myApp {
        def main(arg: Array[String]): Unit = {
            val conf = new SparkConf().setAppName("myApp")
            val sc = new SparkContext(conf)

            // Read the sample sensor data available on all HDInsight Spark clusters
            val rdd = sc.textFile("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")

            // Find the rows that have only one digit in the seventh column in the CSV file
            val rdd1 = rdd.filter(s => s.split(",")(6).length() == 1)

            // Write the output to /HVACOut under the cluster's default storage container
            rdd1.saveAsTextFile("wasbs:///HVACOut")
        }
    }
    

    The code reads the data from HVAC.csv (available on all HDInsight Spark clusters), retrieves the rows that have only one digit in the seventh column in the CSV file, and writes the output to /HVACOut under the default storage container for the cluster.
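
To sanity-check the output after the job finishes, a minimal sketch (assuming the same SparkContext sc and that the job above has already run) might read the results back:

    // Minimal verification sketch (assumes the job above already wrote
    // wasbs:///HVACOut to the cluster's default storage container):
    val out = sc.textFile("wasbs:///HVACOut")
    out.take(10).foreach(println) // print the first few filtered rows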

Connect to your HDInsight cluster

You can either sign in to your Azure subscription, or link an HDInsight cluster. Use the Ambari username/password or domain-joined credentials to connect to your HDInsight cluster.

Sign in to your Azure subscription

  1. From the menu bar, navigate to View>Tool Windows>Azure Explorer.

    IntelliJ IDEA shows azure explorer.

  2. From Azure Explorer, right-click the Azure node, and then select Sign In.

    IntelliJ IDEA explorer right-click azure.

  3. In the Azure Sign In dialog box, choose Device Login, and then select Sign in.

    IntelliJ IDEA azure sign-in device login.

  4. In the Azure Device Login dialog box, click Copy&Open.

    IntelliJ IDEA azure device login.

  5. In the browser interface, paste the code, and then click Next.

    Microsoft enter code dialog for HDI.

  6. Enter your Azure credentials, and then close the browser.

    Microsoft enter e-mail dialog for HDI.

  7. After you're signed in, the Select Subscriptions dialog box lists all the Azure subscriptions that are associated with the credentials. Select your subscription and then select the Select button.

    The Select Subscriptions dialog box.

  8. From Azure Explorer, expand HDInsight to view the HDInsight Spark clusters that are in your subscriptions.

    IntelliJ IDEA Azure Explorer main view.

  9. To view the resources (for example, storage accounts) that are associated with the cluster, you can further expand a cluster-name node.

    Azure Explorer storage accounts.

You can link an HDInsight cluster by using the Apache Ambari managed username. Similarly, for a domain-joined HDInsight cluster, you can link by using the domain and username, such as user1@contoso.com. Also you can link a Livy Service cluster.

  1. From the menu bar, navigate to View>Tool Windows>Azure Explorer.

  2. From Azure Explorer, right-click the HDInsight node, and then select Link A Cluster.

    Azure Explorer link cluster context menu.

  3. The available options in the Link A Cluster window will vary depending on which value you select from the Link Resource Type drop-down list. Enter your values and then select OK.

    • HDInsight Cluster

      Link Resource Type: Select HDInsight Cluster from the drop-down list.
      Cluster Name/URL: Enter the cluster name.
      Authentication Type: Leave as Basic Authentication.
      User Name: Enter the cluster user name; the default is admin.
      Password: Enter the password for the user name.

      IntelliJ IDEA link a cluster dialog.

    • Livy Service

      Link Resource Type: Select Livy Service from the drop-down list.
      Livy Endpoint: Enter the Livy endpoint (for an HDInsight cluster, it typically has the form https://<clustername>.azurehdinsight.net/livy).
      Cluster Name: Enter the cluster name.
      Yarn Endpoint: Optional.
      Authentication Type: Leave as Basic Authentication.
      User Name: Enter the cluster user name; the default is admin.
      Password: Enter the password for the user name.

      IntelliJ IDEA link Livy cluster dialog.

  4. You can see your linked cluster from the HDInsight node.

    Azure Explorer linked cluster1.

  5. You can also unlink a cluster from Azure Explorer.

    Azure Explorer unlinked cluster.

Run a Spark Scala application on an HDInsight Spark cluster

After creating a Scala application, you can submit it to the cluster.

  1. From Project, navigate to myApp>src>main>scala>myApp. Right-click myApp, and select Submit Spark Application (It will likely be located at the bottom of the list).

    The Submit Spark Application to HDInsight command.

  2. In the Submit Spark Application dialog window, select 1. Spark on HDInsight.

  3. In the Edit configuration window, provide the following values and then select OK:

    Spark clusters (Linux only): Select the HDInsight Spark cluster on which you want to run your application.
    Select an Artifact to submit: Leave the default setting.
    Main class name: The default value is the main class from the selected file. You can change the class by selecting the ellipsis (...) and choosing another class.
    Job configurations: You can change the default keys and values. For more information, see Apache Livy REST API (an illustrative sketch follows these steps).
    Command-line arguments: You can enter arguments separated by spaces for the main class if needed.
    Referenced Jars and Referenced Files: You can enter the paths for the referenced Jars and files, if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen 2 clusters. For more information: Apache Spark Configuration. See also, How to upload resources to cluster.
    Job Upload Storage: Expand to reveal additional options.
    Storage Type: Select Use Azure Blob to upload from the drop-down list.
    Storage Account: Enter your storage account.
    Storage Key: Enter your storage key.
    Storage Container: Select your storage container from the drop-down list once Storage Account and Storage Key have been entered.

    The Spark Submission dialog box.

  4. Select SparkJobRun to submit your project to the selected cluster. The Remote Spark Job in Cluster tab displays the job execution progress at the bottom. You can stop the application by clicking the red button.

    The Apache Spark Submission window.
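
For illustration only, the keys you can set under Job configurations correspond to fields of the Apache Livy batch request; a hypothetical set of values, expressed here as a Scala map, might look like this:

    // Hypothetical job-configuration key/value pairs; the names mirror fields
    // of the Apache Livy batch request body (the values are illustrative only).
    val jobConfig = Map(
      "driverMemory"   -> "4g", // memory allocated to the driver process
      "executorMemory" -> "4g", // memory allocated to each executor
      "numExecutors"   -> "2"   // number of executors to launch for the job
    )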

Debug Apache Spark applications locally or remotely on an HDInsight cluster

We also recommend another way of submitting the Spark application to the cluster. You can do so by setting the parameters in the Run/Debug configurations IDE. See Debug Apache Spark applications locally or remotely on an HDInsight cluster with Azure Toolkit for IntelliJ through SSH.

Access and manage HDInsight Spark clusters by using Azure Toolkit for IntelliJ

You can do various operations by using Azure Toolkit for IntelliJ. Most of the operations are started from Azure Explorer. From the menu bar, navigate to View>Tool Windows>Azure Explorer.

Access the job view

  1. From Azure Explorer, navigate to HDInsight><Your Cluster>>Jobs.

    IntelliJ Azure Explorer Job view node.

  2. In the right pane, the Spark Job View tab displays all the applications that were run on the cluster. Select the name of the application for which you want to see more details.

    Spark Job View Application details.

  3. To display basic running job information, hover over the job graph. To view the stages graph and information that every job generates, select a node on the job graph.

    Spark Job View Job stage details.

  4. To view frequently used logs, such as Driver Stderr, Driver Stdout, and Directory Info, select the Log tab.

    Spark Job View Log details.

  5. You can view the Spark history UI and the YARN UI (at the application level). Select a link at the top of the window.

Access the Spark history server

  1. From Azure Explorer, expand HDInsight, right-click your Spark cluster name, and then select Open Spark History UI.

  2. When you're prompted, enter the cluster's admin credentials, which you specified when you set up the cluster.

  3. On the Spark history server dashboard, you can use the application name to look for the application that you just finished running. In the preceding code, you set the application name by using val conf = new SparkConf().setAppName("myApp"). Your Spark application name is myApp.

Start the Ambari portal

  1. From Azure Explorer, expand HDInsight, right-click your Spark cluster name, and then select Open Cluster Management Portal(Ambari).

  2. When you're prompted, enter the admin credentials for the cluster. You specified these credentials during the cluster setup process.

Manage Azure subscriptions

By default, Azure Toolkit for IntelliJ lists the Spark clusters from all your Azure subscriptions. If necessary, you can specify the subscriptions that you want to access.

  1. From Azure Explorer, right-click the Azure root node, and then select Select Subscriptions.

  2. From the Select Subscriptions window, clear the check boxes next to the subscriptions that you don't want to access, and then select Close.

Spark Console

You can run Spark Local Console(Scala) or run Spark Livy Interactive Session Console(Scala).

Spark Local Console(Scala)

Ensure you've satisfied the WINUTILS.EXE prerequisite.

  1. From the menu bar, navigate to Run>Edit Configurations....

  2. From the Run/Debug Configurations window, in the left pane, navigate to Apache Spark on HDInsight>[Spark on HDInsight] myApp.

  3. From the main window, select the Locally Run tab.

  4. Provide the following values, and then select OK:

    Job main class: The default value is the main class from the selected file. You can change the class by selecting the ellipsis (...) and choosing another class.
    Environment variables: Ensure the value for HADOOP_HOME is correct.
    WINUTILS.exe location: Ensure the path is correct.

    Local Console Set Configuration.

  5. From Project, navigate to myApp>src>main>scala>myApp.

  6. From the menu bar, navigate to Tools>Spark Console>Run Spark Local Console(Scala).

  7. Two dialog boxes may then be displayed asking whether you want to auto-fix dependencies. If so, select Auto Fix.

    IntelliJ IDEA Spark Auto Fix dialog1.

    IntelliJ IDEA Spark Auto Fix dialog2.

  8. The console should look similar to the picture below. In the console window, type sc.appName, and then press Ctrl+Enter. The result is shown (a few more commands to try are sketched after these steps). You can end the local console by clicking the red button.

    IntelliJ IDEA local console result.
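
As a follow-up to step 8, a couple of other expressions you might evaluate in the local console (which pre-creates sc for you) are sketched below:

    // Illustrative console expressions; sc is pre-created by the console session.
    sc.appName                   // prints the application name
    sc.parallelize(1 to 5).sum() // quick sanity check of the local SparkContext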

Spark Livy Interactive Session Console(Scala)

  1. From the menu bar, navigate to Run>Edit Configurations....

  2. From the Run/Debug Configurations window, in the left pane, navigate to Apache Spark on HDInsight>[Spark on HDInsight] myApp.

  3. From the main window, select the Remotely Run in Cluster tab.

  4. Provide the following values, and then select OK:

    Spark clusters (Linux only): Select the HDInsight Spark cluster on which you want to run your application.
    Main class name: The default value is the main class from the selected file. You can change the class by selecting the ellipsis (...) and choosing another class.

    Interactive Console Set Configuration.

  5. From Project, navigate to myApp>src>main>scala>myApp.

  6. From the menu bar, navigate to Tools>Spark Console>Run Spark Livy Interactive Session Console(Scala).

  7. The console should look similar to the picture below. In the console window, type sc.appName, and then press Ctrl+Enter. The result is shown (a short sketch of other commands follows these steps). You can end the console by clicking the red button.

    IntelliJ IDEA Interactive Console Result.
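
For illustration, the sketch below shows expressions you might run in the Livy interactive session; sc is provided by the remote session, and the sample path exists on HDInsight Spark clusters:

    // Illustrative expressions for the Livy interactive session.
    sc.appName
    val rdd = sc.textFile("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")
    rdd.take(3).foreach(println) // print the first three rows of the sample data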

Send Selection to Spark Console

It's convenient to preview a script's result by sending some code to the local console or the Livy Interactive Session Console (Scala). You can highlight some code in the Scala file, then right-click Send Selection To Spark Console. The selected code is sent to the console, and the result is displayed after the code. The console checks for errors, if any exist. (A small example follows the screenshot below.)

Send Selection to Spark Console.
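
For example, assuming the sample myApp code from earlier is open and a console session is active, you might highlight and send lines like these:

    // Lines you might highlight in myApp.scala and send to the console
    // (assumes an active console session):
    val rdd = sc.textFile("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")
    rdd.filter(s => s.split(",")(6).length() == 1).count() // count the matching rows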

Integrate with HDInsight Identity Broker (HIB)

Connect to your HDInsight ESP cluster with ID Broker (HIB)

You can follow the normal steps to sign in to Azure subscription to connect to your HDInsight ESP cluster with ID Broker (HIB). After sign-in, you'll see the cluster list in Azure Explorer. For more instructions, see Connect to your HDInsight cluster.

Run a Spark Scala application on an HDInsight ESP cluster with ID Broker (HIB)

You can follow the normal steps to submit job to HDInsight ESP cluster with ID Broker (HIB). Refer to Run a Spark Scala application on an HDInsight Spark cluster for more instructions.

We upload the necessary files to a folder named with your sign-in account, and you can see the upload path in the configuration file.

upload path in the configuration.

Spark console on an HDInsight ESP cluster with ID Broker (HIB)

You can run Spark Local Console(Scala) or run Spark Livy Interactive Session Console(Scala) on an HDInsight ESP cluster with ID Broker (HIB). Refer to Spark Console for more instructions.

Note

For the HDInsight ESP cluster with ID Broker (HIB), linking a cluster and debugging Apache Spark applications remotely aren't currently supported.

Reader-only role

When users submit a job to a cluster with reader-only role permission, Ambari credentials are required.

  1. Sign in with reader-only role account.

  2. From Azure Explorer, expand HDInsight to view HDInsight clusters that are in your subscription. The clusters marked Role:Reader only have reader-only role permission.

    IntelliJ Azure Explorer Role:Reader.

  3. Right-click the cluster with reader-only role permission. Select Link this cluster from the context menu to link the cluster. Enter the Ambari username and password.

    IntelliJ Azure Explorer link this cluster.

  4. If the cluster is linked successfully, HDInsight will be refreshed. The stage of the cluster will become linked.

    IntelliJ Azure Explorer linked dialog.

  1. Click the Jobs node; the Cluster Job Access Denied window pops up.

  2. Click Link this cluster to link the cluster.

    cluster job access denied dialog.

  1. Create an HDInsight Configuration. Then select Remotely Run in Cluster.

  2. Select a cluster that has reader-only role permission for Spark clusters (Linux only). A warning message shows. You can click Link this cluster to link the cluster.

    IntelliJ IDEA run/debug configuration create.

View storage accounts

  • For clusters with reader-only role permission, click the Storage Accounts node; the Storage Access Denied window pops up. You can click Open Azure Storage Explorer to open Storage Explorer.

    IntelliJ IDEA Storage Access Denied.

    IntelliJ IDEA Storage Access Denied button.

  • For linked clusters, click the Storage Accounts node; the Storage Access Denied window pops up. You can click Open Azure Storage to open Storage Explorer.

    IntelliJ IDEA Storage Access Denied2.

    IntelliJ IDEA Storage Access Denied2 button.

Convert existing IntelliJ IDEA applications to use Azure Toolkit for IntelliJ

You can convert the existing Spark Scala applications that you created in IntelliJ IDEA to be compatible with Azure Toolkit for IntelliJ. You can then use the plug-in to submit the applications to an HDInsight Spark cluster.

  1. For an existing Spark Scala application that was created through IntelliJ IDEA, open the associated .iml file.

  2. At the root level is a module element like the following text:

    <module org.jetbrains.idea.maven.project.MavenProjectsManager.isMavenModule="true" type="JAVA_MODULE" version="4">
    

    Edit the element to add UniqueKey="HDInsightTool" so that the module element looks like the following text:

    <module org.jetbrains.idea.maven.project.MavenProjectsManager.isMavenModule="true" type="JAVA_MODULE" version="4" UniqueKey="HDInsightTool">
    
  3. Save your changes. Your application should now be compatible with Azure Toolkit for IntelliJ. You can test it by right-clicking the project name in Project. The pop-up menu now has the option Submit Spark Application to HDInsight.

Clean up resources

If you're not going to continue to use this application, delete the cluster that you created with the following steps:

  1. Sign in to the Azure portal.

  2. In Search at the top of the page, type HDInsight.

  3. Select HDInsight clusters under Services.

  4. In the list of HDInsight clusters that appears, select the ... next to the cluster that you created for this article.

  5. Select Delete. Select Yes.

Azure portal deletes HDInsight cluster.

Errors and solution

If you get build failed errors like the following, unmark the src folder as Sources:

Screenshot showing the build failed.

Unmark the src folder as Sources to solve this issue:

  1. Navigate to File and select Project Structure.

  2. Select the Modules under the Project Settings.

  3. Select the src folder and unmark it as Sources.

  4. Click the Apply button and then click the OK button to close the dialog.

    Screenshot showing the unmark the src as sources.

Next steps

In this article, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in Scala, and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Now advance to the next article, which explains how to pull the data you registered in Apache Spark into a BI analytics tool such as Power BI.

