Intel DevCloud for oneAPI

kazi haque
7 min read · May 18, 2022


oneAPI — A New Era of Accelerated Computing

oneAPI is an open, cross-industry, standards-based, unified, multiarchitecture, multi-vendor programming model that delivers a common developer experience across accelerator architectures — for faster application performance, more productivity, and greater innovation. The oneAPI initiative encourages collaboration on the oneAPI specification and compatible oneAPI implementations across the ecosystem.

In this article, we are going to learn how to use oneAPI on the Intel DevCloud.

As a oneAPI Certified Instructor from Intel, I will walk through the complete steps for executing oneAPI programs on the Intel DevCloud.

Before jumping into the execution of a oneAPI program, let us first spend a few moments on the Intel DevCloud itself.

The Intel DevCloud is a development sandbox for learning to program cross-architecture applications with OpenVINO, with high-level design (HLD) tools (oneAPI, OpenCL, HLS), and with RTL.

Learn, prototype, test, and run your workloads for free on a cluster of the latest Intel® hardware and software.

Advantages of using Intel Dev Cloud:

  • Free access to Intel® oneAPI toolkits and components and the latest Intel® hardware
  • 220 GB of file storage
  • 192 GB RAM
  • 120 days of access (extensions available)
  • Terminal Interface (Linux*)
  • Microsoft Visual Studio* Code integration
  • Remote Desktop for Intel® oneAPI Rendering Toolkit
Fig 1.1

Now, Why oneAPI?

Because it gives us:

  • Freedom of choice for accelerated computing across multiple architectures: CPU, GPU, and FPGA
  • An open alternative to proprietary lock-in
  • Data Parallel C++ (DPC++) — an open, standards-based evolution of ISO C++ and Khronos SYCL*
  • Optimized libraries for API-based programming
  • Advanced analysis and debug tools
  • CUDA* source code migration
  • Additional support for OpenCL and RTL development on FPGA nodes

To access and run oneAPI-based programs on the Intel DevCloud, you first need to register for a DevCloud account through the enrollment page shown below for reference:

Fig 1.2

By clicking the “Enroll” button and filling in the required details, you receive the credentials needed to log in.

Once registered, users can start using the Intel DevCloud to execute oneAPI programs in several ways: through terminal access, IDE access, or by connecting directly to JupyterLab.

We will now discuss how to connect with Jupyter Notebook:

Use the Jupyter Notebook to learn how oneAPI can solve the challenges of programming in a heterogeneous world and to understand the Data Parallel C++ (DPC++) language and programming model.

Once registered and signed in with the appropriate credentials, navigate to the section shown below (Fig 1.3) at https://devcloud.intel.com/oneapi/get_started/ to connect to JupyterLab and the notebooks on the Intel DevCloud.

Fig 1.3

Once you click the “Launch JupyterLab” button, you are redirected to a notebook named “Welcome.ipynb” for the oneAPI program, as depicted in Fig 1.4.

Fig 1.4

This document covers the basics of JupyterLab access to the Intel DevCloud for oneAPI projects. Here the notebook runs on the Python 3.8 (Intel oneAPI) kernel.

The diagram (Fig 1.5) below illustrates the high-level organization of the DevCloud.

Fig 1.5

Some points to note while working in a Jupyter notebook on the Intel DevCloud:

Time limit:
Your JupyterLab session has a time limit. If your session runs out of time, you can start a new one by refreshing the page or going to jupyter.oneapi.devcloud.intel.com again. However, keep in mind that the contents of the notebook are not automatically saved when the session time runs out, so save your work! When the session ends, all running processes (the notebook itself, the kernels running in it, and any terminals) are terminated. If you want to run calculations that survive outside the notebook, use the job queue as described below.

Number of Cores:
Your Notebook is running on a powerful computing server, but other people may be running Notebooks on the same server. They cannot access your files, but you do share the pool of CPU cores. For heavy workloads (e.g., neural network training), you can get access to more computing power by submitting scripts to the job queue, as discussed below.

Amount of Memory:
Your Notebook is also sharing the computing server’s operating memory with other tenants. If you need more memory for calculations, use the job queue.

Run the code in the cell below to query the limits of your JupyterLab environment.

Code:

!echo "* How many seconds are left in my JupyterLab session?"
!qstat -f $PBS_JOBID | grep Walltime.Remaining

!echo "* How many logical CPUs do I have for the Notebook?"
!taskset -c -p $$

!echo "* How much RAM can I use in the Notebook?"
!/usr/local/bin/qstat -f $PBS_JOBID | grep vmem

Job Queue: The job queue is the only method for accessing the full capacity of the computing resources available on the DevCloud. This section explains how you can interact with the queue from the JupyterLab environment. You can also submit to the queue from a terminal session. A more detailed guide on queue usage is available on the Intel DevCloud website.
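
For quick reference, the sketch below shows the queue commands you will use most often. It is a minimal sketch, assuming the standard PBS/Torque tooling (qstat, qdel, pbsnodes) that the DevCloud queue is built on; run the lines in a notebook cell as shown, or drop the leading ! to use them in a terminal session.

Code:

!echo "* My jobs currently in the queue (S column: Q = queued, R = running):"
!qstat -u $USER

!echo "* Property sets advertised by the compute nodes:"
!pbsnodes | grep properties | sort | uniq -c

# To cancel a job that is no longer needed, pass its Job ID to qdel, e.g.:
# !qdel 1910494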

Creating a Job Script:
To submit a job to the queue, create a Bash script containing the commands that you want to run.
You can do this from the Notebook using the `%%writefile` magic.

The following example creates a job script called `hello-world-example`.
The line `cd $PBS_O_WORKDIR` changes the working directory to the directory where the script is located. Everything else runs in the Bash shell on the designated compute server.

Code:
%%writefile hello-world-example
cd $PBS_O_WORKDIR
echo "* Hello world from compute server `hostname`!"
echo "* The current directory is ${PWD}."
echo "* Compute server's CPU model and number of logical CPUs:"
lscpu | grep 'Model name\|^CPU(s)'
echo "* Python available to us:"
which python
python --version
echo "* The job can create files, and they will be visible back in the Notebook." > newfile.txt
sleep 10
echo "* Bye"
# Remember to have an empty line at the end of the file; otherwise the last command will not run

Output:

Writing hello-world-example

You should now see the file `hello-world-example` when you go to the tree menu, or if you run the `%ls` magic.

Code:

%ls

Output:

Fig 1.6

As we can see in the output (Fig 1.6), the file `hello-world-example` has been created.

Submitting a Job to the Queue:

Now you can submit this script as a job using the `qsub` command.

Code:

!qsub hello-world-example

Output:

1910494.v-qsvr-1.aidevcloud

You have submitted a job to the queue. You should see an output line that looks like “[numbers].v-qsvr-1.aidevcloud”, as above. The number at the front is the Job ID. We will be using this number to retrieve the output of the job.
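
While the job is waiting in the queue or running, you can check on it from a notebook cell. A minimal sketch, using the Job ID from the output above (replace it with your own):

Code:

!echo "* Status of all my jobs (Q = queued, R = running):"
!qstat -u $USER

!echo "* Full details of this particular job:"
!qstat -f 1910494

Once the job has finished, it disappears from the qstat listing and its output files appear in your working directory.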

Getting the result:

Once the job is completed, the resulting output and error streams (stdout and stderr) are placed in two separate text files. These output files have the following naming convention:

* stdout: [Job Name].o[Job ID]. Example: `hello-world-example.o12345`
* stderr: [Job Name].e[Job ID]. Example: `hello-world-example.e12345`

[Job Name] is either the script name, or a custom name — for example, the name specified by the `-N` parameter of `qsub`.

[Job ID] is the number you got from the output of the `qsub` command.
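
For illustration only (you do not need to run these to follow the rest of this walkthrough), here is a hedged sketch of how those optional `qsub` parameters change things. The job name my-hello and the gpu node property are assumptions for the example; the properties actually available to your account may differ.

Code:

!echo "* Resubmit under a custom name; output files become my-hello.o[Job ID] / my-hello.e[Job ID]:"
!qsub -N my-hello hello-world-example

!echo "* The same, but requesting a whole node with a particular property (illustrative):"
!qsub -N my-hello -l nodes=1:gpu:ppn=2 hello-world-example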

Let’s find the output file produced by the `hello-world-example` job by running the `%ls` magic again.

Code:

%ls hello-world-example*

Output:

hello-world-example.e1910494

To view this file, you can go to File -> Open… and click on the hello-world-example.o* file. Once you open the generated file, the output is depicted in Fig 1.7.

Fig 1.7

Alternatively, you can view the contents of the file inside JupyterLab using the %cat magic command. Run the cell below to view the result of the “hello world” job.

Code:

%cat hello-world-example.o*

Output (Fig 1.8):

Fig 1.8
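
The same job-script pattern can be reused for actual oneAPI work. Below is a minimal sketch of a script that loads the oneAPI environment on the compute node and lists the SYCL devices it can see. It assumes the toolkit's setvars.sh lives under /opt/intel/oneapi and that the sycl-ls utility from the DPC++ toolkit is available once that script is sourced, which is how the oneAPI nodes on the DevCloud are typically set up.

Code:

%%writefile list-sycl-devices
cd $PBS_O_WORKDIR
# Load the oneAPI environment (compilers, libraries, sycl-ls); --force allows
# re-initialization in case the environment was already set up on the node
source /opt/intel/oneapi/setvars.sh --force > /dev/null 2>&1
echo "* SYCL devices visible on compute server `hostname`:"
sycl-ls
# Remember to keep an empty line at the end of the file, as with hello-world-example

Submit it with `!qsub list-sycl-devices` (optionally adding `-l nodes=1:gpu:ppn=2` to target a GPU node) and read the resulting list-sycl-devices.o* file with %cat, exactly as in the hello-world example above.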

Final Note:

This document covered some of the basics of using the JupyterLab environment on the DevCloud.

JupyterLab is not the only way to access the DevCloud. You can also log in with an SSH client or a file transfer application based on the SSH protocol (e.g., WinSCP or FileZilla). This may be a more convenient access mode for advanced users who already have a code base developed and want to execute it on powerful compute resources; it will be covered in the next articles.

That’s a wrap! I hope you now have a clear idea of how to access the Intel DevCloud for executing oneAPI programs. I will come up with further articles discussing the other ways to access and explore it!

About me:

I am an Intel Certified Instructor for the oneAPI track and a oneAPI Innovator. Please explore the links below for my bio, and feel free to reach out to me with any queries.
