Interfaces for Vertex AI

Vertex AI offers a variety of interfaces through which users can interact with it and use its services. The only catch is that some services can be accessed through every interface, while others are available only through specific ones.

Vertex AI Console

The Vertex AI section of the Google Cloud Console is the GUI for using and interacting with Vertex AI and its services. Users can work with models, datasets, endpoints, and jobs. The Console is organized by the type of work it supports; for example, in the sub-section called Model Development, the user gets options to train a model, run experiments with it, and collect its metadata.

Below are images of the Vertex AI console, divided into these sub-parts for better understanding.

Welcome Part

The above section encourages the user to enable all the recommended APIs needed to use Vertex AI properly. It also lets the user view a tutorial on how to use Vertex AI if they need one.

Mostly used services section

Just below the Welcome part, the services most frequently used by developers are listed in rows. There are usually two rows containing six of the most-used Vertex AI services; clicking any one of them opens that particular service so the user can work with it as they want.

This part is the next row; it contains three more frequently used Vertex AI services.

Last Part

This is the bottom-most part of the Vertex AI console, where the user can select their region and prepare their dataset, train their models, or get batch predictions. All of the above-mentioned services require an API to be enabled first; without it, none of them will work.

Left Sub Menu with Detailed list of services of Vertex AI

Vertex AI Console also provides a left sub menu with a detailed list of services provided by Vertex AI.

Google Cloud Command Line Interface (CLI)

The Google Cloud Command Line Interface (the gcloud CLI) can also be used to perform tasks related to Vertex AI, either from a local terminal or from Cloud Shell, the browser-based shell built into the Google Cloud Console. With this approach the user works entirely through commands, so some familiarity with the specific gcloud commands is needed; otherwise it is hard to get much done.
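Before any Vertex AI commands will work, the CLI must be authenticated and pointed at a project. A minimal setup sketch is shown below; the project ID is a hypothetical placeholder (inside Cloud Shell, authentication is already handled):

```shell
# One-time setup for the gcloud CLI (skip inside Cloud Shell,
# which comes pre-authenticated).
gcloud auth login                          # opens a browser window to sign in
gcloud config set project my-demo-project  # hypothetical project ID
gcloud components update                   # keep the CLI components up to date
```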

Below are some of the commands which can be used in the CLI to work with these services. Note that the gcloud ai-platform command group targets the legacy AI Platform service, while the gcloud ai command group targets Vertex AI itself.

  • Command 1 –
gcloud ai-platform jobs list

The above command lists all of the AI Platform jobs in the developer's project.

  • Command 2 –
gcloud ai-platform jobs submit training JOB [optional flags] [-- USER_ARGS ...]

This command can be used to submit a training job; the user needs to provide the exact specification of the job they want to submit for training. The [optional flags] section can be any of the following –

 --async | --config | --enable-web-access | --help |
--job-dir | --kms-key | --kms-keyring |
--kms-location | --kms-project | --labels |
--master-accelerator | --master-image-uri |
--master-machine-type | --module-name |
--package-path | --packages |
--parameter-server-accelerator |
--parameter-server-count |
--parameter-server-image-uri |
--parameter-server-machine-type | --python-version |
--region | --runtime-version | --scale-tier |
--service-account | --staging-bucket | --stream-logs |
--use-chief-in-tf-config | --worker-accelerator |
--worker-count | --worker-image-uri |
--worker-machine-type
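As a sketch, a minimal invocation might look like the following; the job name, bucket, and package paths are hypothetical placeholders that would need to match a real project:

```shell
# Hypothetical training job: all names and paths are placeholders.
gcloud ai-platform jobs submit training my_training_job_001 \
  --region=us-central1 \
  --runtime-version=2.11 \
  --python-version=3.7 \
  --module-name=trainer.task \
  --package-path=./trainer \
  --job-dir=gs://my-demo-bucket/jobs/my_training_job_001 \
  --scale-tier=BASIC
```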
  • Command 3 –
gcloud ai-platform jobs submit prediction JOB --data-format=DATA_FORMAT --input-paths=INPUT_PATH,[INPUT_PATH,...] --output-path=OUTPUT_PATH --region=REGION (--model=MODEL | --model-dir=MODEL_DIR) [optional flags]

The above command is used to start a batch prediction job; here too, the user needs to provide the proper details of the job. The [optional flags] section can be any of the following, based on the user's requirements –

 --batch-size | --help | --labels | --max-worker-count |
--model | --model-dir | --runtime-version |
--signature-name | --version
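A concrete sketch of such a job is shown below; the job name, model name, and bucket paths are all hypothetical placeholders:

```shell
# Hypothetical batch prediction job: model name and bucket paths are placeholders.
gcloud ai-platform jobs submit prediction my_batch_pred_001 \
  --model=my_model \
  --data-format=text \
  --input-paths=gs://my-demo-bucket/inputs/* \
  --output-path=gs://my-demo-bucket/outputs/ \
  --region=us-central1
```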
  • Command 4 –
gcloud ai endpoints create --display-name=DISPLAY_NAME [optional flags]

This command is used to create an Endpoint. The user must provide the display name here, otherwise the command will throw an error.
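A minimal sketch of creating an endpoint follows; the display name and region are example values:

```shell
# Hypothetical endpoint: display name and region are placeholders.
gcloud ai endpoints create \
  --region=us-central1 \
  --display-name=my-demo-endpoint
```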

  • Command 5 –
gcloud ai models list

This command is used to list the models available in Vertex AI. After running the command, the CLI asks the user to enter the region in which they want to see the models; once the region is entered, it outputs the models available in that region.
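To skip the interactive prompt, the region can be passed directly as a flag; the region below is just an example value:

```shell
# Listing models non-interactively; the region value is an example.
gcloud ai models list --region=us-central1
```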

  • Command 6 –
gcloud ai-platform operations list

This command lists all the long-running operations that are part of the project, which helps in monitoring the progress of model training and deployment.

  • Command 7 –
gcloud ai custom-jobs create --display-name=DISPLAY_NAME (--config=CONFIG --worker-pool-spec=[WORKER_POOL_SPEC,...]) [optional flags]

This command is used to create a custom Vertex AI job with a specific configuration and display name.
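A sketch of a custom job using a single worker pool is shown below; the display name, machine type, and container image are hypothetical placeholders:

```shell
# Hypothetical custom job: display name, machine type and image are placeholders.
gcloud ai custom-jobs create \
  --region=us-central1 \
  --display-name=my-custom-job \
  --worker-pool-spec=replica-count=1,machine-type=n1-standard-4,container-image-uri=gcr.io/my-demo-project/trainer:latest
```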

  • Command 8 –
gcloud ai custom-jobs describe (CUSTOM_JOB : --region=REGION) [optional flags]

This command is used to describe a specific custom Vertex AI job. It returns detailed information about that custom job.
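For example, a job can be described by its numeric ID together with its region; the ID shown here is hypothetical:

```shell
# Describing a custom job by its numeric ID (the ID is a placeholder).
gcloud ai custom-jobs describe 1234567890123456789 \
  --region=us-central1
```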

Vertex AI Overview

Google Cloud developed a specific platform named Vertex AI, which provides the user with a single environment to train their machine learning model, interact with them, and discover already available machine learning models and AI applications. It also lets the user customize and improve their Large Language Models (LLMs) for their AI application. Vertex AI is a platform that brings data science, data engineering, and machine learning workflows under the same umbrella. Bringing everything under the same umbrella lets the teams collaborate easily on a project and use all the required tools in the same place without finding them elsewhere. They can also use the benefit of Google Cloud to scale and maintain their applications over the cloud.

Tools Provided for Training and Deployment

Vertex AI provides various tools for training and deployment purposes; some of them are listed below:

  • AutoML – AutoML, or Automated Machine Learning, is a tool provided by Vertex AI that lets the user train models on image, text, tabular, video and other data without writing any code or preparing datasets manually.
  • Custom Training – It gives the user full control over the training process of the ML model; it involves writing the code, choosing the framework, and choosing hyperparameters as required.
  • Generative AI – It lets the developer access Google Cloud's large collection of generative AI models of various types, including text, image, speech, and code models.
  • Model Deployment – Vertex AI allows developers to deploy their machine learning models as RESTful APIs, which makes it easy to integrate those models into applications.
  • Integration with MLOps – Vertex AI works with Google Cloud's MLOps capabilities for continuous integration and continuous deployment (CI/CD) and versioning of machine learning models.
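Because deployed models are exposed as REST endpoints, an online prediction request is an ordinary HTTPS call. A hedged sketch follows; the project ID, endpoint ID, and instance payload are all hypothetical:

```shell
# Hypothetical online prediction request to a deployed Vertex AI endpoint.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/my-demo-project/locations/us-central1/endpoints/1234567890:predict" \
  -d '{"instances": [{"feature_1": 1.0, "feature_2": 2.0}]}'
```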


Conclusion

...