Deploy and Test
- An endpoint refers to an API (Application Programming Interface) that allows you to interact with your deployed machine learning model. It provides a way for external applications, services, or users to send data to the model for inference (making predictions or classifications) and receive the model’s responses.
- Click on Deploy & Test -> Deploy to Endpoint.
- Container creation is needed only for manually trained (custom) models; AutoML models are served from a prebuilt container.
1. Define your endpoint
- Create a new endpoint if one does not already exist.
- Fill in the endpoint name and location (region).
- Access is of two types: Standard exposes the endpoint through a REST API, while Private restricts it to your private cloud (if purchased).
- Under Advanced, you can choose the encryption type: Google-managed encryption (the default) or customer-managed encryption.
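The endpoint options above map directly onto the body of the Vertex AI `endpoints.create` REST call. Below is a minimal sketch, assuming the `displayName` and `encryptionSpec.kmsKeyName` fields of the v1 REST resource; the endpoint name is a placeholder:

```python
import json

def endpoint_create_body(display_name, kms_key=None):
    """Build the JSON body for a Vertex AI endpoints.create REST call.

    Omitting kms_key keeps the default Google-managed encryption;
    passing a Cloud KMS key name selects customer-managed encryption.
    """
    body = {"displayName": display_name}
    if kms_key:
        body["encryptionSpec"] = {"kmsKeyName": kms_key}
    return body

print(json.dumps(endpoint_create_body("demo-endpoint"), indent=2))
```

The console fills in this body for you; seeing it spelled out makes clear that the encryption choice is just one optional field on the endpoint resource.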
2. Model Settings
- Select the traffic split; the percentages across all deployed models must sum to 100. (Details in the next slide.)
- The minimum number of compute nodes is the number of nodes that keep running even when there is no traffic. This can increase cost, but it avoids requests being dropped while new nodes initialize.
What is Traffic split in Vertex AI during Model deployment?
Traffic split refers to the distribution of inference requests (also known as traffic) across different versions of a deployed machine learning model. When you deploy multiple versions of a model, you can control how much traffic each version receives. For example, you might direct 80% of the traffic to the current production version and 20% to a new experimental version. In short, if you deploy multiple versions, distribute the traffic across them accordingly.
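To make the 80/20 idea concrete, here is a small, self-contained sketch of weighted request routing. The model names and request counts are illustrative only; Vertex AI performs this routing for you on the endpoint:

```python
import random

def route(traffic_split, rng):
    """Pick a deployed model id according to its traffic percentage.

    The percentages must sum to 100, mirroring the Vertex AI rule.
    """
    assert sum(traffic_split.values()) == 100, "split must total 100"
    ids = list(traffic_split)
    weights = [traffic_split[m] for m in ids]
    return rng.choices(ids, weights=weights, k=1)[0]

rng = random.Random(0)
split = {"model-v1": 80, "model-v2": 20}
hits = {m: 0 for m in split}
for _ in range(10_000):
    hits[route(split, rng)] += 1
print(hits)  # roughly 8000 vs 2000 over many requests
```

Over many requests the counts converge to the configured percentages, which is exactly the behavior an 80/20 split gives you between a production and an experimental version.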
- Explainability options are particularly important when dealing with complex models, such as deep neural networks, that might be considered "black-box" models due to their intricate internal workings. Explainability helps data scientists, developers, and stakeholders gain confidence in the model's decisions. It is recommended to enable it so that you can inspect explanations after deployment.
- Click on Feature attribution.
Model Monitoring in Production
Once the model is in production, it requires continuous monitoring to ensure its performance stays as expected. Vertex AI will send an email report to the given email address every x days, where the interval is configurable.
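The email address and reporting interval correspond to fields on the model-monitoring job resource. The sketch below builds such a request body; the field names follow the Vertex AI `ModelDeploymentMonitoringJob` REST resource as I understand it, but verify them against the current API reference before use:

```python
def monitoring_job_body(display_name, emails, interval_hours):
    """Sketch of a REST body for a model-monitoring job with email alerts.

    monitorInterval is a Duration expressed in seconds (e.g. "86400s").
    """
    return {
        "displayName": display_name,
        "modelMonitoringAlertConfig": {
            "emailAlertConfig": {"userEmails": emails}
        },
        "modelDeploymentMonitoringScheduleConfig": {
            "monitorInterval": f"{interval_hours * 3600}s"
        },
    }

print(monitoring_job_body("demo-monitor", ["team@example.com"], 24))
```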
- Click on Deploy.
- After successful deployment, we can test the model.
- To test the model, provide sample inputs and review the predicted output.
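Outside the console, the same test is a POST to the endpoint's `:predict` URL. The sketch below only builds the URL and JSON body; the project, region, endpoint id, and feature names are hypothetical placeholders, and the actual call (commented out) additionally needs an OAuth access token:

```python
import json

# Hypothetical identifiers for illustration only.
PROJECT, REGION, ENDPOINT_ID = "my-project", "us-central1", "1234567890"

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict"
)

# One instance per prediction; keys must match the model's input schema.
body = {
    "instances": [
        {"sepal_length": 5.1, "sepal_width": 3.5,
         "petal_length": 1.4, "petal_width": 0.2}
    ]
}

print(url)
print(json.dumps(body))
# Send it with an authenticated request, e.g.:
#   curl -X POST "$url" \
#     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#     -H "Content-Type: application/json" -d "$(python this_script.py)"
```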
Your model is ready now.
Calling the model through its API is a large topic in its own right, so we conclude here.
Build, Test, and Deploy Model With AutoML
The term “Automated Machine Learning,” or “AutoML,” refers to a set of tools and methods used to speed up the creation of machine learning models. It automates a variety of processes, including data preparation, feature selection, hyperparameter tuning, and model evaluation. By automating the intricate and time-consuming processes involved in model creation, AutoML platforms aim to make machine learning accessible to people and businesses without a strong background in data science.