Implementing a model  |  Machine Learning  |  Google for Developers

When implementing a model, start simple. Most of the work in ML is on the data side, so getting a full pipeline running for a complex model is harder than iterating on the model itself. After setting up your data pipeline and implementing a simple model that uses a few features, you can iterate on creating a better model.

Simple models provide a good baseline, even if you don't end up launching them. In fact, using a simple model is probably better than you think. Starting simple helps you determine whether or not a complex model is even justified.
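
As an illustration, a baseline can be as small as a linear model trained on a handful of features. The sketch below uses scikit-learn with hypothetical column names and file path; the point is only that the baseline is cheap to build and gives you a number that a more complex model has to beat.

```python
# Minimal baseline sketch (hypothetical dataset and column names).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("rides.csv")  # hypothetical dataset
features = ["distance_km", "hour_of_day", "day_of_week"]  # start with a few features
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["duration_min"], test_size=0.2, random_state=42)

# A simple linear model gives a reference point for judging whether
# a more complex model is worth the added pipeline and serving cost.
baseline = LinearRegression().fit(X_train, y_train)
print("Baseline MAE:", mean_absolute_error(y_test, baseline.predict(X_test)))
```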

Train your own model versus using an already trained model

Trained models exist for a variety of use cases and offer many advantages. However, trained models only work well when the label and features match your dataset exactly. For example, if a trained model uses 25 features and your dataset includes only 24 of them, the trained model will most likely make bad predictions.

Commonly, ML practitioners use matching subsections of inputs from a trained model for fine-tuning or transfer learning. If a trained model doesn't exist for your particular use case, consider using subsections from a trained model when training your own.
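
As a sketch of this idea, the snippet below reuses a pre-trained Keras image backbone as a frozen feature extractor and trains only a small head on new labels. The base model, input shape, and head are illustrative assumptions; your modality and architecture may differ.

```python
# Transfer-learning sketch (assumes TensorFlow/Keras; adapt to your use case).
import tensorflow as tf

# Reuse the pre-trained layers as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained weights

# Train only a small head on your own labels.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your own dataset
```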

For information on trained models, see

Monitoring

During problem framing, consider the monitoring and alerting infrastructure your ML solution needs.

Model deployment

In some cases, a newly trained model might be worse than the model currently in production. If it is, you'll want to prevent it from being released into production and get an alert that your automated deployment has failed.
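
One minimal sketch of such a guard, assuming you evaluate the candidate and production models on the same held-out data and have an alerting hook available (the metric name and tolerance below are hypothetical):

```python
# Deployment-gate sketch: block the release and alert if the candidate
# model is worse than the model currently serving in production.
def should_deploy(candidate_auc: float, production_auc: float,
                  tolerance: float = 0.005) -> bool:
    """Return True only if the candidate is at least as good as production."""
    return candidate_auc >= production_auc - tolerance

def deploy_or_alert(candidate_auc: float, production_auc: float, alert) -> bool:
    if should_deploy(candidate_auc, production_auc):
        print("Releasing candidate model to production.")
        return True
    alert(f"Automated deployment blocked: candidate AUC {candidate_auc:.4f} "
          f"is below production AUC {production_auc:.4f}")
    return False

# Example: deploy_or_alert(0.891, 0.904, alert=print) blocks the release.
```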

Training-serving skew

If any of the incoming features used for inference have values that fall outside the distribution range of the data used in training, you'll want to be alerted because it's likely the model will make poor predictions. For example, if your model was trained to predict temperatures for equatorial cities at sea level, then your serving system should alert you of incoming data with latitudes, longitudes, and/or altitudes outside the range the model was trained on. Conversely, the serving system should alert you if the model is making predictions that are outside the distribution range that was seen during training.
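
A rough sketch of such a check, assuming you record per-feature value ranges when you train the model (the feature names and bounds below are hypothetical):

```python
# Training-serving skew sketch: flag serving features whose values fall
# outside the ranges observed in the training data.
TRAINING_RANGES = {
    "latitude": (-10.0, 10.0),    # equatorial cities only
    "longitude": (-180.0, 180.0),
    "altitude_m": (0.0, 50.0),    # near sea level
}

def out_of_range_features(example: dict) -> list[str]:
    """Return the features whose serving value falls outside the training range."""
    bad = []
    for name, (low, high) in TRAINING_RANGES.items():
        value = example.get(name)
        if value is None or not (low <= value <= high):
            bad.append(name)
    return bad

example = {"latitude": 48.8, "longitude": 2.3, "altitude_m": 35.0}
skewed = out_of_range_features(example)
if skewed:
    print("Alert: features outside the training distribution:", skewed)
```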

Inference server

If you're providing inferences through an RPC system, you'll want to monitor the RPC server itself and get an alert if it stops providing inferences.
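
A minimal sketch of such a probe, assuming the server exposes an HTTP health endpoint (the URL and alert hook below are hypothetical; real RPC frameworks typically provide their own health-check mechanisms):

```python
# Liveness-probe sketch for an inference server.
import urllib.request

def check_inference_server(url: str, alert, timeout_s: float = 2.0) -> bool:
    """Ping the server's health endpoint and alert if it stops responding."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            if resp.status == 200:
                return True
            alert(f"Inference server returned HTTP {resp.status}")
    except OSError as err:  # connection refused, timeout, DNS failure, ...
        alert(f"Inference server unreachable: {err}")
    return False

# Run this on a schedule (e.g., every minute) from your monitoring system:
# check_inference_server("http://model-server.internal/healthz", alert=print)
```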


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2024-05-08 UTC.
