Module: tfm.optimization.optimizer_factory
Optimizer factory class.
Classes
class OptimizerFactory: Optimizer factory class.
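
A typical workflow is to build a learning-rate schedule and an optimizer from a single optimization config. The sketch below is not taken from this page: it assumes the tfm.optimization.OptimizationConfig class and the factory's build_learning_rate() / build_optimizer() methods described on the OptimizerFactory page, and the nested field names ('momentum', 'boundaries', 'values', 'warmup_steps') are assumptions based on the corresponding config classes.

import tensorflow_models as tfm

# Config keys must match the registries listed under "Other Members" below
# (e.g. 'sgd', 'stepwise', 'linear').
params = {
    'optimizer': {'type': 'sgd', 'sgd': {'momentum': 0.9}},
    'learning_rate': {
        'type': 'stepwise',
        'stepwise': {'boundaries': [10000, 20000],
                     'values': [0.1, 0.01, 0.001]},
    },
    'warmup': {'type': 'linear', 'linear': {'warmup_steps': 500}},
}
opt_config = tfm.optimization.OptimizationConfig(params)
opt_factory = tfm.optimization.OptimizerFactory(opt_config)
lr = opt_factory.build_learning_rate()       # a LearningRateSchedule
optimizer = opt_factory.build_optimizer(lr)  # a Keras optimizer instance
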
Functions
register_optimizer_cls(...): Register a custom optimizer class.
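
As a sketch, registering a custom optimizer might look like the following. It assumes the function takes the registry key followed by the optimizer class (check the linked source for the exact signature and for any flag selecting the legacy versus new registry); MyMomentumSGD is a hypothetical class used only for illustration.

import tensorflow as tf
import tensorflow_models as tfm

class MyMomentumSGD(tf.keras.optimizers.legacy.SGD):
    """Hypothetical optimizer used only to illustrate registration."""

# Add the class to the optimizer registry under the key 'my_momentum_sgd'
# so that OptimizerFactory can look it up by that name.
tfm.optimization.register_optimizer_cls('my_momentum_sgd', MyMomentumSGD)
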
Other Members

LEGACY_OPTIMIZERS_CLS
{
  'adafactor': 'Unimplemented',
  'adagrad': <class 'keras.src.optimizers.legacy.adagrad.Adagrad'>,
  'adam': <class 'keras.src.optimizers.legacy.adam.Adam'>,
  'adam_experimental': <class 'keras.src.optimizers.adam.Adam'>,
  'adamw': <class 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'>,
  'adamw_experimental': <class 'keras.src.optimizers.adamw.AdamW'>,
  'lamb': <class 'official.modeling.optimization.lamb.LAMB'>,
  'lars': <class 'official.modeling.optimization.lars.LARS'>,
  'rmsprop': <class 'keras.src.optimizers.legacy.rmsprop.RMSprop'>,
  'sgd': <class 'keras.src.optimizers.legacy.gradient_descent.SGD'>,
  'sgd_experimental': <class 'keras.src.optimizers.sgd.SGD'>,
  'slide': 'Unimplemented'
}

LR_CLS
{
  'cosine': <class 'official.modeling.optimization.lr_schedule.CosineDecayWithOffset'>,
  'exponential': <class 'official.modeling.optimization.lr_schedule.ExponentialDecayWithOffset'>,
  'polynomial': <class 'official.modeling.optimization.lr_schedule.PolynomialDecayWithOffset'>,
  'power': <class 'official.modeling.optimization.lr_schedule.DirectPowerDecay'>,
  'power_linear': <class 'official.modeling.optimization.lr_schedule.PowerAndLinearDecay'>,
  'power_with_offset': <class 'official.modeling.optimization.lr_schedule.PowerDecayWithOffset'>,
  'step_cosine_with_offset': <class 'official.modeling.optimization.lr_schedule.StepCosineDecayWithOffset'>,
  'stepwise': <class 'official.modeling.optimization.lr_schedule.PiecewiseConstantDecayWithOffset'>
}

NEW_OPTIMIZERS_CLS
{
  'adafactor': 'Unimplemented',
  'adagrad': <class 'keras.src.optimizers.adagrad.Adagrad'>,
  'adam': <class 'keras.src.optimizers.adam.Adam'>,
  'adam_experimental': <class 'keras.src.optimizers.adam.Adam'>,
  'adamw': <class 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'>,
  'adamw_experimental': <class 'keras.src.optimizers.adamw.AdamW'>,
  'lamb': <class 'official.modeling.optimization.lamb.LAMB'>,
  'lars': <class 'official.modeling.optimization.lars.LARS'>,
  'rmsprop': <class 'keras.src.optimizers.rmsprop.RMSprop'>,
  'sgd': <class 'keras.src.optimizers.sgd.SGD'>,
  'sgd_experimental': <class 'keras.src.optimizers.sgd.SGD'>,
  'slide': 'Unimplemented'
}

SHARED_OPTIMIZERS
{
  'adafactor': 'Unimplemented',
  'adam_experimental': <class 'keras.src.optimizers.adam.Adam'>,
  'adamw': <class 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'>,
  'adamw_experimental': <class 'keras.src.optimizers.adamw.AdamW'>,
  'lamb': <class 'official.modeling.optimization.lamb.LAMB'>,
  'lars': <class 'official.modeling.optimization.lars.LARS'>,
  'sgd_experimental': <class 'keras.src.optimizers.sgd.SGD'>,
  'slide': 'Unimplemented'
}

WARMUP_CLS
{
  'linear': <class 'official.modeling.optimization.lr_schedule.LinearWarmup'>,
  'polynomial': <class 'official.modeling.optimization.lr_schedule.PolynomialWarmUp'>
}
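
The keys in these registries are the valid 'type' strings for the optimizer, learning_rate, and warmup sections of an optimization config. For example, the sketch below pairs the 'adamw' optimizer with a 'cosine' schedule and 'linear' warmup; the nested field names ('weight_decay_rate', 'initial_learning_rate', 'decay_steps', 'warmup_steps') are assumptions based on the corresponding config classes and should be verified against the source.

import tensorflow_models as tfm

# Select registry entries by their keys and build the optimizer.
config = tfm.optimization.OptimizationConfig({
    'optimizer': {'type': 'adamw',
                  'adamw': {'weight_decay_rate': 0.01}},
    'learning_rate': {'type': 'cosine',
                      'cosine': {'initial_learning_rate': 1e-3,
                                 'decay_steps': 100000}},
    'warmup': {'type': 'linear',
               'linear': {'warmup_steps': 1000}},
})
factory = tfm.optimization.OptimizerFactory(config)
optimizer = factory.build_optimizer(factory.build_learning_rate())
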
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-02-02 UTC."],[],[],null,["# Module: tfm.optimization.optimizer_factory\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/models/blob/v2.15.0/official/modeling/optimization/optimizer_factory.py) |\n\nOptimizer factory class.\n\nClasses\n-------\n\n[`class OptimizerFactory`](../../tfm/optimization/OptimizerFactory): Optimizer factory class.\n\nFunctions\n---------\n\n[`register_optimizer_cls(...)`](../../tfm/optimization/register_optimizer_cls): Register customize optimizer cls.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Other Members ------------- ||\n|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| LEGACY_OPTIMIZERS_CLS | \u003cbr /\u003e { 'adafactor': 'Unimplemented', 'adagrad': \u003cclass 'keras.src.optimizers.legacy.adagrad.Adagrad'\u003e, 'adam': \u003cclass 'keras.src.optimizers.legacy.adam.Adam'\u003e, 'adam_experimental': \u003cclass 'keras.src.optimizers.adam.Adam'\u003e, 'adamw': \u003cclass 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'\u003e, 'adamw_experimental': \u003cclass 'keras.src.optimizers.adamw.AdamW'\u003e, 'lamb': \u003cclass 'official.modeling.optimization.lamb.LAMB'\u003e, 'lars': \u003cclass 'official.modeling.optimization.lars.LARS'\u003e, 'rmsprop': \u003cclass 'keras.src.optimizers.legacy.rmsprop.RMSprop'\u003e, 'sgd': \u003cclass 'keras.src.optimizers.legacy.gradient_descent.SGD'\u003e, 'sgd_experimental': \u003cclass 'keras.src.optimizers.sgd.SGD'\u003e, 'slide': 'Unimplemented' } \u003cbr /\u003e |\n| LR_CLS | \u003cbr /\u003e { 'cosine': \u003cclass 'official.modeling.optimization.lr_schedule.CosineDecayWithOffset'\u003e, 'exponential': \u003cclass 'official.modeling.optimization.lr_schedule.ExponentialDecayWithOffset'\u003e, 'polynomial': \u003cclass 'official.modeling.optimization.lr_schedule.PolynomialDecayWithOffset'\u003e, 'power': \u003cclass 'official.modeling.optimization.lr_schedule.DirectPowerDecay'\u003e, 'power_linear': \u003cclass 'official.modeling.optimization.lr_schedule.PowerAndLinearDecay'\u003e, 'power_with_offset': \u003cclass 'official.modeling.optimization.lr_schedule.PowerDecayWithOffset'\u003e, 'step_cosine_with_offset': \u003cclass 
'official.modeling.optimization.lr_schedule.StepCosineDecayWithOffset'\u003e, 'stepwise': \u003cclass 'official.modeling.optimization.lr_schedule.PiecewiseConstantDecayWithOffset'\u003e } \u003cbr /\u003e |\n| NEW_OPTIMIZERS_CLS | \u003cbr /\u003e { 'adafactor': 'Unimplemented', 'adagrad': \u003cclass 'keras.src.optimizers.adagrad.Adagrad'\u003e, 'adam': \u003cclass 'keras.src.optimizers.adam.Adam'\u003e, 'adam_experimental': \u003cclass 'keras.src.optimizers.adam.Adam'\u003e, 'adamw': \u003cclass 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'\u003e, 'adamw_experimental': \u003cclass 'keras.src.optimizers.adamw.AdamW'\u003e, 'lamb': \u003cclass 'official.modeling.optimization.lamb.LAMB'\u003e, 'lars': \u003cclass 'official.modeling.optimization.lars.LARS'\u003e, 'rmsprop': \u003cclass 'keras.src.optimizers.rmsprop.RMSprop'\u003e, 'sgd': \u003cclass 'keras.src.optimizers.sgd.SGD'\u003e, 'sgd_experimental': \u003cclass 'keras.src.optimizers.sgd.SGD'\u003e, 'slide': 'Unimplemented' } \u003cbr /\u003e |\n| SHARED_OPTIMIZERS | \u003cbr /\u003e { 'adafactor': 'Unimplemented', 'adam_experimental': \u003cclass 'keras.src.optimizers.adam.Adam'\u003e, 'adamw': \u003cclass 'official.modeling.optimization.legacy_adamw.AdamWeightDecay'\u003e, 'adamw_experimental': \u003cclass 'keras.src.optimizers.adamw.AdamW'\u003e, 'lamb': \u003cclass 'official.modeling.optimization.lamb.LAMB'\u003e, 'lars': \u003cclass 'official.modeling.optimization.lars.LARS'\u003e, 'sgd_experimental': \u003cclass 'keras.src.optimizers.sgd.SGD'\u003e, 'slide': 'Unimplemented' } \u003cbr /\u003e |\n| WARMUP_CLS | \u003cbr /\u003e { 'linear': \u003cclass 'official.modeling.optimization.lr_schedule.LinearWarmup'\u003e, 'polynomial': \u003cclass 'official.modeling.optimization.lr_schedule.PolynomialWarmUp'\u003e } \u003cbr /\u003e |\n\n\u003cbr /\u003e"]]