tf.keras.applications.MobileNet
Instantiates the MobileNet architecture.
tf.keras.applications.MobileNet(
    input_shape=None,
    alpha=1.0,
    depth_multiplier=1,
    dropout=0.001,
    include_top=True,
    weights='imagenet',
    input_tensor=None,
    pooling=None,
    classes=1000,
    classifier_activation='softmax'
)
Used in the notebooks

Used in the guide: Using the SavedModel format (https://www.tensorflow.org/guide/saved_model)

Reference:

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (https://arxiv.org/abs/1704.04861)
This function returns a Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see this page for detailed examples:
https://keras.io/api/applications/#usage-examples-for-image-classification-models

For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning:
https://keras.io/guides/transfer_learning/

Note: each Keras Application expects a specific kind of input preprocessing. For MobileNet, call keras.applications.mobilenet.preprocess_input on your inputs before passing them to the model. mobilenet.preprocess_input will scale input pixels between -1 and 1.
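A minimal end-to-end classification sketch, assuming pretrained ImageNet weights; "elephant.jpg" is a placeholder path for any local RGB image:

import numpy as np
import tensorflow as tf

# Load MobileNet with ImageNet weights and its classification head.
model = tf.keras.applications.MobileNet(weights="imagenet")

# "elephant.jpg" is a placeholder; any RGB image works, resized to 224x224.
img = tf.keras.utils.load_img("elephant.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)
x = np.expand_dims(x, axis=0)

# MobileNet expects inputs scaled to the [-1, 1] range.
x = tf.keras.applications.mobilenet.preprocess_input(x)

preds = model.predict(x)
# Decode the top-3 predicted ImageNet classes.
print(tf.keras.applications.mobilenet.decode_predictions(preds, top=3)[0])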
Args

input_shape:
    Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) with the "channels_last" data format, or (3, 224, 224) with the "channels_first" data format). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value. input_shape will be ignored if input_tensor is provided. Defaults to None.

alpha:
    Controls the width of the network. This is known as the width multiplier in the MobileNet paper.
    - If alpha < 1.0, proportionally decreases the number of filters in each layer.
    - If alpha > 1.0, proportionally increases the number of filters in each layer.
    - If alpha == 1, the default number of filters from the paper is used at each layer.
    Defaults to 1.0. See the sketch after this argument list for the effect of alpha on parameter count.

depth_multiplier:
    Depth multiplier for depthwise convolution. This is called the resolution multiplier in the MobileNet paper. Defaults to 1.

dropout:
    Dropout rate. Defaults to 0.001.

include_top:
    Boolean, whether to include the fully-connected layer at the top of the network. Defaults to True.

weights:
    One of None (random initialization), "imagenet" (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to "imagenet".

input_tensor:
    Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model. input_tensor is useful for sharing inputs between multiple different networks. Defaults to None.

pooling:
    Optional pooling mode for feature extraction when include_top is False.
    - None (default) means that the output of the model will be the 4D tensor output of the last convolutional block.
    - avg means that global average pooling will be applied to the output of the last convolutional block, so the output of the model will be a 2D tensor.
    - max means that global max pooling will be applied.

classes:
    Optional number of classes to classify images into, only to be specified if include_top is True and no weights argument is specified. Defaults to 1000.

classifier_activation:
    A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax".
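To make the width multiplier concrete, here is a small sketch (random weights, so nothing is downloaded) comparing parameter counts at a few alpha values; the values shown are just examples:

import tensorflow as tf

# Compare model size at a few width multipliers (alpha).
for a in (0.25, 0.5, 1.0):
    m = tf.keras.applications.MobileNet(alpha=a, weights=None)
    print(f"alpha={a}: {m.count_params():,} parameters")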
Returns

    A model instance.
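The arguments above combine naturally for transfer learning. Below is a hedged sketch that uses MobileNet as a frozen feature extractor (include_top=False, pooling="avg", reduced alpha); the 160x160 input size, the 10-class head, and the commented-out train_ds/val_ds datasets are illustrative assumptions, not part of this API:

import tensorflow as tf

# Feature-extraction backbone: no classification head, global average pooling,
# and a reduced width (alpha=0.5) for a smaller, faster model.
base = tf.keras.applications.MobileNet(
    input_shape=(160, 160, 3),  # any size >= 32 per side, 3 channels
    alpha=0.5,
    include_top=False,
    weights="imagenet",
    pooling="avg",
)
base.trainable = False  # freeze the pretrained weights

# Illustrative 10-class head on top of the pooled 2D features.
inputs = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.applications.mobilenet.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets assumed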