# Copying VMs between projects

This document describes how to copy your VM to a different project.

## Before you begin

- Review [Best practices for persistent disk snapshots](/compute/docs/disks/snapshot-best-practices#prepare_for_consistency) and prepare your boot disk for snapshots.
- If you haven't already, set up [authentication](/compute/docs/authentication). Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

  1. [Install](/sdk/docs/install) the Google Cloud CLI. After installation, [initialize](/sdk/docs/initializing) the Google Cloud CLI by running the following command:

     ```bash
     gcloud init
     ```

     If you're using an external identity provider (IdP), you must first [sign in to the gcloud CLI with your federated identity](/iam/docs/workforce-log-in-gcloud). If you installed the gcloud CLI previously, make sure you have the latest version by running `gcloud components update`.
  2. [Set a default region and zone](/compute/docs/gcloud-compute#set_default_zone_and_region_in_your_local_client).
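The authentication steps assume the gcloud CLI is available on your machine. A minimal portable check, in plain POSIX shell with no Google Cloud assumptions, before running any `gcloud` command:

```shell
# Minimal sketch: verify the gcloud CLI is installed before running the
# commands in this guide. 'command -v' is POSIX and works in any shell.
if command -v gcloud >/dev/null 2>&1; then
  gcloud_status="found"
else
  gcloud_status="missing"
fi
echo "gcloud CLI: ${gcloud_status}"
```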
### Required roles

To get the permissions that you need to copy VMs between projects, ask your administrator to grant you the [Compute Instance Admin (v1)](/iam/docs/roles-permissions/compute#compute.instanceAdmin.v1) (`roles/compute.instanceAdmin.v1`) IAM role on the project. For more information about granting roles, see [Manage access to projects, folders, and organizations](/iam/docs/granting-changing-revoking-access).

This predefined role contains the permissions required to copy VMs between projects. To see the exact permissions that are required, expand the **Required permissions** section:

#### Required permissions

The following permissions are required to copy VMs between projects:
- `compute.instances.create` on the project
- To use a custom image to create the VM: `compute.images.useReadOnly` on the image
- To use a snapshot to create the VM: `compute.snapshots.useReadOnly` on the snapshot
- To use an instance template to create the VM: `compute.instanceTemplates.useReadOnly` on the instance template
- To assign a [legacy network](/vpc/docs/legacy) to the VM: `compute.networks.use` on the project
- To specify a static IP address for the VM: `compute.addresses.use` on the project
- To assign an external IP address to the VM when using a legacy network: `compute.networks.useExternalIp` on the project
- To specify a subnet for your VM: `compute.subnetworks.use` on the project or on the chosen subnet
- To assign an external IP address to the VM when using a VPC network: `compute.subnetworks.useExternalIp` on the project or on the chosen subnet
- To set VM instance metadata for the VM: `compute.instances.setMetadata` on the project
- To set tags for the VM: `compute.instances.setTags` on the VM
- To set labels for the VM: `compute.instances.setLabels` on the VM
- To set a service account for the VM to use: `compute.instances.setServiceAccount` on the VM
- To create a new disk for the VM: `compute.disks.create` on the project
- To attach an existing disk in read-only or read-write mode: `compute.disks.use` on the disk
- To attach an existing disk in read-only mode: `compute.disks.useReadOnly` on the disk

You might also be able to get these permissions with [custom roles](/iam/docs/creating-custom-roles) or other [predefined roles](/iam/docs/roles-overview#predefined).
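Granting the predefined role can be done with the gcloud CLI. A sketch that composes the `gcloud projects add-iam-policy-binding` command and prints it for review rather than executing it; the project ID and user email are hypothetical placeholders:

```shell
# Hypothetical placeholder values; substitute your own before running.
PROJECT_ID="my-dest-project"
USER_EMAIL="alex@example.com"

# Compose the role-grant command. Printed instead of executed so it can be
# reviewed first; remove the echo indirection to actually run it.
grant_cmd="gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member=user:${USER_EMAIL} \
  --role=roles/compute.instanceAdmin.v1"
echo "${grant_cmd}"
```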
## Copy a VM to another project

1. In your source project, create a snapshot of the VM's boot disk using one of the following commands:

   ### Zonal boot disk

   If your VM has a zonal boot disk, create a snapshot using the following command:

   ```
   gcloud compute snapshots create SNAPSHOT_NAME \
       --source-disk SOURCE_DISK \
       --snapshot-type SNAPSHOT_TYPE \
       --source-disk-zone SOURCE_DISK_ZONE
   ```

   Replace the following:

   - SNAPSHOT_NAME: a name for the snapshot.
   - SOURCE_DISK: the name of the zonal Persistent Disk volume from which you want to create a snapshot.
   - SNAPSHOT_TYPE: the snapshot type, either `STANDARD` or `ARCHIVE`. If a snapshot type is not specified, a `STANDARD` snapshot is created. Choose `ARCHIVE` for more cost-efficient data retention.
   - SOURCE_DISK_ZONE: the zone of the zonal Persistent Disk volume from which you want to create a snapshot.
   ### Regional boot disk

   If your VM has a regional boot disk, create a snapshot using the following command:

   ```
   gcloud compute snapshots create SNAPSHOT_NAME \
       --source-disk SOURCE_DISK \
       --source-disk-region=SOURCE_DISK_REGION \
       --snapshot-type=SNAPSHOT_TYPE
   ```

   Replace the following:

   - SNAPSHOT_NAME: a name for the snapshot.
   - SOURCE_DISK: the name of the regional Persistent Disk volume from which you want to create a snapshot.
   - SOURCE_DISK_REGION: the region of the regional Persistent Disk volume from which you want to create a snapshot.
   - SNAPSHOT_TYPE: the snapshot type, either `STANDARD` or `ARCHIVE`. If a snapshot type is not specified, a `STANDARD` snapshot is created.
2. Create a custom image from the snapshot using the following command:

   ```
   gcloud compute images create IMAGE_NAME \
       --source-snapshot=SOURCE_SNAPSHOT \
       [--storage-location=LOCATION]
   ```

   Replace the following:

   - IMAGE_NAME: a name for the new image.
   - SOURCE_SNAPSHOT: the snapshot from which you want to create the image.
   - LOCATION: Optional: a flag that lets you designate the region or multi-region where your image is stored. For example, specify `us` to store the image in the `us` multi-region, or `us-central1` to store it in the `us-central1` region. If you don't make a selection, Compute Engine stores the image in the multi-region closest to your image's source location.
3. Optional: Share the custom image with users who create VMs in the destination project. For more information about sharing custom images, see [Sharing custom images within an organization](/compute/docs/images/managing-access-custom-images#share-images-within-organization).
4. In your destination project, create a VM from the custom image using the following command:

   ```
   gcloud compute instances create VM_NAME \
       --image-project IMAGE_PROJECT \
       IMAGE_FLAG \
       --subnet SUBNET
   ```

   Replace the following:

   - VM_NAME: a name for the VM.
   - IMAGE_PROJECT: the ID of the Google Cloud project that contains the image.
   - IMAGE_FLAG: specify one of the following:
     - Use the `--image IMAGE_NAME` flag to specify a custom image. For example, `--image my-debian-image-v2`.
     - If you created your custom images as part of a [custom image family](/compute/docs/images#custom-families), use the `--image-family IMAGE_FAMILY_NAME` flag to specify that custom image family. This creates the VM from the most recent, non-deprecated OS image and OS version in your custom image family. For example, if you specify `--image-family my-debian-family`, Compute Engine creates a VM from the latest OS image in your custom `my-debian-family` image family.

     **Note:** Compute Engine uses the default image family and project if you don't specify an image, for example `debian-10` and `debian-cloud`, respectively.
   - SUBNET: if the subnet and instance are in the same project, replace SUBNET with the name of a subnet that is in the same region as the instance. To specify a subnet in a Shared VPC network, replace SUBNET with a string of the form:

     ```
     projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME
     ```

     Replace the following:

     - HOST_PROJECT_ID: the project ID of the Shared VPC host project
     - REGION: the region of the subnet
     - SUBNET_NAME: the name of the subnet

     The region of the subnet for a Shared VPC network must also match the region containing the instance.
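The copy procedure can be sketched end to end as a dry run: each composed gcloud command is printed instead of executed, every value is a hypothetical placeholder, and the Shared VPC subnet variable shows the full resource-path string form described above.

```shell
# Hypothetical placeholder values; substitute your own before running.
SOURCE_DISK="my-vm"                # boot disk name in the source project
SOURCE_DISK_ZONE="us-central1-a"
SNAPSHOT_NAME="my-vm-snap"
IMAGE_NAME="my-vm-image"
IMAGE_PROJECT="source-project"     # project ID that contains the image
VM_NAME="my-vm-copy"
HOST_PROJECT_ID="host-project"     # Shared VPC host project
REGION="us-central1"
SUBNET_NAME="my-subnet"

# Shared VPC subnets are referenced by their full resource path.
SUBNET="projects/${HOST_PROJECT_ID}/regions/${REGION}/subnetworks/${SUBNET_NAME}"

# Dry-run helper: prints each command. Remove the echo to actually execute.
run() { echo "+ $*"; }

# Step 1: snapshot the zonal boot disk in the source project.
run gcloud compute snapshots create "${SNAPSHOT_NAME}" \
  --source-disk "${SOURCE_DISK}" --source-disk-zone "${SOURCE_DISK_ZONE}"

# Step 2: create a custom image from the snapshot.
run gcloud compute images create "${IMAGE_NAME}" \
  --source-snapshot="${SNAPSHOT_NAME}"

# Step 4: create the VM in the destination project from the shared image.
run gcloud compute instances create "${VM_NAME}" \
  --image-project "${IMAGE_PROJECT}" --image "${IMAGE_NAME}" \
  --subnet "${SUBNET}"
```

Swapping the `echo` out of `run` executes the commands in order; step 3 (sharing the image) is omitted here because it is an IAM change on the image rather than a compute command.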
What's next
-----------

- Customize the destination project's [VPC network](/vpc/docs/vpc).

Last updated 2025-08-26 UTC.