From bfbe471814b64f31867ff4c89ba18620277281fe Mon Sep 17 00:00:00 2001
From: Justin
Date: Tue, 15 Apr 2014 21:58:57 -0700
Subject: [PATCH 0001/1291] initial commit, first changes
---
running-coreos/cloud-providers/vultr/index.md | 81 +++++++++++++++++++
1 file changed, 81 insertions(+)
create mode 100644 running-coreos/cloud-providers/vultr/index.md
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
new file mode 100644
index 000000000..1cc4965ef
--- /dev/null
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -0,0 +1,81 @@
+---
+layout: docs
+title: Vultr VPS Provider
+category: running_coreos
+sub_category: cloud_provider
+weight: 10
+---
+
+# Running CoreOS {{site.brightbox-version}} on a Vultr VPS
+
+CoreOS is currently in heavy development and actively being tested. These
+instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com).
+
+
+## List Images
+
+You can find it by listing all images and grepping for CoreOS:
+
+```
+$ brightbox images list | grep CoreOS
+
+ id owner type created_on status size name
+ ---------------------------------------------------------------------------------------------------------
+ {{site.brightbox-id}} brightbox official 2013-12-15 public 5442 CoreOS {{site.brightbox-version}} (x86_64)
+ ```
+
+## Building Servers
+
+Before building the cluster, we need to generate a unique identifier for it, which is used by CoreOS to discover and identify nodes.
+
+You can use any random string so we’ll use the `uuid` tool here to generate one:
+
+```
+$ TOKEN=`uuid`
+
+$ echo $TOKEN
+53cf11d4-3726-11e3-958f-939d4f7f9688
+```
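+If the `uuid` tool isn't installed, any random UUID source works. A minimal sketch, assuming `uuidgen` (util-linux) with the kernel's random UUID file as a fallback — both are assumptions about your local system, not part of this guide:
+
+```
+# Generate a random token for node discovery. Falls back to the
+# kernel's UUID generator if uuidgen isn't available.
+TOKEN=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
+echo "$TOKEN"
+```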
+
+Then build three servers using the image, in the server group we created and specifying the token as the user data:
+
+```
+$ brightbox servers create -i 3 --type small --name "coreos" --user-data $TOKEN --server-groups grp-cdl6h {{site.brightbox-id}}
+
+Creating 3 small (typ-8fych) servers with image CoreOS {{site.brightbox-version}} ({{ site.brightbox-id }}) in groups grp-cdl6h with 0.05k of user data
+
+ id status type zone created_on image_id cloud_ip_ids name
+--------------------------------------------------------------------------------
+ srv-ko2sk creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
+ srv-vynng creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
+ srv-7tf5d creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
+--------------------------------------------------------------------------------
+```
+
+## Accessing the Cluster
+
+Those servers should take just a minute to build and boot. They automatically install your Brightbox Cloud ssh key on bootup, so you can ssh in straight away as the `core` user.
+
+If you’ve got ipv6 locally, you can ssh in directly:
+
+```
+$ ssh core@ipv6.srv-n8uak.gb1.brightbox.com
+The authenticity of host 'ipv6.srv-n8uak.gb1.brightbox.com (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
+RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
+Are you sure you want to continue connecting (yes/no)? yes
+
+Last login: Thu Oct 17 11:42:04 UTC 2013 from srv-4mhaz.gb1.brightbox.com on pts/0
+ ______ ____ _____
+ / ____/___ ________ / __ \/ ___/
+ / / / __ \/ ___/ _ \/ / / /\__ \
+/ /___/ /_/ / / / __/ /_/ /___/ /
+\____/\____/_/ \___/\____//____/
+core@srv-n8uak ~ $
+```
+
+If you don’t have ipv6, you’ll need to [create and map a Cloud IP](http://brightbox.com/docs/guides/cli/cloud-ips/) first.
+
+## Using CoreOS
+
+Now that you have a cluster bootstrapped, it is time to play around.
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
From db69f1852a8de3aac6c5cad56dd6553e84656663 Mon Sep 17 00:00:00 2001
From: Justin
Date: Tue, 15 Apr 2014 22:08:22 -0700
Subject: [PATCH 0002/1291] further tweaks, need to add specifics still
---
running-coreos/cloud-providers/vultr/index.md | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 1cc4965ef..750ec6701 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -39,18 +39,6 @@ $ echo $TOKEN
Then build three servers using the image, in the server group we created and specifying the token as the user data:
-```
-$ brightbox servers create -i 3 --type small --name "coreos" --user-data $TOKEN --server-groups grp-cdl6h {{site.brightbox-id}}
-
-Creating 3 small (typ-8fych) servers with image CoreOS {{site.brightbox-version}} ({{ site.brightbox-id }}) in groups grp-cdl6h with 0.05k of user data
-
- id status type zone created_on image_id cloud_ip_ids name
---------------------------------------------------------------------------------
- srv-ko2sk creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
- srv-vynng creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
- srv-7tf5d creating small gb1-a 2013-10-18 {{ site.brightbox-id }} coreos
---------------------------------------------------------------------------------
-```
## Accessing the Cluster
From 5c40ea85785a8a285d028b58afbbb9aabcaccafd Mon Sep 17 00:00:00 2001
From: Justin
Date: Tue, 15 Apr 2014 22:19:26 -0700
Subject: [PATCH 0003/1291] added links, further tweaks
---
running-coreos/cloud-providers/vultr/index.md | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 750ec6701..bc1062ccb 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -6,10 +6,9 @@ sub_category: cloud_provider
weight: 10
---
-# Running CoreOS {{site.brightbox-version}} on a Vultr VPS
+# Running CoreOS on a Vultr VPS
-CoreOS is currently in heavy development and actively being tested. These
-instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com).
+CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com).
## List Images
@@ -39,20 +38,22 @@ $ echo $TOKEN
Then build three servers using the image, in the server group we created and specifying the token as the user data:
+[Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/)
+[iPXE using scripts](http://ipxe.org/embed)
-## Accessing the Cluster
+
+## Accessing the VPS
Those servers should take just a minute to build and boot. They automatically install your Brightbox Cloud ssh key on bootup, so you can ssh in straight away as the `core` user.
-If you’ve got ipv6 locally, you can ssh in directly:
```
-$ ssh core@ipv6.srv-n8uak.gb1.brightbox.com
-The authenticity of host 'ipv6.srv-n8uak.gb1.brightbox.com (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
+$ ssh core@IP_HERE
+The authenticity of host 'IP_HERE (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
Are you sure you want to continue connecting (yes/no)? yes
-Last login: Thu Oct 17 11:42:04 UTC 2013 from srv-4mhaz.gb1.brightbox.com on pts/0
+Last login: Thu Oct 17 11:42:04 UTC 2013 from YOUR_IP on pts/0
______ ____ _____
/ ____/___ ________ / __ \/ ___/
/ / / __ \/ ___/ _ \/ / / /\__ \
@@ -61,7 +62,6 @@ Last login: Thu Oct 17 11:42:04 UTC 2013 from srv-4mhaz.gb1.brightbox.com on pts
core@srv-n8uak ~ $
```
-If you don’t have ipv6, you’ll need to [create and map a Cloud IP](http://brightbox.com/docs/guides/cli/cloud-ips/) first.
## Using CoreOS
From 6baf1dda330b6b14dca063d800cb141679179133 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 15 Apr 2014 12:09:39 -0700
Subject: [PATCH 0004/1291] feat(debugging): install debugging tools with
toolbox
---
.../install-debugging-tools/index.md | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 cluster-management/debugging/install-debugging-tools/index.md
diff --git a/cluster-management/debugging/install-debugging-tools/index.md b/cluster-management/debugging/install-debugging-tools/index.md
new file mode 100644
index 000000000..7ed9fb9a1
--- /dev/null
+++ b/cluster-management/debugging/install-debugging-tools/index.md
@@ -0,0 +1,64 @@
+---
+layout: docs
+slug: guides
+title: Install Debugging Tools
+category: cluster_management
+sub_category: debugging
+weight: 7
+---
+
+# Install Debugging Tools
+
+You can use common debugging tools like tcpdump or strace with Toolbox. Using the filesystem of a specified docker container, Toolbox will launch a container with full system privileges, including access to system PIDs, network interfaces and other global information. Inside of the toolbox, the machine's filesystem is mounted to `/media/root`.
+
+## Quick Debugging
+
+By default, Toolbox uses the stock Fedora docker container. To start using it, simply run:
+
+```
+/usr/bin/toolbox
+```
+
+You're now in the namespace of Fedora and can install any software you'd like via `yum`. For example, if you'd like to use `tcpdump`:
+
+```
+[root@srv-3qy0p ~]# yum install tcpdump
+[root@srv-3qy0p ~]# tcpdump -i ens3
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on ens3, link-type EN10MB (Ethernet), capture size 65535 bytes
+```
+
+### Specify a Custom Docker Image
+
+Create a `.toolboxrc` in the user's home folder to use a specific docker image:
+
+```
+$ cat .toolboxrc
+TOOLBOX_DOCKER_IMAGE=index.example.com/debug
+TOOLBOX_USER=root
+$ /usr/bin/toolbox
+Pulling repository index.example.com/debug
+...
+```
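+Creating the file can be scripted. A sketch, reusing the hypothetical `index.example.com/debug` image from the example above (a placeholder, not a real registry):
+
+```
+# Write a .toolboxrc pointing toolbox at a custom docker image.
+cat > "${HOME}/.toolboxrc" <<'EOF'
+TOOLBOX_DOCKER_IMAGE=index.example.com/debug
+TOOLBOX_USER=root
+EOF
+cat "${HOME}/.toolboxrc"
+```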
+
+## SSH Directly Into A Toolbox
+
+Advanced users can SSH directly into a toolbox by setting up an `/etc/passwd` entry:
+
+```
+useradd bob -m -p '*' -s /usr/bin/toolbox
+```
+
+To test, SSH as bob:
+
+```
+ssh bob@hostname.example.com
+
+ ______ ____ _____
+ / ____/___ ________ / __ \/ ___/
+ / / / __ \/ ___/ _ \/ / / /\__ \
+/ /___/ /_/ / / / __/ /_/ /___/ /
+\____/\____/_/ \___/\____//____/
+[root@srv-3qy0p ~]# yum install emacs
+[root@srv-3qy0p ~]# emacs /media/root/etc/systemd/system/docker.service
+```
From c2668073f8b40298c61783ecdea90150e80af79f Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Wed, 16 Apr 2014 10:41:16 -0700
Subject: [PATCH 0005/1291] churn(libvirt): Rename host from dock0 to coreos0
Makes it sound less dockerish which isn't even a part of this tutorial.
---
running-coreos/platforms/libvirt/index.md | 32 +++++++++++------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 4cc5875df..789bc7ce3 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -18,23 +18,23 @@ list][coreos-dev].
## Download the CoreOS image
-In this guide, the example virtual machine we are creating is called dock0 and
-all files are stored in /usr/src/dock0. This is not a requirement — feel free
+In this guide, the example virtual machine we are creating is called coreos0 and
+all files are stored in /usr/src/coreos0. This is not a requirement — feel free
to substitute that path if you use another one.
We start by downloading the most recent disk image:
- mkdir -p /usr/src/dock0
- cd /usr/src/dock0
+ mkdir -p /usr/src/coreos0
+ cd /usr/src/coreos0
wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_qemu_image.img.bz2
bunzip2 coreos_production_qemu_image.img.bz2
## Virtual machine configuration
-Now create /tmp/dock0.xml with the following contents:
+Now create /tmp/coreos0.xml with the following contents:
-  <name>dock0</name>
+  <name>coreos0</name>
   <memory>1048576</memory>
   <currentMemory>1048576</currentMemory>
   <vcpu>1</vcpu>
@@ -55,13 +55,13 @@ Now create /tmp/dock0.xml with the following contents:
   <emulator>/usr/bin/kvm</emulator>
-
+
-
+
@@ -93,15 +93,15 @@ You can change any of these parameters later.
Now create the metadata directory and import the XML as new VM into your libvirt instance:
- mkdir /usr/src/dock0/metadata
- virsh create /tmp/dock0.xml
+ mkdir /usr/src/coreos0/metadata
+ virsh create /tmp/coreos0.xml
### Network configuration
By default, CoreOS uses DHCP to get its network configuration, but in my
libvirt setup, I connect the VMs with a bridge to the host's eth0.
-Copy the following script to /usr/src/dock0/metadata/run:
+Copy the following script to /usr/src/coreos0/metadata/run:
#!/bin/bash
cat > /run/systemd/network/10-ens3.network <
Date: Wed, 16 Apr 2014 10:48:07 -0700
Subject: [PATCH 0006/1291] churn(libvirt): Move image location under /var
Using /usr/src for libvirt images isn't standard, use something that is
a little bit more typical.
---
running-coreos/platforms/libvirt/index.md | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 789bc7ce3..7579bd3ef 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -19,13 +19,13 @@ list][coreos-dev].
## Download the CoreOS image
In this guide, the example virtual machine we are creating is called coreos0 and
-all files are stored in /usr/src/coreos0. This is not a requirement — feel free
+all files are stored in /var/lib/libvirt/images/coreos0. This is not a requirement — feel free
to substitute that path if you use another one.
We start by downloading the most recent disk image:
- mkdir -p /usr/src/coreos0
- cd /usr/src/coreos0
+ mkdir -p /var/lib/libvirt/images/coreos0
+ cd /var/lib/libvirt/images/coreos0
wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_qemu_image.img.bz2
bunzip2 coreos_production_qemu_image.img.bz2
@@ -55,13 +55,13 @@ Now create /tmp/coreos0.xml with the following contents:
   <emulator>/usr/bin/kvm</emulator>
-
+
-
+
@@ -93,7 +93,7 @@ You can change any of these parameters later.
Now create the metadata directory and import the XML as new VM into your libvirt instance:
- mkdir /usr/src/coreos0/metadata
+ mkdir /var/lib/libvirt/images/coreos0/metadata
virsh create /tmp/coreos0.xml
### Network configuration
@@ -101,7 +101,7 @@ Now create the metadata directory and import the XML as new VM into your libvirt
By default, CoreOS uses DHCP to get its network configuration, but in my
libvirt setup, I connect the VMs with a bridge to the host's eth0.
-Copy the following script to /usr/src/coreos0/metadata/run:
+Copy the following script to /var/lib/libvirt/images/coreos0/metadata/run:
#!/bin/bash
cat > /run/systemd/network/10-ens3.network <
Date: Wed, 16 Apr 2014 10:56:33 -0700
Subject: [PATCH 0007/1291] fix(libvirt): Update config for modern qemu times.
These days 'qemu-kvm' is preferred over 'kvm' and don't bother locking
the VM to the old 0.15 hardware definition.
---
running-coreos/platforms/libvirt/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 7579bd3ef..93954d797 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -39,7 +39,7 @@ Now create /tmp/coreos0.xml with the following contents:
   <currentMemory>1048576</currentMemory>
   <vcpu>1</vcpu>
-    <type arch='x86_64' machine='pc-0.15'>hvm</type>
+    <type arch='x86_64'>hvm</type>
@@ -52,7 +52,7 @@ Now create /tmp/coreos0.xml with the following contents:
   <on_reboot>restart</on_reboot>
   <on_crash>restart</on_crash>
-  <emulator>/usr/bin/kvm</emulator>
+  <emulator>/usr/bin/qemu-kvm</emulator>
From f42029ff7e68bebc627dc3f0578413aa3eb514e0 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Wed, 16 Apr 2014 11:20:29 -0700
Subject: [PATCH 0008/1291] feat(libvirt): Rework documentation around using
config drive.
---
running-coreos/platforms/libvirt/index.md | 63 +++++++++++++----------
1 file changed, 36 insertions(+), 27 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 93954d797..30e2485ac 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -61,7 +61,7 @@ Now create /tmp/coreos0.xml with the following contents:
-
+
@@ -91,43 +91,53 @@ Now create /tmp/coreos0.xml with the following contents:
You can change any of these parameters later.
-Now create the metadata directory and import the XML as new VM into your libvirt instance:
+### Config drive
- mkdir /var/lib/libvirt/images/coreos0/metadata
- virsh create /tmp/coreos0.xml
+Now create a config drive file system to configure CoreOS itself:
-### Network configuration
+ mkdir -p /var/lib/libvirt/images/coreos0/configdrive/openstack/latest
+ touch /var/lib/libvirt/images/coreos0/configdrive/openstack/latest/user_data
-By default, CoreOS uses DHCP to get its network configuration, but in my
-libvirt setup, I connect the VMs with a bridge to the host's eth0.
+The `user_data` file may contain a script for a [cloud config][cloud-config]
+file. We recommend using ssh keys to log into the VM so at a minimum the
+contents of `user_data` should look something like this:
-Copy the following script to /var/lib/libvirt/images/coreos0/metadata/run:
+ #config-drive
- #!/bin/bash
- cat > /run/systemd/network/10-ens3.network <
Date: Wed, 16 Apr 2014 11:34:49 -0700
Subject: [PATCH 0009/1291] fix(libvirt): Add missing links
---
running-coreos/platforms/libvirt/index.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 30e2485ac..61be79eb2 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -107,6 +107,8 @@ contents of `user_data` should look something like this:
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
+
### Network configuration
By default, CoreOS uses DHCP to get its network configuration. In this
@@ -132,6 +134,8 @@ add a [networkd unit][systemd-network] to `user_data`:
Gateway=203.0.113.1
DNS=8.8.8.8
+[systemd-network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
+
## Virtual machine startup
From 04456759cee30d051a85590abd368ccdb6300788 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Wed, 16 Apr 2014 11:42:02 -0700
Subject: [PATCH 0010/1291] feat(openstack): Add note on using config drive.
---
running-coreos/platforms/openstack/index.md | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index a5f08bf23..ec1301d30 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -49,7 +49,12 @@ $ glance image-create --name CoreOS \
## Cloud-Config
-CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). We're going to provide our cloud-config to Openstack via the user-data flag. Our cloud-config will also contain SSH keys that will be used to connect to the instance. In order for this to work your OpenStack cloud provider must be running the OpenStack metadata service.
+CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config].
+We're going to provide our cloud-config to OpenStack via the user-data flag. Our cloud-config will also contain SSH keys that will be used to connect to the instance.
+In order for this to work, your OpenStack cloud provider must support [config drive][config-drive] or the OpenStack metadata service.
+
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
+[config-drive]: http://docs.openstack.org/user-guide/content/config-drive.html
The most common cloud-config for Openstack looks like:
@@ -87,6 +92,8 @@ nova boot \
--security-groups default coreos
```
+To use config drive, you may need to add `--config-drive=true` to the command above.
+
Your first CoreOS cluster should now be running. The only thing left to do is
find an IP and SSH in.
From c0d9fbb03114f365e895a61f45b362e2b2efc525 Mon Sep 17 00:00:00 2001
From: Justin Paine
Date: Wed, 16 Apr 2014 16:09:19 -0700
Subject: [PATCH 0011/1291] still need to proof read
---
running-coreos/cloud-providers/vultr/index.md | 49 ++++++++++++-------
1 file changed, 30 insertions(+), 19 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index bc1062ccb..875657059 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -8,43 +8,54 @@ weight: 10
# Running CoreOS on a Vultr VPS
-CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com).
+CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com), and that you have already generated an SSH key pair. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2).
-## List Images
+## Create the VPS
-You can find it by listing all images and grepping for CoreOS:
+Create a new VPS (any location, any size of your choice), and then for the "Operating System" selection make sure to select "Custom". Click "Place Order". Once you receive the welcome email the VPS will be ready to use (typically 2-3 minutes at most).
+## Create the script
+
+The simplest option to boot up CoreOS as quickly as possible is to load script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host is on your server for example).
+
+A sample script will look like this:
+
+```
+#!ipxe
+set coreos-version dev-channel
+set base-url http://storage.core-os.net/coreos/amd64-generic/${coreos-version}
+kernel ${base-url}/coreos_production_pxe.vmlinuz root=squashfs: state=tmpfs: sshkey="YOUR_PUBLIC_KEY_HERE"
+initrd ${base-url}/coreos_production_pxe_image.cpio.gz
+boot
```
-$ brightbox images list | grep CoreOS
+Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key.
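+Rather than pasting the key by hand, you can substitute it with `sed`. A sketch under stated assumptions — the iPXE script is saved locally as `script.txt`, and the short sample key below stands in for your real `~/.ssh/id_rsa.pub`:
+
+```
+# Write a demo script.txt containing the placeholder (the heredoc is
+# quoted so ${base-url} stays literal), then substitute a sample key.
+cat > script.txt <<'EOF'
+kernel ${base-url}/coreos_production_pxe.vmlinuz root=squashfs: state=tmpfs: sshkey="YOUR_PUBLIC_KEY_HERE"
+EOF
+PUBKEY="ssh-rsa AAAAB3Nza...example"
+sed -i "s|YOUR_PUBLIC_KEY_HERE|${PUBKEY}|" script.txt
+grep -c 'ssh-rsa' script.txt
+```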
- id owner type created_on status size name
- ---------------------------------------------------------------------------------------------------------
- {{site.brightbox-id}} brightbox official 2013-12-15 public 5442 CoreOS {{site.brightbox-version}} (x86_64)
- ```
+Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/) and [Embedded scripts for iPXE](http://ipxe.org/embed).
-## Building Servers
+## Getting CoreOS running
-Before building the cluster, we need to generate a unique identifier for it, which is used by CoreOS to discover and identify nodes.
+Once you received the email indicating the VPS was ready, click "Manage" for that VPS. Under "Server Actions" Click on "View Console" which will open a new window, and show the iPXE command line prompt.
-You can use any random string so we’ll use the `uuid` tool here to generate one:
+Type the following commands:
```
-$ TOKEN=`uuid`
-
-$ echo $TOKEN
-53cf11d4-3726-11e3-958f-939d4f7f9688
+iPXE> dhcp
```
+The output should end with "OK".
-Then build three servers using the image, in the server group we created and specifying the token as the user data:
+Then type:
-[Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/)
-[iPXE using scripts](http://ipxe.org/embed)
+```
+iPXE> chain http://PATH_TO_YOUR_SCRIPT/script.txt
+```
+Make sure to update ```PATH_TO_YOUR_SCRIPT``` with your correct path to script you created earlier.
+You'll see several lines fly past on the consoles the kernel is loaded, and then the initrd is loaded. CoreOS will automatically then boot up, and you'll end up at a login prompt.
## Accessing the VPS
-Those servers should take just a minute to build and boot. They automatically install your Brightbox Cloud ssh key on bootup, so you can ssh in straight away as the `core` user.
+You can now log in to CoreOS. Assuming the associated private key is in place on your local computer, you'll be logged in immediately. You may need to specify its location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file, see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
```
From e0ea8c663e42cd741252ab8ccf187e9be9081ed7 Mon Sep 17 00:00:00 2001
From: Justin Paine
Date: Wed, 16 Apr 2014 16:36:10 -0700
Subject: [PATCH 0012/1291] Update index.md
several wording changes.
---
running-coreos/cloud-providers/vultr/index.md | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 875657059..b819539e5 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -13,11 +13,11 @@ CoreOS is currently in heavy development and actively being tested. These instr
## Create the VPS
-Create a new VPS (any location, any size of your choice), and then for the "Operating System" selection make sure to select "Custom". Click "Place Order". Once you receive the welcome email the VPS will be ready to use (typically 2-3 minutes at most).
+Create a new VPS (any location and any size of your choice), and then for the "Operating System" value select "Custom". Click "Place Order". Once you receive the welcome email the VPS will be ready to use (typically less than 2-3 minutes).
## Create the script
-The simplest option to boot up CoreOS as quickly as possible is to load script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host is on your server for example).
+The simplest option to boot up CoreOS is to load a script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host it on your own server -- http://example.com/script.txt for example). Save this script as a text file (.txt extension).
A sample script will look like this:
@@ -29,13 +29,13 @@ kernel ${base-url}/coreos_production_pxe.vmlinuz root=squashfs: state=tmpfs: ssh
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
-Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key.
+Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key; it will begin with "ssh-rsa...".
Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/) and [Embedded scripts for iPXE](http://ipxe.org/embed).
## Getting CoreOS running
-Once you received the email indicating the VPS was ready, click "Manage" for that VPS. Under "Server Actions" Click on "View Console" which will open a new window, and show the iPXE command line prompt.
+Once you have received the email indicating the VPS is ready, click "Manage" for that VPS in your Vultr account area. Under "Server Actions", click "View Console", which will open a new window and show the iPXE command prompt.
Type the following commands:
@@ -47,16 +47,18 @@ The output should end with "OK".
Then type:
```
-iPXE> chain http://PATH_TO_YOUR_SCRIPT/script.txt
+iPXE> chain http://PATH_TO_YOUR_SCRIPT
```
-Make sure to update ```PATH_TO_YOUR_SCRIPT``` with your correct path to script you created earlier.
+Make sure to update ```PATH_TO_YOUR_SCRIPT``` with the correct path to the script you created earlier. For example, ```http://example.com/script.txt```.
-You'll see several lines fly past on the consoles the kernel is loaded, and then the initrd is loaded. CoreOS will automatically then boot up, and you'll end up at a login prompt.
+You'll see several lines scroll past on the console as the kernel is loaded, and then the initrd is loaded. CoreOS will automatically then boot up, and you'll end up at a login prompt.
## Accessing the VPS
You can now log in to CoreOS. Assuming the associated private key is in place on your local computer, you'll be logged in immediately. You may need to specify its location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file, see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
+SSH to the IP of your VPS, specifying the "core" user: ```ssh core@IP_HERE```
+
```
$ ssh core@IP_HERE
From 0e8ad7d6d22d963c4953ef4f434c19f7892bb248 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 16 Apr 2014 16:09:38 -0700
Subject: [PATCH 0013/1291] feat(adding-users): guide for adding users
---
.../setup/adding-users/index.md | 57 +++++++++++++++++++
1 file changed, 57 insertions(+)
create mode 100644 cluster-management/setup/adding-users/index.md
diff --git a/cluster-management/setup/adding-users/index.md b/cluster-management/setup/adding-users/index.md
new file mode 100644
index 000000000..9db207860
--- /dev/null
+++ b/cluster-management/setup/adding-users/index.md
@@ -0,0 +1,57 @@
+---
+layout: docs
+title: Adding Users
+category: cluster_management
+sub_category: setting_up
+weight: 7
+---
+
+# Adding Users
+
+You can create user accounts on a CoreOS machine manually with `useradd` or via cloud-config when the machine is created.
+
+## Add Users via Cloud-Config
+
+Managing users via cloud-config is preferred because it allows you to use the same configuration across many servers and the cloud-config file can be stored in a repo and versioned. In your cloud-config, you can specify many [different parameters]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#users) for each user. Here's an example:
+
+```
+#cloud-config
+
+users:
+ - name: elroy
+ passwd: $6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm...
+ groups:
+ - sudo
+ - docker
+ ssh-authorized-keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
+```
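+The `passwd` value above is a standard SHA-512 crypt hash. One hedged way to generate it, assuming OpenSSL 1.1.1 or newer (which added the `-6` option); "hunter2" is only a sample password:
+
+```
+# Produce a SHA-512 crypt ($6$...) hash suitable for the
+# cloud-config 'passwd' field.
+HASH=$(openssl passwd -6 'hunter2')
+echo "${HASH}"
+```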
+
+Check out the entire [Customize with Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide for the full details.
+
+## Add User Manually
+
+If you'd like to add a user manually, SSH to the machine and use the `useradd` tool. To create the user `user1`, run:
+
+```
+sudo useradd -p "*" -U -m user1 -G sudo
+```
+
+The `"*"` creates a user that cannot log in with a password but can log in via SSH key. `-U` creates a group for the user, `-G` adds the user to the existing `sudo` group and `-m` creates a home directory. If you'd like to add a password for the user, run:
+
+```
+$ sudo passwd user1
+New password:
+Re-enter new password:
+passwd: password changed.
+```
+
+To assign an SSH key, run:
+
+```
+update-ssh-keys -u user1 user1.pem
+```
+
+## Further Reading
+
+Read the [full cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/) guide to set up users and more.
\ No newline at end of file
From 0d307835fde4da22c1ed3a35ce0224efd342e910 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Wed, 16 Apr 2014 18:16:07 -0700
Subject: [PATCH 0014/1291] fix(notes-for-distributors): Complete doc on
checking images.
Document the preferred name for .DIGESTS.asc files and how to use the
file with sha512sum to verify the disk images.
---
.../distributors/notes-for-distributors/index.md | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 61a117b3f..443a9f90c 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -14,7 +14,12 @@ Images of CoreOS are hosted at `http://storage.core-os.net/coreos/amd64-usr/`. A
If you are importing images for use inside of your environment it is recommended that you import from a URL in the following format `http://storage.core-os.net/coreos/amd64-usr/${CHANNEL}/`. For example to grab the alpha OpenStack version of CoreOS you can import `http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number as well.
-It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The digests are simply the image URL with `.DIGESTS.asc` appended to it. You can verify the digest with `gpg --verify` after importing the signing key.
+It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The digests are simply the image URL, replacing `_image.img.bz2` with `.DIGESTS.asc`. You can verify the digest with `gpg --verify` after importing the signing key. Then the image itself can be verified based on one of the hashes in `.DIGESTS.asc`. For example:
+
+ wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
+ wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack.DIGESTS.asc
+ gpg --verify coreos_production_openstack.DIGESTS.asc
+ sha512sum -c coreos_production_openstack.DIGESTS.asc
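The checksum half of the steps above can be rehearsed locally without downloading anything. This sketch fabricates a stand-in file and a digest list in the same `HASH  FILENAME` format that `sha512sum -c` consumes (a real `.DIGESTS.asc` is additionally gpg-signed, which this demo skips):

```shell
# Self-contained demo of the sha512sum -c check; the image file and digest
# list here are fabricated stand-ins, not real CoreOS artifacts.
workdir=$(mktemp -d)
cd "$workdir"
echo "pretend image contents" > coreos_production_openstack_image.img.bz2
# sha512sum emits "HASH  FILENAME" lines, the format sha512sum -c expects
sha512sum coreos_production_openstack_image.img.bz2 > DIGESTS
sha512sum -c DIGESTS
```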
[signing-key]: {{site.url}}/security/image-signing-key
From 02e5c456810c409e498756ce864f28626c22ba35 Mon Sep 17 00:00:00 2001
From: Justin Paine
Date: Thu, 17 Apr 2014 15:04:49 -0700
Subject: [PATCH 0015/1291] added screenshot, clarify example path for script
---
running-coreos/cloud-providers/vultr/index.md | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index b819539e5..df1078501 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -8,16 +8,23 @@ weight: 10
# Running CoreOS on a Vultr VPS
-CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes you have an account at [Vultr.com](http://vultr.com). This also assumes you have a public + private key combination generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2).
+CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes:
+* You have an account at [Vultr.com](http://vultr.com).
+* Your iPXE script (referenced later in the guide) is hosted at ```http://example.com/script.txt```
+* You have a public + private key combination generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2).
## Create the VPS
-Create a new VPS (any location and any size of your choice), and then for the "Operating System" value select "Custom". Click "Place Order". Once you receive the welcome email the VPS will be ready to use (typically less than 2-3 minutes).
+Create a new VPS (any server type and location of your choice), and then for the "Operating System" select "Custom". Click "Place Order".
+
+
+
+Once you receive the welcome email the VPS will be ready to use (typically less than 2-3 minutes).
## Create the script
-The simplest option to boot up CoreOS is to load a script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host it on your own server -- http://example.com/script.txt for example). Save this script as a text file (.txt extension).
+The simplest option to boot up CoreOS is to load a script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host this file on your own server). Save this script as a text file (.txt extension).
A sample script will look like this:
@@ -47,9 +54,8 @@ The output should end with "OK".
then type:
```
-iPXE> chain http://PATH_TO_YOUR_SCRIPT
+iPXE> chain http://example.com/script.txt
```
-Make sure to update ```PATH_TO_YOUR_SCRIPT``` with your correct path to script you created earlier. For example, ```http://example.com/script.txt```
You'll see several lines scroll past on the console as the kernel is loaded, and then the initrd is loaded. CoreOS will automatically then boot up, and you'll end up at a login prompt.
From 2700f7ac91799968f5eeb210e5d3a35cbf4eba4b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 17 Apr 2014 15:06:04 -0700
Subject: [PATCH 0016/1291] fix(*): remove out of date banners
---
.../debugging/prevent-reboot-after-update/index.md | 6 +-----
cluster-management/scaling/adding-disk-space/index.md | 6 +-----
.../launching/getting-started-with-systemd/index.md | 4 ----
quickstart/index.md | 4 ----
4 files changed, 2 insertions(+), 18 deletions(-)
diff --git a/cluster-management/debugging/prevent-reboot-after-update/index.md b/cluster-management/debugging/prevent-reboot-after-update/index.md
index 588c24069..5753bb749 100644
--- a/cluster-management/debugging/prevent-reboot-after-update/index.md
+++ b/cluster-management/debugging/prevent-reboot-after-update/index.md
@@ -7,10 +7,6 @@ sub_category: debugging
weight: 8
---
-
-These instructions have been updated for our new images.
-
-
# Prevent Reboot After Update
This is a temporary workaround to disable auto updates. As we move out of the alpha there will be a nicer method.
@@ -58,4 +54,4 @@ sudo systemctl unmask update-engine-reboot-manager.service
sudo systemctl start update-engine-reboot-manager.service
```
-Have fun!
\ No newline at end of file
+Have fun!
diff --git a/cluster-management/scaling/adding-disk-space/index.md b/cluster-management/scaling/adding-disk-space/index.md
index 0e9af1be3..2f146bd34 100644
--- a/cluster-management/scaling/adding-disk-space/index.md
+++ b/cluster-management/scaling/adding-disk-space/index.md
@@ -7,10 +7,6 @@ sub_category: scaling
weight: 5
---
-
-These instructions have been updated for our new images.
-
-
# Adding Disk Space to Your CoreOS Machine
On a CoreOS machine, the operating system itself is mounted as a read-only partition at `/usr`. The root partition provides read-write storage by default and on a fresh install is mostly blank. The default size of this partition depends on the platform but it is usually between 3GB and 16GB. If more space is required simply extend the virtual machine's disk image and CoreOS will fix the partition table and resize the root partition to fill the disk on the next boot.
@@ -65,4 +61,4 @@ image to a VDI image and configuring a new virtual machine with it:
```
VBoxManage clonehd old.vmdk new.vdi --format VDI
VBoxManage modifyhd new.vdi --resize 20480
-```
\ No newline at end of file
+```
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index ab50eb865..2c41247ab 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -7,10 +7,6 @@ sub_category: launching
weight: 5
---
-
-These instructions have been updated for our new images.
-
-
# Getting Started with systemd
systemd is an init system that provides many powerful features for starting, stopping and managing processes. Within the CoreOS world, you will almost exclusively use systemd to manage the lifecycle of your Docker containers.
diff --git a/quickstart/index.md b/quickstart/index.md
index 89364e890..6ca69168c 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -4,10 +4,6 @@ title: CoreOS Quick Start
#redirect handled in alias_generator.rb
---
-
-These instructions have been updated for our new images.
-
-
# Quick Start
If you don't have a CoreOS machine running, check out the guides on running CoreOS on [Vagrant][vagrant-guide], [Amazon EC2][ec2-guide], [QEMU/KVM][qemu-guide], [VMware][vmware-guide] and [OpenStack][openstack-guide]. With either of these guides you will have a machine up and running in a few minutes.
From 91b52aa87e9577879dd17f3dd1f53aa04f44f4ef Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 17 Apr 2014 18:29:58 -0700
Subject: [PATCH 0017/1291] fix(bare-metal): First pass at updating PXE and
install instructions.
Beyond the basic updates to the new images a few changes include:
- Removed PXE instructions from the install-to-disk instructions.
- Removed instructions to email Carly.
- Document using cloud config with coreos-install.
---
.../bare-metal/booting-with-pxe/index.md | 62 +++++++++-------
.../bare-metal/installing-to-disk/index.md | 71 ++++++-------------
2 files changed, 58 insertions(+), 75 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index e4a35f6ae..abfb88949 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -23,11 +23,14 @@ If you need suggestions on how to set a server up, check out guides for [Debian]
### Setting up pxelinux.cfg
-When configuring the CoreOS pxelinux.cfg entry there are are three important kernel parameters:
+When configuring the CoreOS pxelinux.cfg there are a few kernel options that may be useful but all are optional:
-- **root=squashfs:**: tells CoreOS to run out of the squashfs root provided in the PXE initrd
-- **state=tmpfs:**: tells CoreOS to put all state into a tmpfs filesystem instead of searching for a disk labeled "STATE"
-- **sshkey**: the given SSH public key will be added to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
+- **rootfstype=tmpfs**: Use tmpfs for the writable root filesystem. This is the default behavior.
+- **rootfstype=btrfs**: Use btrfs in ram for the writable root filesystem. Use this option if you want to use docker without any further configuration. *Experimental*
+- **root**: Use a local filesystem for root instead of one of the two in-RAM options above. The filesystem must be formatted in advance but may be completely blank; it will be initialized on boot. The filesystem may be specified in any of the usual ways, including device, label, or UUID; e.g. `root=/dev/sda1`, `root=LABEL=ROOT` or `root=UUID=2c618316-d17a-4688-b43b-aa19d97ea821`.
+- **sshkey**: Add the given SSH public key to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
+- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
+- **coreos.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=ttyS0 coreos.autologin=ttyS0`. Without any argument it enables access on all consoles. *Experimental*
This is an example pxelinux.cfg file that assumes CoreOS is the only option.
You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after putting in your own SSH key.
@@ -42,26 +45,26 @@ display boot.msg
label coreos
menu default
kernel coreos_production_pxe.vmlinuz
- append initrd=coreos_production_pxe_image.cpio.gz root=squashfs: state=tmpfs: sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
+ append initrd=coreos_production_pxe_image.cpio.gz sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
```
-**Other Arguments**
-
-- **console**: If you need a login prompt to show up on another tty besides the default append a list of console arguments e.g. `console=tty0 console=ttyS0`
-
### Download the files
In the config above you can see that a kernel image and an initramfs file are needed.
-Download these two files into your tftp root:
+Download these two files into your tftp root.
+The extra `coreos_production_pxe.DIGESTS.asc` file can be used to [verify the others][verify-notes].
```
cd /var/lib/tftpboot
-wget http://storage.core-os.net/coreos/amd64-generic/dev-channel/coreos_production_pxe.vmlinuz
-wget http://storage.core-os.net/coreos/amd64-generic/dev-channel/coreos_production_pxe_image.cpio.gz
+wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.vmlinuz
+wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe_image.cpio.gz
+wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.DIGESTS.asc
```
PXE booted machines cannot currently update themselves.
-To update to the latest version of CoreOS download these two files again and reboot.
+To update to the latest version of CoreOS download/verify these files again and reboot.
+
+[verify-notes]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#importing-images
## Booting the Box
@@ -73,9 +76,7 @@ If something goes wrong you can direct questions to the [IRC channel][irc] or [m
This is localhost.unknown_domain (Linux x86_64 3.10.10+) 19:53:36
SSH host key: 24:2e:f1:3f:5f:9c:63:e5:8c:17:47:32:f4:09:5d:78 (RSA)
SSH host key: ed:84:4d:05:e3:7d:e3:d0:b9:58:90:58:3b:99:3a:4c (DSA)
-docker0: 172.17.42.1 fe80::e89f:b5ff:fece:979f
-lo: 127.0.0.1 ::1
-eth0: 10.0.2.15 fe80::5054:ff:fe12:3456
+ens0: 10.0.2.15 fe80::5054:ff:fe12:3456
localhost login:
```
@@ -91,19 +92,31 @@ ssh core@10.0.2.15
## Update Process
-Since our upgrade process requires a disk, this image does not have the option to update itself. Instead, the box simply needs to be rebooted and will be running the latest verison, assuming that the image served by the PXE server is regularly updated.
+Since our upgrade process requires a disk, this image does not have the option to update itself. Instead, the box simply needs to be rebooted and will be running the latest version, assuming that the image served by the PXE server is regularly updated.
## Installation
-CoreOS can be completely installed on disk or run from RAM but store user data on disk. Read more in our [Installing CoreOS guide](/docs/running-coreos/bare-metal/installing-to-disk).
+Once booted it is possible to [install CoreOS on a local disk][install-to-disk] or to just use local storage for the writable root filesystem while continuing to boot CoreOS itself via PXE.
+If you plan on using Docker we recommend using a local btrfs filesystem but ext4 is also available if supporting Docker is not required.
+For example, to setup a btrfs root filesystem on `/dev/sda`:
+
+```
+cfdisk -z /dev/sda
+touch "/usr.squashfs (deleted)" # work around a bug in mkfs.btrfs 3.12
+mkfs.btrfs -L ROOT /dev/sda1
+```
+
+And add `root=/dev/sda1` or `root=LABEL=ROOT` to the kernel options as documented above.
+
+[install-to-disk]: {{site.url}}/docs/running-coreos/bare-metal/installing-to-disk
## Adding a Custom OEM
-CoreOS has an [OEM partition][oem] that is used to setup networking, SSH keys, etc on boot.
-If you have site specific customizations you need to make the PXE image this is the perfect place to make it.
-Simply create a `./usr/share/oem/` directory as described on the [OEM page][oem] and append it to the CPIO:
+Similar to the [OEM partition][oem] in CoreOS disk images, PXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. Simply create a `./usr/share/oem/` directory containing `cloud-config.yml` and append it to the cpio:
```
+mkdir -p usr/share/oem
+cp cloud-config.yml ./usr/share/oem
gzip -d coreos_production_pxe_image.cpio.gz
find usr | cpio -o -A -H newc -O coreos_production_pxe_image.cpio
gzip coreos_production_pxe_image.cpio
@@ -114,14 +127,15 @@ Confirm the archive looks correct and has your `run` file inside of it:
```
gzip -dc coreos_production_pxe_image.cpio.gz | cpio -it
./
-newroot.squashfs
+usr.squashfs
usr
usr/share
usr/share/oem
-usr/share/oem/run
+usr/share/oem/cloud-config.yml
```
-[oem]: {{site.url}}/docs/oem/
+[oem]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#image-customization
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/
## Using CoreOS
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index af3fc7a83..fdf4fbf28 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -9,14 +9,9 @@ weight: 7
# Installing CoreOS to Disk
-There are two options for installation on bare metal:
+## Install Script
-- Use the installer and put a full CoreOS installation on disk
-- Set up a STATE partition and only store user data on disk and run CoreOS from RAM
-
-### Full Installation
-
-There is a simple installer that will destroy everything on the given target disk.
+There is a simple installer that will destroy everything on the given target disk and install CoreOS.
Essentially it downloads an image, verifies it with gpg and then copies it bit for bit to disk.
The script is self-contained and located [on Github here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install").
@@ -26,69 +21,43 @@ It is already installed if you are booting CoreOS via PXE but you can also use i
coreos-install -d /dev/sda
```
-You most likely will want to take your ssh authorized key files over to this new install too.
+When running on CoreOS the install script will attempt to install the same version. If you want to ensure you are installing the latest available version use the `-V` option:
```
-mount -o subvol=root /dev/sda9 /mnt/
-cp -Ra ~core/.ssh /mnt/home/core/
+coreos-install -d /dev/sda -V alpha
```
-### STATE Only Installation
-
-If you want to run CoreOS out of RAM but keep your containers and state on disk you will need to setup a STATE partition.
-For now this is a manual process.
+## Cloud Config
-First, add a single partition to your disk:
+By default there isn't a password or any other way to log into a fresh CoreOS system.
+The easiest way to configure accounts, add systemd units, and more is via cloud config.
+Jump over to the [docs to learn about the supported features][cloud-config].
+As an example, this will install an SSH key for the default `core` user:
```
-parted -a optimal /dev/sda
-mklabel gpt
-mkpart primary 1 100%
+#cloud-config
+
+ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```
-Next, format the disk and set the label:
+Pass this file to `coreos-install` via the `-c` option.
+It will be installed to `/var/lib/coreos-install/user_data` and evaluated on every boot.
```
-mkfs.ext4 /dev/sda1
-e2label /dev/sda1 STATE
+coreos-install -d /dev/sda -c ~/config
```
-Now you can remove the `state=tmpfs:` line from the PXE parameters and the next time you start the machine it will search for the disk and use it.
-
-## Hardware Support
-
-We are still working on the full set of hardware that we will be supporting.
-We have most of the common hardware working.
-If you run into issues ping on us #coreos or email [Carly][carly-email].
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
-[carly-email]: mailto:carly.stoughton+pxehardware@coreos.com
+## Manual Tweaks
-## Adding a Custom OEM
+If cloud config doesn't handle something you need to do, or you just want to take a look at the root btrfs filesystem before booting your new install, just mount the ninth partition:
-CoreOS has an [OEM partition][oem] that is used to setup networking, SSH keys, etc on boot.
-If you have site specific customizations you need to make the PXE image this is the perfect place to make it.
-Simply create a `./usr/share/oem/` directory as described on the [OEM page][oem] and append it to the CPIO:
-
-```
-gzip -d coreos_production_pxe_image.cpio.gz
-find usr | cpio -o -A -H newc -O coreos_production_pxe_image.cpio
-gzip coreos_production_pxe_image.cpio
```
-
-Confirm the archive looks correct and has your `run` file inside of it:
-
-```
-gzip -dc coreos_production_pxe_image.cpio.gz | cpio -it
-./
-newroot.squashfs
-usr
-usr/share
-usr/share/oem
-usr/share/oem/run
+mount -o subvol=root /dev/sda9 /mnt/
```
-[oem]: {{site.url}}/docs/oem/
-
## Using CoreOS
Now that you have a machine booted it is time to play around.
From 6ec6b85ddea68946a983e63e6d22f007c8a8bc90 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 17 Apr 2014 19:52:58 -0700
Subject: [PATCH 0018/1291] feat(installing-to-disk): Document all
coreos-install options
---
running-coreos/bare-metal/installing-to-disk/index.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index fdf4fbf28..daa35a21f 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -27,6 +27,14 @@ When running on CoreOS the install script will attempt to install the same versi
coreos-install -d /dev/sda -V alpha
```
+For reference here are the rest of the `coreos-install` options:
+
+ -d DEVICE Install CoreOS to the given device.
+ -V VERSION Version to install (e.g. alpha)
+ -o OEM OEM type to install (e.g. openstack)
+ -c CLOUD Insert a cloud-init config to be executed on boot.
+ -t TMPDIR Temporary location with enough space to download images.
+
## Cloud Config
By default there isn't a password or any other way to log into a fresh CoreOS system.
From 3410295ab113989a4e986a70b675561cb05d97ed Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 18 Apr 2014 10:53:41 -0700
Subject: [PATCH 0019/1291] fix(ipxe): update to usr image and add cloud-config
instructions
---
.../bare-metal/booting-with-ipxe/index.md | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 73d39b36a..beaff2aa6 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -21,14 +21,14 @@ To illustrate iPXE in action we will use qemu-kvm in this guide.
### Setting up the Boot Script
iPXE downloads a boot script from a publicly available URL.
-You will need to host this URL somewhere public and replace the example SSH key with your own.
+You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a [custom iPXE server](https://github.com/kelseyhightower/coreos-ipxe-server).
```
#!ipxe
-set coreos-version dev-channel
-set base-url http://storage.core-os.net/coreos/amd64-generic/${coreos-version}
-kernel ${base-url}/coreos_production_pxe.vmlinuz root=squashfs: state=tmpfs: sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
+set coreos-version alpha
+set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
+kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
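One easy mistake when hosting the boot script is dropping the `#!ipxe` shebang, without which iPXE will not treat the fetched file as a script. A quick local sanity check, using a throwaway copy of the script (illustrative only):

```shell
# Illustrative sanity check: write a throwaway copy of the boot script and
# confirm the first line is the "#!ipxe" shebang that chain-loading requires.
cat > /tmp/script.txt <<'EOF'
#!ipxe
set coreos-version alpha
set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
boot
EOF
head -n 1 /tmp/script.txt
```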
@@ -57,7 +57,7 @@ Immediately iPXE should download your boot script URL and start grabbing the images shown below:
```
${YOUR_BOOT_URL}... ok
-http://storage.core-os.net/coreos/amd64-generic/dev-channel/coreos_production_pxe.vmlinuz... 98%
+http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.vmlinuz... 98%
```
After a few moments of downloading CoreOS should boot normally.
@@ -68,7 +68,11 @@ Since our upgrade process requires a disk, this image does not have the option t
## Installation
-CoreOS can be completely installed on disk or run from RAM but store user data on disk. Read more in our [Installing CoreOS guide](/docs/running-coreos/bare-metal/installing-to-disk).
+CoreOS can be completely installed on disk or run from RAM but store user data on disk. Read more in our [Installing CoreOS guide]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#installation).
+
+## Adding a Custom OEM
+
+Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#adding-a-custom-oem).
## Using CoreOS
From 98011cb190b75100eb3bf1ee8fb744ae72716792 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 18 Apr 2014 11:46:39 -0700
Subject: [PATCH 0020/1291] fix(bare-metal/ipxe): fix broken links
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index beaff2aa6..6e1d6ba3a 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -74,6 +74,9 @@ CoreOS can be completely installed on disk or run from RAM but store user data o
Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#adding-a-custom-oem).
+[oem]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#image-customization
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/
+
## Using CoreOS
Now that you have a machine booted it is time to play around.
From 07c17d92d3358fe8484791fc286405ce4e0a6e0d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 18 Apr 2014 11:55:05 -0700
Subject: [PATCH 0021/1291] fix(vagrant): update clustering instructions
---
running-coreos/platforms/vagrant/index.md | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 082a54d4c..2145a8c17 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -18,7 +18,7 @@ You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev]
Vagrant is a simple-to-use command line virtual machine manager. There are
install packages available for Windows, Linux and OSX. Find the latest
installer on the [Vagrant downloads page][vagrant]. Be sure to get
-version 1.3.1 or greater.
+version 1.5 or greater.
[vagrant]: http://www.vagrantup.com/downloads.html
@@ -71,7 +71,8 @@ coreos:
## Startup CoreOS
-With Vagrant, you can start a single machine or an entire cluster. To start a cluster, edit `NUM_INSTANCES` in the Vagrantfile to three or more. The cluster will be automatically configured if you provided a discovery URL in the cloud-config.
+With Vagrant, you can start a single machine or an entire cluster. Launching a CoreOS cluster on Vagrant is as simple as configuring `$num_instances` in a `config.rb` file to 3 (or more!) and running `vagrant up`.
+Make sure you provide a fresh discovery URL in your `user-data` if you wish to bootstrap etcd in your cluster.
### Using Vagrant's default VirtualBox Provider
From 3a8ef28d09d2b93353ffc4c92b0f97b9b2f765d7 Mon Sep 17 00:00:00 2001
From: Justin Paine
Date: Fri, 18 Apr 2014 14:00:49 -0700
Subject: [PATCH 0022/1291] Update index.md
Replaced IP_HERE with just IP
---
running-coreos/cloud-providers/vultr/index.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index df1078501..35a4a429b 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -63,16 +63,16 @@ You'll see several lines scroll past on the console as the kernel is loaded, and
You can now login to CoreOS, assuming the associated private key is in place on your local computer you'll immediately be logged in. You may need to specify the specific location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
-SSH to the IP of your VPS, and specify the "core" user specifically: ```ssh core@IP_HERE```
+SSH to the IP of your VPS and log in as the "core" user: ```ssh core@IP```
```
-$ ssh core@IP_HERE
-The authenticity of host 'IP_HERE (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
+$ ssh core@IP
+The authenticity of host 'IP (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
Are you sure you want to continue connecting (yes/no)? yes
-Last login: Thu Oct 17 11:42:04 UTC 2013 from YOUR_IP on pts/0
+Last login: Thu Oct 17 11:42:04 UTC 2013 from 127.0.0.1 on pts/0
______ ____ _____
/ ____/___ ________ / __ \/ ___/
/ / / __ \/ ___/ _ \/ / / /\__ \
From 2e6295d0ed6a792a08d6f525e900bedcc79759c1 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 22 Apr 2014 10:02:06 -0700
Subject: [PATCH 0023/1291] fix(vultr): remove provider from title
---
running-coreos/cloud-providers/vultr/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 35a4a429b..d855a97c4 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -1,6 +1,6 @@
---
layout: docs
-title: Vultr VPS Provider
+title: Vultr VPS
category: running_coreos
sub_category: cloud_provider
weight: 10
From f615e2a0908718ac15904da61e24291bb45f7ad2 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 22 Apr 2014 10:32:33 -0700
Subject: [PATCH 0024/1291] fix(ipxe): remove git.io shortener references
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 6e1d6ba3a..b6feabb1a 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -33,7 +33,7 @@ initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
-An easy place to host this boot script is on https://gist.github.com and then shorten the raw URL with http://git.io.
+An easy place to host this boot script is on https://gist.github.com. Use the "raw" gist URL, which can be found by clicking the "〈 〉" icon near the top right of the text area.
### Booting iPXE
From 38e959d1f59011ca5a92a2d987357fa9c2f43715 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 22 Apr 2014 11:12:43 -0700
Subject: [PATCH 0025/1291] fix(ipxe): use pastie.org
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index b6feabb1a..2784d9273 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -33,7 +33,9 @@ initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
-An easy place to host this boot script is on https://gist.github.com. Use the "raw" gist URL, which can be found by clicking the "〈 〉" icon near the top right of the text area.
+An easy place to host this boot script is on http://pastie.org. Be sure to reference the "raw" version of the script, which is accessed by clicking on the clipboard in the top right.
+
+Note: the iPXE environment won't open https links, which means you can't use https://gist.github.com to store your script. Bummer, right?
### Booting iPXE
From 30496391cbd93870b2266b764b280c63cf60052d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 22 Apr 2014 11:19:30 -0700
Subject: [PATCH 0026/1291] fix(ipxe): make links clickable
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 2784d9273..046d24e91 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -33,9 +33,9 @@ initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
-An easy place to host this boot script is on http://pastie.org. Be sure to reference the "raw" version of the script, which is accessed by clicking on the clipboard in the top right.
+An easy place to host this boot script is on [http://pastie.org](http://pastie.org). Be sure to reference the "raw" version of the script, which is accessed by clicking on the clipboard in the top right.
-Note: the iPXE environment won't open https links, which means you can't use https://gist.github.com to store your script. Bummer, right?
+Note: the iPXE environment won't open https links, which means you can't use [https://gist.github.com](https://gist.github.com) to store your script. Bummer, right?
### Booting iPXE
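The hosting advice above boils down to "the boot script must be reachable over plain http". A quick shell sanity check before pointing iPXE at the URL (the pastie URL below is a made-up placeholder, not a real paste):

```shell
# Placeholder raw-script URL -- substitute the one your paste service gives you.
SCRIPT_URL="http://pastie.org/pastes/123456/download"

# iPXE in this setup only fetches plain-http URLs, so reject https early.
case "$SCRIPT_URL" in
  https://*) echo "error: iPXE cannot fetch https URLs" >&2; exit 1 ;;
esac

echo "scheme looks usable by iPXE: $SCRIPT_URL"
```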
From 5cb3b197a060070c85ed49a6f57c2beca1fd6b76 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 22 Apr 2014 10:13:20 -0700
Subject: [PATCH 0027/1291] fix(vagrant): add note about manually adding box
---
running-coreos/platforms/vagrant/index.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 2145a8c17..564ff85cc 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -134,6 +134,12 @@ vagrant box remove coreos-alpha vmware_fusion
vagrant box remove coreos-alpha virtualbox
```
+If you'd like to download the box separately, you can fetch the `.box` file from the URL contained in the Vagrantfile and add it manually:
+
+```
+vagrant box add coreos-alpha /path/to/downloaded.box
+```
+
## Using CoreOS
Now that you have a machine booted it is time to play around.
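If the box URL isn't obvious, it can be pulled straight out of the Vagrantfile. A hedged sketch (the sample Vagrantfile and URL below are illustrative; operate on the real Vagrantfile from coreos-vagrant):

```shell
# Write a minimal sample Vagrantfile for illustration only.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "coreos-alpha"
  config.vm.box_url = "http://example.com/coreos_production_vagrant.box"
end
EOF

# Extract the box URL from the box_url line.
BOX_URL=$(sed -n 's/.*box_url *= *"\([^"]*\)".*/\1/p' Vagrantfile)
echo "Box URL: $BOX_URL"

# Then download and register it manually (shown, not run here):
#   wget -O coreos.box "$BOX_URL"
#   vagrant box add coreos-alpha coreos.box
```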
From 97d587e17ab847eb6bb4a81ba8d3684500facfc4 Mon Sep 17 00:00:00 2001
From: Brian Waldon
Date: Tue, 22 Apr 2014 18:08:54 -0700
Subject: [PATCH 0028/1291] doc(coreos-cloudinit): Add note about
PXE+coreos-cloudinit
---
running-coreos/bare-metal/booting-with-pxe/index.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index abfb88949..445fdf08f 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -31,6 +31,9 @@ When configuring the CoreOS pxelinux.cfg there are a few kernel options that may
- **sshkey**: Add the given SSH public key to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
- **coreos.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=ttyS0 coreos.autologin=ttyS0`. Without any argument it enables access on all consoles. *Experimental*
+- **cloud-config-url**: CoreOS will attempt to download a cloud-config document and use it to provision your booted system. See the [coreos-cloudinit project][cloudinit] for more information.
+
+[cloudinit]: https://github.com/coreos/coreos-cloudinit
This is an example pxelinux.cfg file that assumes CoreOS is the only option.
You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after putting in your own SSH key.
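coreos-cloudinit only treats the downloaded document as cloud-config when its first line is `#cloud-config`, so it's worth checking the file before publishing it at the `cloud-config-url`. A minimal sketch (the unit list here is illustrative):

```shell
# Write a minimal cloud-config document.
cat > pxe-cloud-config.yml <<'EOF'
#cloud-config
coreos:
  units:
    - name: etcd.service
      command: start
EOF

# The first line must be exactly "#cloud-config".
head -n1 pxe-cloud-config.yml

# Serve it from the host named in cloud-config-url, e.g. (not run here):
#   python3 -m http.server 80
```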
From ee7823019aece136842f9a2aa753459707294f40 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 23 Apr 2014 16:56:50 -0700
Subject: [PATCH 0029/1291] fix(launching): update depreciated flags
---
.../launching/getting-started-with-systemd/index.md | 6 +++---
.../launching/launching-containers-fleet/index.md | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 2c41247ab..845c25cac 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -83,11 +83,11 @@ The full list is located on the [systemd man page](http://www.freedesktop.org/so
Let's put a few of these concepts together to register new units within etcd. Imagine we had another container running that would read these values from etcd and act upon them.
-We can use `ExecStart` to either create a container with the `docker run` command or start a pre-existing container with the `docker start -a` command. We need to account for both because you can't issue multiple docker run commands when specifying a `-name`. In either case we must leave the container in the foreground (i.e. don't run with `-d`) so systemd knows the service is running.
+We can use `ExecStart` to either create a container with the `docker run` command or start a pre-existing container with the `docker start -a` command. We need to account for both because you can't issue multiple docker run commands when specifying a `--name`. In either case we must leave the container in the foreground (i.e. don't run with `-d`) so systemd knows the service is running.
Since our container will be started in `ExecStart`, it makes sense for our etcd command to run as `ExecStartPost` to ensure that our container is started and functioning.
-When the service is told to stop, we need to stop the docker container using its `-name` from the run command. We also need to clean up our etcd key when the container exits or the unit is failed by using `ExecStopPost`.
+When the service is told to stop, we need to stop the docker container using its `--name` from the run command. We also need to clean up our etcd key when the container exits or the unit fails, using `ExecStopPost`.
```
[Unit]
@@ -96,7 +96,7 @@ After=etcd.service
After=docker.service
[Service]
-ExecStart=/bin/bash -c '/usr/bin/docker start -a apache || /usr/bin/docker run -name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND'
+ExecStart=/bin/bash -c '/usr/bin/docker start -a apache || /usr/bin/docker run --name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND'
ExecStartPost=/usr/bin/etcdctl set /domains/example.com/10.10.10.123:8081 running
ExecStop=/usr/bin/docker stop apache
ExecStopPost=/usr/bin/etcdctl rm /domains/example.com/10.10.10.123:8081
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 5db46f4c5..473fe3192 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -66,7 +66,7 @@ After=docker.service
Requires=docker.service
[Service]
-ExecStart=/usr/bin/docker run -name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
+ExecStart=/usr/bin/docker run --name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker stop apache
[X-Fleet]
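The `docker start -a apache || docker run --name apache …` line above relies on shell short-circuiting: the `run` branch only executes when no container named `apache` exists yet. A sketch of that pattern with stand-in functions (docker itself is not invoked here):

```shell
# Stand-in for "docker start -a NAME": succeeds only if state already exists.
start_container() { [ -f "/tmp/${1}.state" ]; }
# Stand-in for "docker run --name NAME ...": creates the state.
run_container()   { touch "/tmp/${1}.state"; }

rm -f /tmp/apache.state

# ExecStart's pattern: try to start an existing container, else create it.
# Running it twice is safe -- the second call hits the short-circuit.
start_container apache || run_container apache
start_container apache || run_container apache
```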
From 59f1317454150491624a4a630efa0d0d6b3312c1 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Apr 2014 12:09:01 -0700
Subject: [PATCH 0030/1291] fix(getting-started-etcd): move to correct category
---
.../getting-started-with-etcd/index.md | 4 ++--
quickstart/index.md | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
rename {cluster-management/setup => distributed-configuration}/getting-started-with-etcd/index.md (99%)
diff --git a/cluster-management/setup/getting-started-with-etcd/index.md b/distributed-configuration/getting-started-with-etcd/index.md
similarity index 99%
rename from cluster-management/setup/getting-started-with-etcd/index.md
rename to distributed-configuration/getting-started-with-etcd/index.md
index 16d8075a8..5d9ad1bf4 100644
--- a/cluster-management/setup/getting-started-with-etcd/index.md
+++ b/distributed-configuration/getting-started-with-etcd/index.md
@@ -2,8 +2,8 @@
layout: docs
slug: guides/etcd
title: Getting Started with etcd
-category: cluster_management
-sub_category: setting_up
+category: distributed_configuration
+sub_category: reading_writing
weight: 5
---
diff --git a/quickstart/index.md b/quickstart/index.md
index 6ca69168c..db407ef1a 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -37,8 +37,8 @@ curl -L http://127.0.0.1:4001/v1/keys/message
If you followed a guide to set up more than one CoreOS machine, you can SSH into another machine and retrieve this same value.
#### More Detailed Information
-View Complete Guide
-Read etcd API Docs
+View Complete Guide
+Read etcd API Docs
## Container Management with docker
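The `curl` read above returns a small JSON document from etcd's v1 keys API. A hedged sketch of pulling the `value` field out of a response of that shape, captured here as a literal so no running cluster is needed (a real client should use a proper JSON parser):

```shell
# A response of the shape etcd's v1 keys API returns for a GET.
RESPONSE='{"action":"get","key":"/message","value":"Hello world","index":3}'

# Crude sed extraction of the "value" field.
VALUE=$(printf '%s' "$RESPONSE" | sed -n 's/.*"value":"\([^"]*\)".*/\1/p')
echo "$VALUE"
```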
From ba7e87c24f26cc2acab4d5e52af5e5d4a737bb70 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Apr 2014 13:18:55 -0700
Subject: [PATCH 0031/1291] fix(quickstart): use correct urls
---
quickstart/index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index db407ef1a..71ff892e7 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -57,7 +57,7 @@ docker run -i -t busybox /bin/sh
```
#### More Detailed Information
-View Complete Guide
+View Complete Guide / Read docker Docs
## Process Management with systemd
@@ -110,6 +110,7 @@ sudo systemctl stop hello.service
```
#### More Detailed Information
+View Complete Guide / Read systemd Website
#### Chaos Monkey
From 283ef02405f4a566280de8e10db044fc422565f2 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Apr 2014 14:19:16 -0700
Subject: [PATCH 0032/1291] feat(pxe): add cloud-config example
---
.../bare-metal/booting-with-pxe/index.md | 20 +++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 445fdf08f..838b9d321 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -36,7 +36,7 @@ When configuring the CoreOS pxelinux.cfg there are a few kernel options that may
[cloudinit]: https://github.com/coreos/coreos-cloudinit
This is an example pxelinux.cfg file that assumes CoreOS is the only option.
-You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after putting in your own SSH key.
+You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after providing a cloud-config URL:
```
default coreos
@@ -48,9 +48,25 @@ display boot.msg
label coreos
menu default
kernel coreos_production_pxe.vmlinuz
- append initrd=coreos_production_pxe_image.cpio.gz sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
+ append initrd=coreos_production_pxe_image.cpio.gz cloud-config-url=http://example.com/pxe-cloud-config.yml
```
+Here's a common cloud-config example which should be located at the URL from above:
+
+```
+#cloud-config
+coreos:
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE=
+```
+
+You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+
### Download the files
In the config above you can see that a kernel image and an initramfs file are needed.
From c82283dd47b2c4afbac026f8120cff2381b04038 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Apr 2014 14:31:19 -0700
Subject: [PATCH 0033/1291] fix(pxe): incorrect indentation of authorized keys
---
running-coreos/bare-metal/booting-with-pxe/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 838b9d321..e13e0528a 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -61,8 +61,8 @@ coreos:
command: start
- name: fleet.service
command: start
- ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE=
+ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE=
```
You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
From 7b24f377f9d8d4a2333c979533745345ea7041c2 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Apr 2014 16:06:22 -0700
Subject: [PATCH 0034/1291] feat(vmware): add cloud-config details
---
running-coreos/platforms/vmware/index.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index f1c91e5b4..611bbc8de 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -57,6 +57,14 @@ The above step creates the following files in ../coreos/:
The last step uploads the files to your ESXi datastore and registers your VM. You can now tweak the VM settings, like memory and virtual cores, then power it on. These instructions were tested to deploy to an ESXi 5.5 host.
+## Cloud-Config
+
+Cloud-config can be specified by attaching a [config-drive]({{site.url}}/docs/cluster-management/setup/cloudinit-config-drive/) with the label `config-2`. This is commonly done through whatever interface allows for attaching CD-ROMs or new drives.
+
+Note that the config-drive standard was originally an OpenStack feature, which is why you'll see strings containing `openstack`. This file path needs to be retained, although CoreOS supports config-drive on all platforms.
+
+For more information on customization that can be done with cloud-config, head on over to the [cloud-config guide]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+
## Logging in
Networking can take a bit of time to come up under VMware and you will need to
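One common way to attach that config-drive under VMware is to build a small ISO with the OpenStack file layout and a `config-2` volume label, then connect it as a CD-ROM. A hedged sketch -- the key contents are illustrative, and `mkisofs`/`genisoimage` availability is an assumption:

```shell
# Build the OpenStack-style layout coreos-cloudinit expects on a config-drive.
mkdir -p /tmp/new-drive/openstack/latest
cat > /tmp/new-drive/openstack/latest/user_data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... example-key
EOF

# Create the ISO with the volume label "config-2" (shown, not run here):
#   mkisofs -R -V config-2 -o /tmp/configdrive.iso /tmp/new-drive
```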
From 3a12dc84d951d19962055dff717b0bfec8e9f8b7 Mon Sep 17 00:00:00 2001
From: Jari Sukanen
Date: Sat, 26 Apr 2014 10:07:21 +0000
Subject: [PATCH 0035/1291] fix(libvirt): update examples to match new
cloud-config
---
running-coreos/platforms/libvirt/index.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 61be79eb2..9364b9303 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -62,7 +62,7 @@ Now create /tmp/coreos0.xml with the following contents:
-
+
@@ -102,7 +102,7 @@ The `user_data` file may contain a script for a [cloud config][cloud-config]
file. We recommend using ssh keys to log into the VM so at a minimum the
contents of `user_data` should look something like this:
- #config-drive
+ #cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
@@ -117,7 +117,7 @@ on the host's eth0 and the local network. To configure a static address
add a [networkd unit][systemd-network] to `user_data`:
- #config-drive
+ #cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
From 6e7198e9458f244a2d987164cbc60f832a83c33d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 28 Apr 2014 13:17:48 -0700
Subject: [PATCH 0036/1291] feat(mounting-storage): instructions to use
ephemeral storage for docker
---
.../setup/mounting-storage/index.md | 45 +++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index 952e5b5c9..6496436f7 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -28,6 +28,51 @@ As you can see, it's pretty simple. You specify the attached device and where yo
It's important to note that [systemd requires](http://www.freedesktop.org/software/systemd/man/systemd.mount.html) mount units to be named after the "mount point directories they control". In our example above, we want our device mounted at `/media/ephemeral` so it must be named `media-ephemeral.mount`.
+## Use Attached Storage for Docker
+
+Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Be aware that some cloud providers treat certain disks as ephemeral, and you will lose all docker images stored on such a disk.
+
+We're going to bind mount a btrfs device to `/var/lib/docker`, where docker stores images. We can do this on the fly when the machine starts up with a oneshot unit that formats the drive and another one that runs afterwards to mount it. Be sure to hardcode the correct device or look for a device by label:
+
+```
+#cloud-config
+coreos:
+  units:
+ - name: media-ephemeral.mount
+ command: start
+ content: |
+ [Mount]
+ What=/dev/xvdb
+ Where=/media/ephemeral
+ Type=btrfs
+ - name: format-ephemeral.service
+ command: start
+ content: |
+ [Unit]
+ Description=Formats the ephemeral drive
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/sbin/wipefs -f /dev/xvdb
+ ExecStartPost=/usr/sbin/mkfs.btrfs -f /dev/xvdb
+ ExecStartPost=/usr/bin/mount -t btrfs /dev/xvdb /media/ephemeral
+ - name: docker-storage.service
+ command: start
+ content: |
+ [Unit]
+ Requires=format-ephemeral.service
+ Description=Mount ephemeral as /var/lib/docker
+ [Service]
+ Type=oneshot
+ ExecStartPre=/usr/bin/systemctl stop docker
+ ExecStartPre=/usr/bin/rm -rf /var/lib/docker/*
+ ExecStart=/usr/bin/mkdir -p /media/ephemeral/docker
+ ExecStart=/usr/bin/mkdir -p /var/lib/docker
+ ExecStartPost=/usr/bin/mount -o bind /media/ephemeral/docker /var/lib/docker
+ ExecStartPost=/usr/bin/systemctl start --no-block docker
+```
+
+Notice that we're starting all three of these units at the same time and using the power of systemd to work out the dependencies for us. In this case, `docker-storage.service` requires `format-ephemeral.service`, ensuring that our storage will always be formatted before it is bind mounted. Docker will refuse to start otherwise.
+
## Further Reading
Read the [full docs](http://www.freedesktop.org/software/systemd/man/systemd.mount.html) to learn about the available options. Examples specific to [EC2]({{site.url}}/docs/running-coreos/cloud-providers/ec2/#instance-storage), [Google Compute Engine]({{site.url}}/docs/running-coreos/cloud-providers/google-compute-engine/#additional-storage) and [Rackspace Cloud]({{site.url}}/docs/running-coreos/cloud-providers/rackspace/#mount-data-disk) can be used as a starting point.
\ No newline at end of file
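The mount-unit naming rule above (the unit name is derived from the mount point) is roughly "drop the leading slash, turn the remaining path separators into dashes, append `.mount`". A simplified shell sketch of that rule; the authoritative implementation is `systemd-escape -p --suffix=mount <path>`, which also escapes special characters this sketch ignores:

```shell
# Approximate systemd's mount-unit naming for plain ASCII paths.
mount_unit_name() {
  path="${1#/}"                              # drop the leading "/"
  echo "$(printf '%s' "$path" | tr '/' '-').mount"
}

mount_unit_name /media/ephemeral    # media-ephemeral.mount
mount_unit_name /var/lib/docker     # var-lib-docker.mount
```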
From f3d7039637877dbfe054d5bc4cae12b78eb0515c Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 28 Apr 2014 13:26:33 -0700
Subject: [PATCH 0037/1291] feat(customizing-docker): link to mounting storage
---
.../building/customizing-docker/index.md | 118 +++++++++---------
1 file changed, 61 insertions(+), 57 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index e3342b768..6dd81e77f 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -10,62 +10,6 @@ weight: 7
The docker systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
-## Enabling the docker Debug Flag
-
-First, copy the existing unit from the read-only file system into the read/write file system, so we can edit it:
-
-```
-cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
-```
-
-Edit the `ExecStart` line to add the -D flag:
-
-```
-ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd:// -D
-```
-
-Now let's tell systemd about the new unit and restart docker:
-
-```
-systemctl daemon-reload
-systemctl restart docker
-```
-
-To test our debugging stream, run a docker command and then read the systemd journal, which should contain the output:
-
-```
-docker ps
-journalctl -u docker
-```
-
-### Cloud-Config
-
-If you need to modify a flag across many machines, you can provide the new unit with cloud-config:
-
-```
-#cloud-config
-
-coreos:
- units:
- - name: docker.service
- command: restart
- content: |
- [Unit]
- Description=Docker Application Container Engine
- Documentation=http://docs.docker.io
- After=network.target
- Requires=docker-tcp.socket
-
- [Service]
- ExecStartPre=/bin/mount --make-rprivate /
- # Run docker but don't have docker automatically restart
- # containers. This is a job for systemd and unit files.
- ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd:// -D
-
- [Install]
- WantedBy=multi-user.target
-```
-
## Enable the Remote API on a New Socket
Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a tcp socket on port 4243.
@@ -124,4 +68,64 @@ coreos:
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl enable docker-tcp.socket
-```
\ No newline at end of file
+```
+
+## Use Attached Storage for Docker Images
+
+Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Check out the guide to [mounting storage to your CoreOS machine]({{site.url}}/docs/cluster-management/setup/mounting-storage/index.md#use-attached-storage-for-docker) for an example of how to bind mount storage into `/var/lib/docker`.
+
+## Enabling the docker Debug Flag
+
+First, copy the existing unit from the read-only file system into the read/write file system, so we can edit it:
+
+```
+cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
+```
+
+Edit the `ExecStart` line to add the -D flag:
+
+```
+ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd:// -D
+```
+
+Now let's tell systemd about the new unit and restart docker:
+
+```
+systemctl daemon-reload
+systemctl restart docker
+```
+
+To test our debugging stream, run a docker command and then read the systemd journal, which should contain the output:
+
+```
+docker ps
+journalctl -u docker
+```
+
+### Cloud-Config
+
+If you need to modify a flag across many machines, you can provide the new unit with cloud-config:
+
+```
+#cloud-config
+
+coreos:
+ units:
+ - name: docker.service
+ command: restart
+ content: |
+ [Unit]
+ Description=Docker Application Container Engine
+ Documentation=http://docs.docker.io
+ After=network.target
+ Requires=docker-tcp.socket
+
+ [Service]
+ ExecStartPre=/bin/mount --make-rprivate /
+ # Run docker but don't have docker automatically restart
+ # containers. This is a job for systemd and unit files.
+ ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd:// -D
+
+ [Install]
+ WantedBy=multi-user.target
+```
From 949fbc25de3d59dcd95851549a51f389ce182e3a Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 30 Apr 2014 13:39:04 -0700
Subject: [PATCH 0038/1291] fix(mounting-storage): simplify units
---
.../setup/mounting-storage/index.md | 29 +++++++------------
1 file changed, 10 insertions(+), 19 deletions(-)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index 6496436f7..fec070187 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -38,13 +38,6 @@ We're going to bind mount a btrfs device to `/var/lib/docker`, where docker stor
#cloud-config
coreos:
  units:
- - name: media-ephemeral.mount
- command: start
- content: |
- [Mount]
- What=/dev/xvdb
- Where=/media/ephemeral
- Type=btrfs
- name: format-ephemeral.service
command: start
content: |
@@ -52,23 +45,21 @@ coreos:
Description=Formats the ephemeral drive
[Service]
Type=oneshot
+ RemainAfterExit=yes
ExecStart=/usr/sbin/wipefs -f /dev/xvdb
- ExecStartPost=/usr/sbin/mkfs.btrfs -f /dev/xvdb
- ExecStartPost=/usr/bin/mount -t btrfs /dev/xvdb /media/ephemeral
- - name: docker-storage.service
+ ExecStart=/usr/sbin/mkfs.btrfs -f /dev/xvdb
+ - name: media-ephemeral.mount
command: start
content: |
[Unit]
+ Description=Mount ephemeral to /var/lib/docker
Requires=format-ephemeral.service
- Description=Mount ephemeral as /var/lib/docker
- [Service]
- Type=oneshot
- ExecStartPre=/usr/bin/systemctl stop docker
- ExecStartPre=/usr/bin/rm -rf /var/lib/docker/*
- ExecStart=/usr/bin/mkdir -p /media/ephemeral/docker
- ExecStart=/usr/bin/mkdir -p /var/lib/docker
- ExecStartPost=/usr/bin/mount -o bind /media/ephemeral/docker /var/lib/docker
- ExecStartPost=/usr/bin/systemctl start --no-block docker
+ Requires=docker.service
+ Before=docker.service
+ [Mount]
+ What=/dev/xvdb
+ Where=/var/lib/docker
+ Type=btrfs
```
Notice that we're starting all three of these units at the same time and using the power of systemd to work out the dependencies for us. In this case, `docker-storage.service` requires `format-ephemeral.service`, ensuring that our storage will always be formatted before it is bind mounted. Docker will refuse to start otherwise.
From 6267a60b224aa717900338ae01832561ae72df06 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 30 Apr 2014 15:43:51 -0700
Subject: [PATCH 0039/1291] fix(mounting-storage): correct dependencies
---
cluster-management/setup/mounting-storage/index.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index fec070187..bbff098cc 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -54,7 +54,6 @@ coreos:
[Unit]
Description=Mount ephemeral to /var/lib/docker
Requires=format-ephemeral.service
- Requires=docker.service
Before=docker.service
[Mount]
What=/dev/xvdb
From 3e3fb29b844dd3810b23db654e6118c983200217 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 30 Apr 2014 22:34:52 -0700
Subject: [PATCH 0040/1291] fix(customizing-docker): remove socket dependency
---
launching-containers/building/customizing-docker/index.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 6dd81e77f..50020d4d7 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -118,8 +118,6 @@ coreos:
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
After=network.target
- Requires=docker-tcp.socket
-
[Service]
ExecStartPre=/bin/mount --make-rprivate /
# Run docker but don't have docker automatically restart
From dae7c6eed950fd3b98882c1b097b8d9c2c6d5083 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 1 May 2014 09:22:51 -0700
Subject: [PATCH 0041/1291] fix(*): Document using gpg detached signatures
instead of DIGESTS
As of release 298 we generate .sig files which are much easier to deal
with than the oddball DIGESTS format.
---
running-coreos/bare-metal/booting-with-pxe/index.md | 7 +++++--
.../distributors/notes-for-distributors/index.md | 7 +++----
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index e13e0528a..7e7b5a55e 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -71,13 +71,16 @@ You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-ma
In the config above you can see that a kernel image and an initramfs file are needed.
Download these two files into your tftp root.
-The extra `coreos_production_pxe.DIGESTS.asc` file can be used to [verify the others][verify-notes].
+The extra `coreos_production_pxe.vmlinuz.sig` and `coreos_production_pxe_image.cpio.gz.sig` files can be used to [verify the downloaded files][verify-notes].
```
cd /var/lib/tftpboot
wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.vmlinuz
+wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.vmlinuz.sig
wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe_image.cpio.gz
-wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.DIGESTS.asc
+wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe_image.cpio.gz.sig
+gpg --verify coreos_production_pxe.vmlinuz.sig
+gpg --verify coreos_production_pxe_image.cpio.gz.sig
```
PXE booted machines cannot currently update themselves.
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 443a9f90c..90cef1307 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -14,12 +14,11 @@ Images of CoreOS are hosted at `http://storage.core-os.net/coreos/amd64-usr/`. A
If you are importing images for use inside of your environment it is recommended that you import from a URL in the following format `http://storage.core-os.net/coreos/amd64-usr/${CHANNEL}/`. For example to grab the alpha OpenStack version of CoreOS you can import `http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number as well.
-It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The digests are simply the image URL, replacing `_image.img.bz2` with `.DIGESTS.asc`. You can verify the digest with `gpg --verify` after importing the signing key. Then the image itself can be verified based on one of the hashes in `.DIGESTS.asc`. For example:
+It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack.DIGESTS.asc
- gpg --verify coreos_production_openstack.DIGESTS.asc
- sha512sum -c coreos_production_openstack.DIGESTS.asc
+ wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2.sig
+ gpg --verify coreos_production_openstack_image.img.bz2.sig
[signing-key]: {{site.url}}/security/image-signing-key
From d80e929495380018a3c4b11bc3dbcea216037e22 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fabr=C3=ADcio=20Godoy?=
Date: Wed, 9 Apr 2014 00:17:02 -0300
Subject: [PATCH 0042/1291] Added documentation to VirtualBox platform
Added document to run CoreOS on Oracle VirtualBox platform.
---
running-coreos/platforms/virtualbox/index.md | 123 +++++++++++++++++++
1 file changed, 123 insertions(+)
create mode 100644 running-coreos/platforms/virtualbox/index.md
diff --git a/running-coreos/platforms/virtualbox/index.md b/running-coreos/platforms/virtualbox/index.md
new file mode 100644
index 000000000..2047d4679
--- /dev/null
+++ b/running-coreos/platforms/virtualbox/index.md
@@ -0,0 +1,123 @@
+---
+layout: docs
+slug: virtualbox
+title: VirtualBox
+category: running_coreos
+sub_category: platforms
+weight: 7
+---
+
+# Running CoreOS on VirtualBox
+
+These instructions will walk you through running CoreOS on Oracle VM VirtualBox.
+
+## Building the virtual disk
+
+There is a script that simplifies building the VDI. It downloads a bare-metal
+image, verifies it with GPG and converts it to the VirtualBox format.
+
+The script is located on
+[Github](https://github.com/coreos/scripts/blob/master/contrib/create-coreos-vdi
+"create-coreos-vdi").
+The host running the script must have the VirtualBox tools installed.
+
+First, download the script and make it executable:
+
+```sh
+wget https://raw.github.com/coreos/scripts/master/contrib/create-coreos-vdi
+chmod +x create-coreos-vdi
+```
+
+When running the script, you can specify a destination location and the CoreOS version:
+
+```sh
+./create-coreos-vdi -d /data/VirtualBox/Templates
+```
+
+If you want a specific version of CoreOS, you can find which versions are
+available for download on
+[CoreOS storage](http://storage.core-os.net/coreos/amd64-usr/index.html). Then
+just pass the desired version to the script.
+
+```sh
+./create-coreos-vdi -V 298.0.0
+```
+
+After the script finishes successfully, the CoreOS image will be available at
+the specified destination location (or at the current location if none was
+given). The file name will be something like "*coreos_production_298.0.0.vdi*".
+
+## Creating a config-drive
+
+Cloud-config can be specified by attaching a
+[config-drive]({{site.url}}/docs/cluster-management/setup/cloudinit-config-drive/)
+with the label `config-2`. This is commonly done through whatever interface
+allows for attaching cd-roms or new drives.
+
+Note that the config-drive standard was originally an OpenStack feature, which
+is why you'll see strings containing `openstack`. This file path needs to be
+retained, although CoreOS supports config-drive on all platforms.
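For reference, the layout such a drive uses can also be assembled by hand. This is a minimal sketch, assuming `mkisofs` is available (on some distributions it is called `genisoimage`) and using an example hostname and paths:

```shell
# Minimal config-drive built by hand (sketch; example values).
mkdir -p /tmp/new-drive/openstack/latest
cat > /tmp/new-drive/openstack/latest/user_data <<'EOF'
#cloud-config
hostname: my_vm01
EOF
# The volume label must be config-2 for CoreOS to detect the drive.
if command -v mkisofs >/dev/null; then
  mkisofs -R -V config-2 -o my_vm01.iso /tmp/new-drive
fi
```

The resulting ISO can then be attached to the virtual machine like any other CD image.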
+
+For more information on customization that can be done with cloud-config, head
+on over to the
+[cloud-config guide]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+
+You need a config-drive to configure at least one SSH key to access the virtual
+machine. If you are in a hurry, you can create a basic config-drive with the
+following steps.
+
+```sh
+wget https://raw.github.com/coreos/scripts/master/contrib/create-basic-configdrive
+chmod +x create-basic-configdrive
+./create-basic-configdrive -H my_vm01 -S ~/.ssh/id_rsa.pub
+```
+
+This creates an ISO file named `my_vm01.iso` that will configure a virtual
+machine to accept your SSH key and set its hostname to my_vm01.
+
+## Deploying a new virtual machine on VirtualBox
+
+It is recommended to use the built image as a base image: clone it for each new
+virtual machine and resize the clone to the desired size.
+
+```sh
+VBoxManage clonehd coreos_production_298.0.0.vdi my_vm01.vdi
+# Resize virtual disk to 10 GB
+VBoxManage modifyhd my_vm01.vdi --resize 10240
+```
+
+At boot time, CoreOS will detect that the volume size changed and resize the
+filesystem accordingly.
+
+Open VirtualBox Manager and go to menu Machine > New. Type the desired machine
+name, then choose 'Linux' as the type and 'Linux 2.6 / 3.x (64 bit)' as the
+version.
+
+Next, choose the desired memory size; 1 GB is recommended for a smooth experience.
+
+Next, choose 'Use an existing virtual hard drive file' and find the new cloned
+image.
+
+Click the 'Create' button to create the virtual machine.
+
+Next, open the settings of the created virtual machine, click on the Storage
+tab and load the created config-drive into the CD/DVD drive.
+
+Click the 'OK' button and the virtual machine will be ready to be started.
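The GUI steps above can also be scripted with `VBoxManage`. The following is a rough sketch under example assumptions (VM name, OS type identifier, and the disk/ISO file names built earlier), not a verified recipe:

```shell
# Sketch only: CLI equivalent of the GUI steps (names are examples).
if command -v VBoxManage >/dev/null; then
  VBoxManage createvm --name my_vm01 --ostype Linux26_64 --register
  VBoxManage modifyvm my_vm01 --memory 1024
  VBoxManage storagectl my_vm01 --name "SATA" --add sata
  # Attach the cloned disk and the config-drive ISO.
  VBoxManage storageattach my_vm01 --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium my_vm01.vdi
  VBoxManage storageattach my_vm01 --storagectl "SATA" --port 1 --device 0 \
    --type dvddrive --medium my_vm01.iso
  VBoxManage startvm my_vm01 --type headless
fi
```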
+
+## Logging in
+
+Networking can take a bit of time to come up under VirtualBox and you will need
+to know the IP in order to connect. Press enter a few times at the login prompt
+and you'll see an IP address pop up.
+
+Now you can log in using your private SSH key.
+
+```sh
+ssh core@192.168.56.101
+```
+
+## Using CoreOS
+
+Now that you have a machine booted it is time to play around.
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig
+into [more specific topics]({{site.url}}/docs).
From 69a2af27dbab214b13c6325692f8bd1c4412e1ca Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 17 Apr 2014 16:06:43 -0700
Subject: [PATCH 0043/1291] feat(update-strategies): explain update strategies
---
.../setup/update-strategies/index.md | 77 +++++++++++++++++++
1 file changed, 77 insertions(+)
create mode 100644 cluster-management/setup/update-strategies/index.md
diff --git a/cluster-management/setup/update-strategies/index.md b/cluster-management/setup/update-strategies/index.md
new file mode 100644
index 000000000..4cfa7150a
--- /dev/null
+++ b/cluster-management/setup/update-strategies/index.md
@@ -0,0 +1,77 @@
+---
+layout: docs
+title: Update Strategies
+category: cluster_management
+sub_category: setting_up
+weight: 7
+---
+
+# Update Strategies
+
+The overarching goal of CoreOS is to secure the internet's backend infrastructure. We believe that [automatically updating]({{site.url}}/using-coreos/updates) the operating system is one of the best tools to achieve this goal.
+
+We realize that each CoreOS cluster has a unique tolerance for risk and the operational needs of your applications are complex. In order to meet everyone's needs, there are four update strategies that we have developed based on feedback during our alpha period.
+
+It's important to note that updates are always downloaded to the passive partition when they become available. A reboot is the last step of the update, where the active and passive partitions are swapped. These strategies control how that reboot occurs:
+
+| Strategy | Description |
+|--------------------|-------------|
+| `best-effort` | Default. If etcd is running, `etcd-lock`, otherwise simply `reboot`. |
+| `etcd-lock` | Reboot after first taking a distributed lock in etcd. |
+| `reboot` | Reboot immediately after an update is applied. |
+| `off` | Do not reboot after updates are applied. |
+
+## Strategy Options
+
+### Best Effort
+
+The default setting is for CoreOS to make a `best-effort` to determine if the machine is part of a cluster. Currently this logic is very simple: if etcd has started, assume that the machine is part of a cluster.
+
+If so, use the `etcd-lock` strategy.
+
+Otherwise, use the `reboot` strategy.
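Expressed as a sketch (illustrative only, not locksmith's actual implementation), the decision is roughly:

```shell
# Illustrative only: approximate best-effort by checking for a running etcd.
if pgrep -x etcd >/dev/null 2>&1; then
  echo "strategy: etcd-lock"
else
  echo "strategy: reboot"
fi
```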
+
+### etcd-Lock
+
+The `etcd-lock` strategy mandates that each machine acquire and hold a reboot lock before it is allowed to reboot. The main goal behind this strategy is to allow for an update to be applied to a cluster quickly, without losing the quorum membership in etcd or rapidly reducing capacity for the services running on the cluster. The reboot lock is held until the machine releases it after a successful update.
+
+The number of machines allowed to reboot simultaneously is configurable via a command line utility:
+
+```
+$ locksmithctl set-max 4
+Old: 1
+New: 4
+```
+
+This setting is stored in etcd so it won't have to be configured for subsequent machines.
+
+To view the number of available slots and find out which machines in the cluster are holding locks, run:
+
+```
+$ locksmithctl status
+Available: 0
+Max: 1
+
+MACHINE ID
+69d27b356a94476da859461d3a3bc6fd
+```
+
+If needed, you can manually clear a lock by providing the machine ID:
+
+```
+locksmithctl unlock 69d27b356a94476da859461d3a3bc6fd
+```
+
+### Reboot Immediately
+
+The `reboot` strategy works exactly how it sounds: the machine is rebooted as soon as the update has been installed to the passive partition. If the applications running on your cluster are highly resilient, this strategy was made for you.
+
+### Off
+
+The `off` strategy is also very straightforward. The update will be installed onto the passive partition and await a reboot command to complete the update. We don't recommend this strategy unless you reboot frequently as part of your normal operations workflow.
+
+## Updating PXE/iPXE Machines
+
+PXE/iPXE machines download a new copy of CoreOS every time they are started and are thus dependent on the version of CoreOS they are served. If you don't automatically load new CoreOS images into your PXE/iPXE server, your machines will never have new features or security updates.
+
+An easy solution to this problem is to use iPXE and reference images [directly from the CoreOS storage site]({{site.url}}/docs/running-coreos/bare-metal/booting-with-ipxe/#setting-up-the-boot-script). The `alpha` URL is automatically pointed to the new version of CoreOS as it is released.
\ No newline at end of file
From 61ba844eb115744382176816a5af7b2d4e9471b5 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 7 May 2014 13:29:38 -0700
Subject: [PATCH 0044/1291] fix(update-strategies): add link to cloud-config
---
cluster-management/setup/update-strategies/index.md | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/cluster-management/setup/update-strategies/index.md b/cluster-management/setup/update-strategies/index.md
index 4cfa7150a..e69c2b1c7 100644
--- a/cluster-management/setup/update-strategies/index.md
+++ b/cluster-management/setup/update-strategies/index.md
@@ -23,6 +23,15 @@ It's important to note that updates are always downloaded to the passive partiti
## Strategy Options
+The update strategy is defined in [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos):
+
+```
+#cloud-config
+coreos:
+ update:
+ reboot-strategy: best-effort
+```
+
### Best Effort
The default setting is for CoreOS to make a `best-effort` to determine if the machine is part of a cluster. Currently this logic is very simple: if etcd has started, assume that the machine is part of a cluster.
From 05692a0b5daea5b3761e4d81d5ac28e22b3426d5 Mon Sep 17 00:00:00 2001
From: Janusz Lewandowski
Date: Thu, 8 May 2014 16:37:24 +0200
Subject: [PATCH 0045/1291] Update Vultr instructions.
Vultr now supports automatic iPXE chaining, so there is no need to play with the console any more.
Also, add information about installation to the disk, as the lack of it confused me.
---
running-coreos/cloud-providers/vultr/index.md | 34 ++++++-------------
1 file changed, 11 insertions(+), 23 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index d855a97c4..514038a7a 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -14,14 +14,6 @@ CoreOS is currently in heavy development and actively being tested. These instr
* Your iPXE script (referenced later in the guide) is located at ```http://example.com/script.txt```
* You have a public + private key combination generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2).
-## Create the VPS
-
-Create a new VPS (any server type and location of your choice), and then for the "Operating System" select "Custom". Click "Place Order".
-
-
-
-Once you receive the welcome email the VPS will be ready to use (typically less than 2-3 minutes).
-
## Create the script
The simplest option to boot up CoreOS is to load a script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host this file on your own server). Save this script as a text file (.txt extension).
@@ -40,24 +32,18 @@ Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key, it will
Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/) and [Embedded scripts for iPXE](http://ipxe.org/embed).
-## Getting CoreOS running
+## Create the VPS
-Once you have received the email indicating the VPS is ready, click "Manage" for that VPS in your Vultr account area. Under "Server Actions" Click on "View Console" which will open a new window, and show the iPXE command prompt.
+Create a new VPS (any server type and location of your choice), and then:
-Type the following commands:
+1. For the "Operating System" select "Custom",
+1. Select iPXE boot,
+1. Set the chain URL to the URL of your script (http://example.com/script.txt),
+1. Click "Place Order".
-```
-iPXE> dhcp
-```
-The output should end with "OK".
+
-then type:
-
-```
-iPXE> chain http://example.com/script.txt
-```
-
-You'll see several lines scroll past on the console as the kernel is loaded, and then the initrd is loaded. CoreOS will automatically then boot up, and you'll end up at a login prompt.
+Once you receive the welcome email the VPS will be ready to use (typically less than 2-3 minutes).
## Accessing the VPS
@@ -81,8 +67,10 @@ Last login: Thu Oct 17 11:42:04 UTC 2013 from 127.0.0.1 on pts/0
core@srv-n8uak ~ $
```
-
## Using CoreOS
Now that you have a cluster bootstrapped it is time to play around.
+
+It's currently running from RAM, based on the loaded image. You may want to [install it on the disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk). It's device name is /dev/vda, not sda.
+
Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
From d90b80046906a8ab33c6339c2ff2ee96576452ed Mon Sep 17 00:00:00 2001
From: Jonathan Boulle
Date: Thu, 8 May 2014 11:58:01 -0700
Subject: [PATCH 0046/1291] chore(vultr): clean up vultr doc
---
running-coreos/cloud-providers/vultr/index.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 514038a7a..247077c69 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -36,10 +36,10 @@ Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/
Create a new VPS (any server type and location of your choice), and then:
-1. For the "Operating System" select "Custom",
-1. Select iPXE boot,
-1. Set the chain URL to the URL of your script (http://example.com/script.txt),
-1. Click "Place Order".
+1. For the "Operating System" select "Custom"
+2. Select iPXE boot
+3. Set the chain URL to the URL of your script (http://example.com/script.txt)
+4. Click "Place Order"

@@ -47,9 +47,9 @@ Once you receive the welcome email the VPS will be ready to use (typically less
## Accessing the VPS
-You can now login to CoreOS, assuming the associated private key is in place on your local computer you'll immediately be logged in. You may need to specify the specific location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
+You can now log in to CoreOS using the associated private key on your local computer. You may need to specify its location using ```-i LOCATION```. If you need additional details on how to specify the location of your private key file see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
-SSH to the IP of your VPS, and specify the "core" user specifically: ```ssh core@IP```
+SSH to the IP of your VPS, and specify the "core" user: ```ssh core@IP```
```
@@ -71,6 +71,6 @@ core@srv-n8uak ~ $
Now that you have a cluster bootstrapped it is time to play around.
-It's currently running from RAM, based on the loaded image. You may want to [install it on the disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk). It's device name is /dev/vda, not sda.
+CoreOS is currently running from RAM, based on the loaded image. You may want to [install it on the disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk). Note that when following these instructions on Vultr, the device name should be `/dev/vda` rather than `/dev/sda`.
Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
From 6ed4338760b758d7f1110cdce420bb4dfd3cb903 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 7 May 2014 17:05:28 -0700
Subject: [PATCH 0047/1291] feat(running-coreos): add channel instructions
---
.../bare-metal/booting-with-ipxe/index.md | 6 ++
.../bare-metal/booting-with-pxe/index.md | 11 ++--
.../bare-metal/installing-to-disk/index.md | 8 ++-
running-coreos/cloud-providers/ec2/index.md | 56 +++++++++++--------
.../google-compute-engine/index.md | 39 ++++++++++---
.../cloud-providers/rackspace/index.md | 38 ++++++++++---
running-coreos/platforms/eucalyptus/index.md | 6 ++
running-coreos/platforms/libvirt/index.md | 6 ++
running-coreos/platforms/openstack/index.md | 6 ++
running-coreos/platforms/qemu/index.md | 10 +++-
running-coreos/platforms/vmware/index.md | 6 ++
11 files changed, 146 insertions(+), 46 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 046d24e91..b61495c9e 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -18,6 +18,12 @@ This includes many cloud providers and physical hardware.
To illustrate iPXE in action we will use qemu-kvm in this guide.
+### Choose a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `set coreos-version` command below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
### Setting up the Boot Script
iPXE downloads a boot script from a publicly available URL.
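As an example, a minimal boot script of the kind this section refers to could look like the following sketch (the `sshkey` value is a placeholder; see the rest of this guide for a complete script):

```
#!ipxe

set coreos-version alpha
set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="YOUR_PUBLIC_KEY_HERE"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```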
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 7e7b5a55e..2d62dc096 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -67,7 +67,13 @@ ssh_authorized_keys:
You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
-### Download the files
+### Choose a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URLs below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
+PXE booted machines cannot currently update themselves. To update to the latest version of CoreOS download/verify these files again and reboot.
In the config above you can see that a Kernel image and a initramfs file is needed.
Download these two files into your tftp root.
@@ -83,9 +89,6 @@ gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig
```
-PXE booted machines cannot currently update themselves.
-To update to the latest version of CoreOS download/verify these files again and reboot.
-
[verify-notes]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#importing-images
## Booting the Box
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index daa35a21f..cc788d7fc 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -17,11 +17,17 @@ Essentially it downloads an image, verifies it with gpg and then copies it bit f
The script is self-contained and located [on Github here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install").
It is already installed if you are booting CoreOS via PXE but you can also use it from other Linux distributions.
+## Choose a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+When running on CoreOS the install script will attempt to install the same version (and channel) by default.
+
```
coreos-install -d /dev/sda
```
-When running on CoreOS the install script will attempt to install the same version. If you want to ensure you are installing the latest available version use the `-V` option:
+If you want to ensure you are installing the latest available version on a channel, use the `-V` option:
```
coreos-install -d /dev/sda -V alpha
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 3b64f19e3..a81c2defb 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -8,22 +8,23 @@ cloud-formation-launch-logo: https://s3.amazonaws.com/cloudformation-examples/cl
---
{% capture cf_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha.template{% endcapture %}
-# Running CoreOS {{ site.ami-version }} on EC2
+# Running CoreOS {{site.data.alpha-channel.ami-version}} on EC2
The current AMIs for all CoreOS channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but you can also follow the manual steps at the end of the article. You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev].
## Choosing a Channel
-CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Release notes can currently be found on [Github](https://github.com/coreos/manifest/releases) but we're researching better options.
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.ami-version}}.
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
@@ -34,27 +35,36 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
- {% assign pairs = site.amis-all | split: "|" %}
- {% for item in pairs %}
-
- {% assign amis = item | split: "=" %}
- {% for item in amis limit:1 offset:0 %}
- {% assign region = item %}
- {% endfor %}
- {% for item in amis limit:1 offset:1 %}
- {% assign ami-id = item %}
+ {% for region in site.data.alpha-channel.amis %}
+
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 6561b7663..88b379ce2 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -6,7 +6,7 @@ weight: 3
title: Google Compute Engine
---
-# Running CoreOS {{ site.version-string }} on Google Compute Engine
+# Running CoreOS {{ site.alpha-channel }} on Google Compute Engine
CoreOS on Google Compute Engine (GCE) is currently in heavy development and actively being tested. The current disk image is listed below and relies on GCE's recently announced [Advanced OS Support][gce-advanced-os]. Each time a new update is released, your machines will [automatically upgrade themselves]({{ site.url }}/using-coreos/updates).
@@ -15,13 +15,34 @@ Before proceeding, you will need to [install gcutil][gcutil-documentation] and c
[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
[gcutil-documentation]: https://developers.google.com/compute/docs/gcutil/
-## Image creation
-
-At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
-
-```
-gcutil --project= addimage --description="CoreOS {{ site.version-string }}" coreos-v{{ site.gce-version-id }} gs://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_gce.tar.gz
-```
+## Choosing a Channel
+
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
+
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
+
+
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
## Cloud-Config
@@ -51,7 +72,7 @@ coreos:
Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
```
-gcutil --project= addinstance --image=coreos-v{{ site.gce-version-id }} --persistent_boot_disk --zone=us-central1-a --machine_type=n1-standard-1 --metadata_from_file=user-data:cloud-config.yaml core1 core2 core3
+gcutil --project= addinstance --image=coreos-v{{ site.beta-channel | replace:'.','-' }} --persistent_boot_disk --zone=us-central1-a --machine_type=n1-standard-1 --metadata_from_file=user-data:cloud-config.yaml core1 core2 core3
```
### Additional Storage
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index e96e663f0..ca1932068 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -6,23 +6,24 @@ sub_category: cloud_provider
weight: 5
---
-# Running CoreOS {{site.rackspace-version}} on Rackspace
+# Running CoreOS {{site.data.beta-channel.rackspace-version}} on Rackspace
CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running CoreOS on the Rackspace Openstack cloud, which differs slightly from the generic Openstack instructions. There are two ways to launch a CoreOS cluster: launch an entire cluster with Heat or launch machines with Nova.
## Choosing a Channel
-CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Release notes can currently be found on [Github](https://github.com/coreos/manifest/releases) but we're researching better options.
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.rackspace-version}}.
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.
@@ -35,7 +36,28 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
@@ -90,7 +112,7 @@ For more general information, check out [mounting storage on CoreOS]({{site.url}
## Launch with Nova
-We're going to install `rackspace-novaclient`, upload a keypair and boot the image id `{{site.rackspace-image-id}}`.
+We're going to install `rackspace-novaclient`, upload a keypair and boot the image id from above.
### Install Supernova Tool
@@ -141,7 +163,7 @@ Check you make sure the key is in your list by running `supernova production key
Boot a new server with our new keypair and specify optional cloud-config data.
```
-supernova production boot --image {{site.rackspace-image-id}} --flavor performance1-2 --key-name coreos-key --user-data ~/cloud_config.yml --config-drive true My_CoreOS_Server
+supernova production boot --image {{site.data.beta-channel.rackspace-image-id}} --flavor performance1-2 --key-name coreos-key --user-data ~/cloud_config.yml --config-drive true My_CoreOS_Server
```
You should now see the details of your new server in your terminal and it should also show up in the control panel:
diff --git a/running-coreos/platforms/eucalyptus/index.md b/running-coreos/platforms/eucalyptus/index.md
index 405253479..096e6b71e 100644
--- a/running-coreos/platforms/eucalyptus/index.md
+++ b/running-coreos/platforms/eucalyptus/index.md
@@ -16,6 +16,12 @@ These instructions will walk you through downloading CoreOS, bundling the image,
These steps will download the CoreOS image, uncompress it, convert it from qcow->raw, and then import it into Eucalyptus.
In order to convert the image you will need to install ```qemu-img``` with your favorite package manager.
+### Choosing a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URLs below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
```
$ wget -q http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 9364b9303..aca3c3050 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -22,6 +22,12 @@ In this guide, the example virtual machine we are creating is called coreos0 and
all files are stored in /var/lib/libvirt/images/coreos0. This is not a requirement — feel free
to substitute that path if you use another one.
+### Choosing a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
We start by downloading the most recent disk image:
mkdir -p /var/lib/libvirt/images/coreos0
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index ec1301d30..3721f98dc 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -17,6 +17,12 @@ it with the `glance` tool and running your first cluster with the `nova` tool.
These steps will download the CoreOS image, uncompress it and then import it
into the glance image store.
+### Choosing a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
```
$ wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index 0087996c1..d8eda30e4 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -69,7 +69,15 @@ Wiki][qemugen]. Usually this should be sufficient:
## Startup CoreOS
Once QEMU is installed you can download and start the latest CoreOS
-image. There are two files you need: the disk image (provided in qcow2
+image.
+
+### Choosing a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
+There are two files you need: the disk image (provided in qcow2
format) and the wrapper shell script to start QEMU.
mkdir coreos; cd coreos
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index 611bbc8de..ed7850567 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -18,6 +18,12 @@ If you are familiar with another VMware product you can use these instructions a
These steps will download the VMware image and extract the zip file. After that
you will need to launch the `coreos_developer_vmware_insecure.vmx` file to create a VM.
+### Choosing a Channel
+
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
This is a rough sketch that should work on OSX and Linux:
```
From e9e5ac6f360050dc55a8a8c696552d5e9b3dadd9 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 12:11:01 -0700
Subject: [PATCH 0048/1291] fix(ec2,gce,rax): include additional tabs for
switching channels
---
running-coreos/cloud-providers/ec2/index.md | 185 +++++++++++++++---
.../google-compute-engine/index.md | 25 ++-
.../cloud-providers/rackspace/index.md | 50 +++--
3 files changed, 210 insertions(+), 50 deletions(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index a81c2defb..fca4b32ef 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -8,7 +8,7 @@ cloud-formation-launch-logo: https://s3.amazonaws.com/cloudformation-examples/cl
---
{% capture cf_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha.template{% endcapture %}
-# Running CoreOS {{site.data.alpha-channel.ami-version}} on EC2
+# Running CoreOS on EC2
The current AMIs for all CoreOS channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but you can also follow the manual steps at the end of the article. You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev].
@@ -140,7 +140,11 @@ If you would like to create multiple clusters you will need to change the "Stack
[us-east-latest-quicklaunch]: https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{ami-us-east-1}} "{{ami-us-east-1}}"
-**TL;DR:** launch three instances of [{{ami-us-east-1}}][us-east-latest-quicklaunch] in **us-east-1** with a security group that has open port 22, 4001, and 7001 and the same "User Data" of each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [docker][docker-docs] to play with.
+{% for region in site.data.alpha-channel.amis %}
+ {% if region.name == 'us-east-1' %}
+**TL;DR:** launch three instances of [{{region.ami-id}}](https://console.aws.amazon.com/ec2/home?region={{region.name}}#launchAmi={{region.ami-id}}) in **{{region.name}}** with a security group that has open ports 22, 4001, and 7001 and the same "User Data" on each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [docker][docker-docs] to play with.
+ {% endif %}
+{% endfor %}
### Creating the security group
@@ -172,37 +176,162 @@ First we need to create a security group to allow CoreOS instances to communicat
### Launching a test cluster
-We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
-
-1. Open the quick launch by clicking [here][us-east-latest-quicklaunch] (shift+click for new tab)
- * For reference, the current us-east-1 is: [{{ami-us-east-1}}][us-east-latest-quicklaunch]
-2. On the second page of the wizard, launch 3 servers to test our clustering
- * Number of instances: 3
- * Click "Continue"
-3. Next, we need to specify a discovery URL, which contains a unique token that allows us to find other hosts in our cluster. If you're launching your first machine, generate one at [https://discovery.etcd.io/new](https://discovery.etcd.io/new) and add it to the metadata. You should re-use this key for each machine in the cluster.
+
We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
+
+
+ {% for region in site.data.alpha-channel.amis %}
+ {% if region.name == 'us-east-1' %}
+ Open the quick launch wizard to boot {{region.ami-id}}.
+ {% endif %}
+ {% endfor %}
+
+
+ On the second page of the wizard, launch 3 servers to test our clustering
+
+
Number of instances: 3
+
Click "Continue"
+
+
+
+ Next, we need to specify a discovery URL, which contains a unique token that allows us to find other hosts in our cluster. If you're launching your first machine, generate one at https://discovery.etcd.io/new and add it to the metadata. You should re-use this key for each machine in the cluster.
+
+
+#cloud-config
-```
+coreos:
+ etcd:
+ # generate a new token from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/<token>
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+
+
+ Back in the EC2 dashboard, paste this information verbatim into the "User Data" field.
+
+
Paste link into "User Data"
+
"Continue"
+
+
+
+ Storage Configuration
+
+
"Continue"
+
+
+
+ Tags
+
+
"Continue"
+
+
+
+ Create Key Pair
+
+
Choose a key of your choice; it will be added in addition to the one in the gist.
+
"Continue"
+
+
+
+ Choose one or more of your existing Security Groups
+
+
"coreos-testing" as above.
+
"Continue"
+
+
+
+ Launch!
+
+
+
+
+
We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
+
+
+ {% for region in site.data.beta-channel.amis %}
+ {% if region.name == 'us-east-1' %}
+ Open the quick launch wizard to boot {{region.ami-id}}.
+ {% endif %}
+ {% endfor %}
+
+
+ On the second page of the wizard, launch 3 servers to test our clustering
+
+
Number of instances: 3
+
Click "Continue"
+
+
+
+ Next, we need to specify a discovery URL, which contains a unique token that allows us to find other hosts in our cluster. If you're launching your first machine, generate one at https://discovery.etcd.io/new and add it to the metadata. You should re-use this key for each machine in the cluster.
+
+
#cloud-config
coreos:
etcd:
- discovery_url: https://discovery.etcd.io/
- fleet:
- autostart: yes
-```
-4. Back in the EC2 dashboard, paste this information verbatim into the "User Data" field.
- * Paste link into "User Data"
- * "Continue"
-5. Storage Configuration
- * "Continue"
-6. Tags
- * "Continue"
-7. Create Key Pair
- * Choose a key of your choice, it will be added in addition to the one in the gist.
- * "Continue"
-8. Choose one or more of your existing Security Groups
- * "coreos-testing" as above.
-9. Launch!
+ # generate a new token from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/<token>
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+
+
+ Back in the EC2 dashboard, paste this information verbatim into the "User Data" field.
+
+
Paste link into "User Data"
+
"Continue"
+
+
+
+ Storage Configuration
+
+
"Continue"
+
+
+
+ Tags
+
+
"Continue"
+
+
+
+ Create Key Pair
+
+
Choose a key of your choice; it will be added in addition to the one in the gist.
+
"Continue"
+
+
+
+ Choose one or more of your existing Security Groups
+
+
"coreos-testing" as above.
+
"Continue"
+
+
+
+ Launch!
+
+
+
+
+
### Automatic Rollback Limitations on EC2
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 88b379ce2..15d6c5288 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -6,7 +6,7 @@ weight: 3
title: Google Compute Engine
---
-# Running CoreOS {{ site.alpha-channel }} on Google Compute Engine
+# Running CoreOS on Google Compute Engine
CoreOS on Google Compute Engine (GCE) is currently in heavy development and actively being tested. The current disk image is listed below and relies on GCE's recently announced [Advanced OS Support][gce-advanced-os]. Each time a new update is released, your machines will [automatically upgrade themselves]({{ site.url }}/using-coreos/updates).
@@ -30,16 +30,14 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
### Additional Storage
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index ca1932068..6d49786be 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -6,7 +6,7 @@ sub_category: cloud_provider
weight: 5
---
-# Running CoreOS {{site.data.beta-channel.rackspace-version}} on Rackspace
+# Running CoreOS on Rackspace
CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running CoreOS on the Rackspace Openstack cloud, which differs slightly from the generic Openstack instructions. There are two ways to launch a CoreOS cluster: launch an entire cluster with Heat or launch machines with Nova.
@@ -15,7 +15,7 @@ CoreOS is currently in heavy development and actively being tested. These instr
CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
-
@@ -160,11 +160,22 @@ Check to make sure the key is in your list by running `supernova production key
### Boot a Server
-Boot a new server with our new keypair and specify optional cloud-config data.
-
-```
-supernova production boot --image {{site.data.beta-channel.rackspace-image-id}} --flavor performance1-2 --key-name coreos-key --user-data ~/cloud_config.yml --config-drive true My_CoreOS_Server
-```
+
You should now see the details of your new server in your terminal and it should also show up in the control panel:
@@ -226,13 +237,24 @@ source ~/.bash_profile
### Launch the Stack
-Launch the stack by providing the specified parameters. This command will reference the local file `data.yml` in the current working directory that contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
-
-```
-heat stack-create Test --template-file https://coreos.com/dist/rackspace/heat-alpha.yaml -P key-name=coreos-key -P flavor='2 GB Performance' -P count=5 -P user-data="$(< data.yaml)" -P name="CoreOS-alpha"
-```
-
-You can view the [template here]({{site.url}}/dist/rackspace/heat-alpha.yaml).
+
Launch the stack by providing the specified parameters. This command will reference the local file `data.yaml` in the current working directory that contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
Launch the stack by providing the specified parameters. This command will reference the local file `data.yaml` in the current working directory that contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
## Using CoreOS
From a5cefe7b90de0e99f7bbfcaf65bcb7c82f5b0d40 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 13:11:15 -0700
Subject: [PATCH 0049/1291] fix(bare-metal): improve channel selection info
---
.../bare-metal/booting-with-ipxe/index.md | 32 +++++++++++----
.../bare-metal/booting-with-pxe/index.md | 41 +++++++++++++------
.../bare-metal/installing-to-disk/index.md | 34 ++++++++++-----
3 files changed, 76 insertions(+), 31 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index b61495c9e..02b164592 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -22,22 +22,38 @@ To illustrate iPXE in action we will use qemu-kvm in this guide.
CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `set coreos-version` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-
### Setting up the Boot Script
-iPXE downloads a boot script from a publicly available URL.
-You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a [custom iPXE server](https://github.com/kelseyhightower/coreos-ipxe-server).
-
-```
+
iPXE downloads a boot script from a publicly available URL. You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a custom iPXE server.
+
#!ipxe
set coreos-version alpha
set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAYQC2PxAKTLdczK9+RNsGGPsz0eC2pBlydBEcrbI7LSfiN7Bo5hQQVjki+Xpnp8EEYKpzu6eakL8MJj3E28wT/vNklT1KyMZrXnVhtsmOtBKKG/++odpaavdW2/AU0l7RZiE= coreos pxe demo"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
-boot
-```
+boot
+
+
+
iPXE downloads a boot script from a publicly available URL. You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a custom iPXE server.
An easy place to host this boot script is on [http://pastie.org](http://pastie.org). Be sure to reference the "raw" version of script, which is accessed by clicking on the clipboard in the top right.
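The boot script can also be generated locally with the channel parameterized; a sketch, where `CHANNEL` and the `sshkey` value are placeholders to replace:

```shell
# Sketch: write the iPXE boot script with the channel chosen up front.
# CHANNEL (alpha or beta) and the sshkey value are placeholders.
CHANNEL="${CHANNEL:-beta}"
cat > /tmp/boot.ipxe <<EOF
#!ipxe
set coreos-version ${CHANNEL}
set base-url http://storage.core-os.net/coreos/amd64-usr/\${coreos-version}
kernel \${base-url}/coreos_production_pxe.vmlinuz sshkey="ssh-rsa AAAA...replace-with-your-key"
initrd \${base-url}/coreos_production_pxe_image.cpio.gz
boot
EOF
grep 'set coreos-version' /tmp/boot.ipxe
# → set coreos-version beta
```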
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 2d62dc096..d6606b3e0 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -71,15 +71,18 @@ You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-ma
CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` URLs below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-
-PXE booted machines cannot currently update themselves. To update to the latest version of CoreOS download/verify these files again and reboot.
-
-In the config above you can see that a Kernel image and a initramfs file is needed.
-Download these two files into your tftp root.
-The extra `coreos_production_pxe.vmlinuz.sig` and `coreos_production_pxe_image.cpio.gz.sig` files can be used to [verify the downloaded files][verify-notes].
-
-```
+PXE-booted machines cannot currently update themselves when new versions are released to a channel. To update to the latest version of CoreOS, download and verify these files again and reboot.
+
+
## Booting the Box
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index cc788d7fc..0000a07a7 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -14,24 +14,36 @@ weight: 7
There is a simple installer that will destroy everything on the given target disk and install CoreOS.
Essentially it downloads an image, verifies it with gpg and then copies it bit for bit to disk.
-The script is self-contained and located [on Github here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install").
-It is already installed if you are booting CoreOS via PXE but you can also use it from other Linux distributions.
+The script is self-contained and located [on Github here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install") and can be run from any Linux distribution.
-## Choose a Channel
-
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-
-When running on CoreOS the install script will attempt to install the same version (and channel) by default.
+If you have already booted CoreOS via PXE, the install script is already installed. By default the install script will attempt to install the same version and channel that was PXE-booted:
```
coreos-install -d /dev/sda
```
-If you want to ensure you are installing the latest available version on a channel, use the `-V` option:
+## Choose a Channel
-```
-coreos-install -d /dev/sda -V alpha
-```
+CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
If you want to ensure you are installing the latest alpha version, use the `-V` option:
+
coreos-install -d /dev/sda -V alpha
+
+
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
+
If you want to ensure you are installing the latest beta version, use the `-V` option:
+
coreos-install -d /dev/sda -V beta
+
+
+
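The two channel variants above differ only in the `-V` argument, so the channel can be passed through a variable; a sketch (echoed rather than executed, since the real command destroys the contents of the target disk):

```shell
# Sketch: parameterize the channel for coreos-install.
# Echoed instead of run: coreos-install -d wipes the given disk.
CHANNEL="${CHANNEL:-beta}"
echo coreos-install -d /dev/sda -V "$CHANNEL"
# → coreos-install -d /dev/sda -V beta
```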
For reference here are the rest of the `coreos-install` options:
From 0d7baa5d42d20dcc16f1adc794c82f9c7a3bab60 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 14:45:31 -0700
Subject: [PATCH 0050/1291] fix(running-coreos): remove references to master
channel
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 2 +-
running-coreos/bare-metal/booting-with-pxe/index.md | 2 +-
running-coreos/bare-metal/installing-to-disk/index.md | 2 +-
running-coreos/platforms/eucalyptus/index.md | 2 +-
running-coreos/platforms/libvirt/index.md | 2 +-
running-coreos/platforms/openstack/index.md | 2 +-
running-coreos/platforms/qemu/index.md | 2 +-
running-coreos/platforms/vmware/index.md | 2 +-
8 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 02b164592..d8e9847aa 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -20,7 +20,7 @@ To illustrate iPXE in action we will use qemu-kvm in this guide.
### Choose a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
### Setting up the Boot Script
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index d6606b3e0..02faaa7be 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -69,7 +69,7 @@ You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-ma
### Choose a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
PXE-booted machines cannot currently update themselves when new versions are released to a channel. To update to the latest version of CoreOS, download and verify these files again and reboot.
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 0000a07a7..131b14804 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -24,7 +24,7 @@ coreos-install -d /dev/sda
## Choose a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
diff --git a/running-coreos/platforms/eucalyptus/index.md b/running-coreos/platforms/eucalyptus/index.md
index 096e6b71e..108b69463 100644
--- a/running-coreos/platforms/eucalyptus/index.md
+++ b/running-coreos/platforms/eucalyptus/index.md
@@ -18,7 +18,7 @@ In order to convert the image you will need to install ```qemu-img``` with your
### Choosing a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index aca3c3050..867fb88ab 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -24,7 +24,7 @@ to substitute that path if you use another one.
### Choosing a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index 3721f98dc..63f5d70d0 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -19,7 +19,7 @@ into the glance image store.
### Choosing a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index d8eda30e4..0dee1f641 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -73,7 +73,7 @@ image.
### Choosing a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index ed7850567..ead01dcd1 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -20,7 +20,7 @@ you will need to launch the `coreos_developer_vmware_insecure.vmx` file to creat
### Choosing a Channel
-CoreOS is released into master, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
The channel is selected through the `storage.core-os.net` URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
From ed12f4f0d573c5e24a50814ff093f43f5ce18620 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 16:31:57 -0700
Subject: [PATCH 0051/1291] feat(cluster-management): explain how to switch
channels
---
.../setup/switching-channels/index.md | 45 +++++++++++++++++++
1 file changed, 45 insertions(+)
create mode 100644 cluster-management/setup/switching-channels/index.md
diff --git a/cluster-management/setup/switching-channels/index.md b/cluster-management/setup/switching-channels/index.md
new file mode 100644
index 000000000..c2a4da628
--- /dev/null
+++ b/cluster-management/setup/switching-channels/index.md
@@ -0,0 +1,45 @@
+---
+layout: docs
+title: Switching Release Channels
+category: cluster_management
+sub_category: setting_up
+weight: 5
+---
+
+# Switching Release Channels
+
+CoreOS is released into alpha and beta channels. New features and bug fixes are tested in the alpha channel and are promoted bit-for-bit to the beta channel if no additional bugs are found.
+
+## Create Update Config File
+
+You can switch machines between channels by creating `/etc/coreos/update.conf`:
+
+```
+GROUP=beta
+```
+
+## Restart Update Engine
+
+The last step is to restart the update engine in order for it to pick up the changed channel:
+
+```
+sudo systemctl restart update-engine
+```
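Both steps can be combined; a sketch that writes to a scratch path so it can be tried without root (on a real machine the file is `/etc/coreos/update.conf` and the restart needs `sudo`):

```shell
# Sketch: switch a machine to the beta channel.
# CONF points at a scratch file here; use /etc/coreos/update.conf for real.
CONF="${CONF:-/tmp/update.conf}"
echo "GROUP=beta" > "$CONF"
cat "$CONF"
# → GROUP=beta
# then, on the machine: sudo systemctl restart update-engine
```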
+
+## Debugging
+
+After the update engine is restarted, the machine should check for an update within an hour. You can view the update engine log if you'd like to see the requests that are being made to the update service:
+
+```
+journalctl -f -u update-engine
+```
+
+For reference, you can find the current version:
+
+```
+cat /etc/os-release
+```
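Since `/etc/os-release` is plain `KEY=value` lines, the interesting fields can be extracted directly; a sketch against a sample file with hypothetical contents:

```shell
# Sketch: pull VERSION_ID out of an os-release-style file.
# The sample values below are hypothetical, for illustration only.
cat > /tmp/os-release.sample <<'EOF'
NAME=CoreOS
ID=coreos
VERSION_ID=367.1.0
EOF
grep '^VERSION_ID=' /tmp/os-release.sample | cut -d= -f2
# → 367.1.0
```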
+
+## Release Information
+
+You can read more about the current releases and channels on the [releases page]({{site.url}}/releases).
\ No newline at end of file
From c7d8843e36fff34349e830f00fa077c3b9c093c4 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 17:03:36 -0700
Subject: [PATCH 0052/1291] fix(ec2): use correct name on beta CF stack
---
running-coreos/cloud-providers/ec2/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index fca4b32ef..a002e61ea 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -62,7 +62,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
{% endfor %}
From deba5d13b140a6f36e314e26558013186f3139ea Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 18:10:41 -0700
Subject: [PATCH 0053/1291] feat(gce): use public gce image path
---
.../google-compute-engine/index.md | 39 ++++---------------
1 file changed, 8 insertions(+), 31 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 15d6c5288..4479241ec 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -15,33 +15,6 @@ Before proceeding, you will need to [install gcutil][gcutil-documentation] and c
[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
[gcutil-documentation]: https://developers.google.com/compute/docs/gcutil/
-## Choosing a Channel
-
-CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
-
-
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
-
-
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
-
-
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
-
## Cloud-Config
CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). You can provide cloud-config to CoreOS via the Google Cloud console's metadata field `user-data` or via a flag using `gcutil`.
@@ -65,9 +38,9 @@ coreos:
command: start
```
-## Instance creation
+## Choosing a Channel
-Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
@@ -76,10 +49,14 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
From 19abd84d598ecd9a96f9370b9e27142de72dbad1 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 18:27:41 -0700
Subject: [PATCH 0054/1291] fix(vultr): update incorrect ipxe flags
---
running-coreos/cloud-providers/vultr/index.md | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 247077c69..200613668 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -22,9 +22,10 @@ A sample script will look like this :
```
#!ipxe
-set coreos-version dev-channel
-set base-url http://storage.core-os.net/coreos/amd64-generic/${coreos-version}
-kernel ${base-url}/coreos_production_pxe.vmlinuz root=squashfs: state=tmpfs: sshkey="YOUR_PUBLIC_KEY_HERE"
+
+set coreos-version alpha
+set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
+kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="YOUR_PUBLIC_KEY_HERE"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
```
From 24d29fd00cfbfd05257f17fcd3197bbea8f50d3c Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 18:34:29 -0700
Subject: [PATCH 0055/1291] feat(debugging): remove reboot guide in favor of
locksmith guide
---
.../prevent-reboot-after-update/index.md | 57 -------------------
1 file changed, 57 deletions(-)
delete mode 100644 cluster-management/debugging/prevent-reboot-after-update/index.md
diff --git a/cluster-management/debugging/prevent-reboot-after-update/index.md b/cluster-management/debugging/prevent-reboot-after-update/index.md
deleted file mode 100644
index 5753bb749..000000000
--- a/cluster-management/debugging/prevent-reboot-after-update/index.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-layout: docs
-slug: guides
-title: Prevent Reboot After Update
-category: cluster_management
-sub_category: debugging
-weight: 8
----
-
-# Prevent Reboot After Update
-
-This is a temporary workaround to disable auto updates. As we move out of the alpha there will be a nicer method.
-
-There is a single simple script called `update-engine-reboot-manager` that does an automatic reboot after update-engine applies an update to your CoreOS machine. To stop automatic reboots after an update has been applied you need to stop this daemon.
-
-## Stop Reboots on a Single Machine
-
-```
-sudo systemctl stop update-engine-reboot-manager.service
-sudo systemctl mask update-engine-reboot-manager.service
-```
-
-## Stop Update Reboots with Cloud-Config
-
-You can use [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) to run these commands on newly booted machines:
-
-```
-#cloud-config
-
-coreos:
- units:
- - name: stop-reboot-manager.service
- content: |
- [Unit]
- Description=stop update-engine-reboot-manager
-
- [Service]
- Type=oneshot
- ExecStart=/usr/bin/systemctl stop update-engine-reboot-manager.service
- ExecStartPost=/usr/bin/systemctl mask update-engine-reboot-manager.service
-
- [Install]
- WantedBy=multi-user.target
-```
-
-## Applying New Updates
-
-You can decide to update at any time by rebooting your machine.
-
-## Restart Update Reboots
-
-```
-sudo systemctl unmask update-engine-reboot-manager.service
-sudo systemctl start update-engine-reboot-manager.service
-```
-
-Have fun!
From 00d294d278ba87544cf5065d567a512857914e59 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 18:54:38 -0700
Subject: [PATCH 0056/1291] fix(gce): fix unrendered backticks
---
running-coreos/cloud-providers/google-compute-engine/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 4479241ec..ce1e312e7 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -50,12 +50,12 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
-
Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
+
Create 3 instances from the image above using our cloud-config from cloud-config.yaml:
From f8796fac8b99b124e0cd23487d5db63e97d5f4bc Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 20:08:57 -0700
Subject: [PATCH 0057/1291] fix(gce): revert previous instructions
---
.../google-compute-engine/index.md | 39 +++++++++++++++----
1 file changed, 31 insertions(+), 8 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index ce1e312e7..15d6c5288 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -15,6 +15,33 @@ Before proceeding, you will need to [install gcutil][gcutil-documentation] and c
[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
[gcutil-documentation]: https://developers.google.com/compute/docs/gcutil/
+## Choosing a Channel
+
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
+
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
+
+
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
+
## Cloud-Config
CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). You can provide cloud-config to CoreOS via the Google Cloud console's metadata field `user-data` or via a flag using `gcutil`.
@@ -38,9 +65,9 @@ coreos:
command: start
```
-## Choosing a Channel
+## Instance creation
-CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
+Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
@@ -49,14 +76,10 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
-
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
-
Create 3 instances from the image above using our cloud-config from cloud-config.yaml:
From 1f72ac86460dc57ac02ffca4d920833cfcd5ff55 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 8 May 2014 20:16:15 -0700
Subject: [PATCH 0058/1291] fix(gce): fix typo
---
running-coreos/cloud-providers/google-compute-engine/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 15d6c5288..529a1427c 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -30,7 +30,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.
From 593c56e95f00e9749740ba58b112357f233162f4 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 9 May 2014 15:17:43 -0700
Subject: [PATCH 0062/1291] feat(iso): add ISO guide
---
running-coreos/platforms/iso/index.md | 47 +++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
create mode 100644 running-coreos/platforms/iso/index.md
diff --git a/running-coreos/platforms/iso/index.md b/running-coreos/platforms/iso/index.md
new file mode 100644
index 000000000..6ca6c6837
--- /dev/null
+++ b/running-coreos/platforms/iso/index.md
@@ -0,0 +1,47 @@
+---
+layout: docs
+title: ISO
+category: running_coreos
+sub_category: platforms
+weight: 10
+---
+
+# Booting CoreOS from an ISO
+
+The latest CoreOS ISOs can be downloaded from the image storage site:
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
{% endfor %}
@@ -134,7 +135,7 @@ For more information about mounting storage, Amazon's [own documentation](http:/
To add more instances to the cluster, just launch more with the same cloud-config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.
## Multiple Clusters
-If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3]({{ cf_template }}).
+If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3]({{ cf_beta_template }}).
## Manual setup
From b24d90cfe78d5615dc3e6248141b1d86a1d7be17 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 13 May 2014 16:49:07 -0700
Subject: [PATCH 0065/1291] fix(running-coreos): fix minor yaml typo
---
running-coreos/cloud-providers/google-compute-engine/index.md | 2 +-
running-coreos/cloud-providers/rackspace/index.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 529a1427c..32bb14359 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -92,7 +92,7 @@ Additional disks attached to instances can be mounted with a `.mount` unit. Each
#cloud-config
coreos:
units:
- - name media-backup.mount
+ - name: media-backup.mount
command: start
content: |
[Mount]
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 22ef13a89..fec4b08ab 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -77,7 +77,7 @@ Certain server flavors have separate system and data disks. To utilize the data
#cloud-config
coreos:
units:
- - name media-data.mount
+ - name: media-data.mount
command: start
content: |
[Mount]
From f588f933af2affab2631e6d1febc76ee6cb80038 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 13 May 2014 16:56:52 -0700
Subject: [PATCH 0066/1291] feat(customizing-docker): configure http proxy for
docker
---
.../building/customizing-docker/index.md | 48 +++++++++++++++++++
1 file changed, 48 insertions(+)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 50020d4d7..3dd7423d1 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -127,3 +127,51 @@ coreos:
[Install]
WantedBy=multi-user.target
```
+
+## Use an HTTP Proxy
+
+If you're operating in a locked down networking environment, you can specify an HTTP proxy for docker to use via an environment variable. First, copy the existing unit from the read-only file system into the read/write file system, so we can edit it:
+
+```
+cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
+```
+
+Add a line that sets the environment variable in the unit above the `ExecStart` command:
+
+```
+Environment="HTTP_PROXY=http://proxy.example.com:8080"
+```
+
+To apply the change, reload the unit and restart docker:
+
+```
+systemctl daemon-reload
+systemctl restart docker
+```
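The edit can also be scripted. A sketch that inserts the `Environment` line above `ExecStart` in a copy of a unit file (the unit content and proxy address here are stand-ins, not the real `docker.service`):

```shell
# Sketch: insert an Environment= line above ExecStart= in a unit file copy.
unit="$(mktemp)"
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/docker -d
EOF
sed -i '/^ExecStart=/i Environment="HTTP_PROXY=http://proxy.example.com:8080"' "$unit"
cat "$unit"
```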
+
+### Cloud-Config
+
+The easiest way to use this proxy on all of your machines is via cloud-config:
+
+```
+#cloud-config
+
+coreos:
+ units:
+ - name: docker.service
+ command: restart
+ content: |
+ [Unit]
+ Description=Docker Application Container Engine
+ Documentation=http://docs.docker.io
+ After=network.target
+ [Service]
+ Environment="HTTP_PROXY=http://proxy.example.com:8080"
+ ExecStartPre=/bin/mount --make-rprivate /
+ # Run docker but don't have docker automatically restart
+ # containers. This is a job for systemd and unit files.
+ ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd://
+
+ [Install]
+ WantedBy=multi-user.target
+```
From a8014a50b5b1a557b68ce324e3f7c85cc94669bf Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 14 May 2014 12:40:33 -0700
Subject: [PATCH 0067/1291] feat(quickstart): reference cloud-config
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 71ff892e7..fc1c5b445 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -20,7 +20,7 @@ ssh core@an.ip.compute-1.amazonaws.com
The first building block of CoreOS is service discovery with **etcd** ([docs][etcd-docs]). Data stored in etcd is distributed across all of your machines running CoreOS. For example, each of your app containers can announce itself to a proxy container, which would automatically know which machines should receive traffic. Building service discovery into your application allows you to add more machines and scale your services seamlessly.
-The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
+If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) from a guide above, etcd is automatically started on boot. The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
Set a key `message` with value `Hello world`:
From aee765fbb65704e22505d789a801c120b255f765 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 14 May 2014 14:16:48 -0700
Subject: [PATCH 0068/1291] feat(launching-containers): link to units repo
---
.../launching/launching-containers-fleet/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 473fe3192..92442a28e 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -13,7 +13,7 @@ weight: 2
If you're not familiar with systemd units, check out our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide.
-This guide assumes you're running `fleetctl` locally from a CoreOS machine that's part of a CoreOS cluster. You can also [control your cluster remotely]({{site.url}}/docs/launching-containers/launching/fleet-using-the-client/#get-up-and-running).
+This guide assumes you're running `fleetctl` locally from a CoreOS machine that's part of a CoreOS cluster. You can also [control your cluster remotely]({{site.url}}/docs/launching-containers/launching/fleet-using-the-client/#get-up-and-running). All of the units referenced in this guide are contained in the [unit-examples](https://github.com/coreos/unit-examples/tree/master/simple-fleet) repository. You can clone this onto your CoreOS box to make unit submission easier.
## Run a Container in the Cluster
From 1603e4773e8da0fc786a1499111a32aaeb968c00 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 14 May 2014 15:07:39 -0700
Subject: [PATCH 0069/1291] fix(customizing-docker): fix broken link
---
launching-containers/building/customizing-docker/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 50020d4d7..5c4cb9baa 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -72,7 +72,7 @@ coreos:
## Use Attached Storage for Docker Images
-Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantagous to use attached storage to expand your capacity for container images. Check out the guide to [mounting storage to your CoreOS machine]({{site.url}}/docs/cluster-management/setup/mounting-storage/index.md#use-attached-storage-for-docker) for an example of how to bind mount storage into `/var/lib/docker`.
+Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Check out the guide to [mounting storage to your CoreOS machine]({{site.url}}/docs/cluster-management/setup/mounting-storage/#use-attached-storage-for-docker) for an example of how to bind mount storage into `/var/lib/docker`.
## Enabling the docker Debug Flag
From 66e9700a8658053b97fdf7062f31e383d8284c4a Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Wed, 14 May 2014 16:01:36 -0700
Subject: [PATCH 0070/1291] fix(pxe): Note difference between tty0 and tty1
tty0 is normally used with the console= option (enable kernel output on
the VGA console) but coreos.autologin= should be tty1 or greater since
the login getty is on a virtual terminal, not the VGA console itself.
---
running-coreos/bare-metal/booting-with-pxe/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 02faaa7be..206472439 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -30,7 +30,7 @@ When configuring the CoreOS pxelinux.cfg there are a few kernel options that may
- **root**: Use a local filesystem for root instead of one of two in-ram options above. The filesystem must be formatted in advance but may be completely blank, it will be initialized on boot. The filesystem may be specified by any of the usual ways including device, label, or UUID; e.g: `root=/dev/sda1`, `root=LABEL=ROOT` or `root=UUID=2c618316-d17a-4688-b43b-aa19d97ea821`.
- **sshkey**: Add the given SSH public key to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
-- **coreos.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=ttyS0 coreos.autologin=ttyS0`. Without any argument it enables access on all consoles. *Experimental*
+- **coreos.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0`. Without any argument it enables access on all consoles. Note that for the VGA console the login prompts are on virtual terminals (`tty1`, `tty2`, etc), not the VGA console itself (`tty0`).
- **cloud-config-url**: CoreOS will attempt to download a cloud-config document and use it to provision your booted system. See the [coreos-cloudinit-project][cloudinit] for more information.
[cloudinit]: https://github.com/coreos/coreos-cloudinit
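Putting several of the options above together, a pxelinux.cfg entry might look like this (a sketch; the filenames, cloud-config URL, and option values are placeholders for your own setup):

```
default coreos
label coreos
  kernel coreos_production_pxe.vmlinuz
  append initrd=coreos_production_pxe_image.cpio.gz console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 cloud-config-url=http://example.com/pxe-cloud-config.yml
```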
From b4cb4c18dfc1fc36fbe6733b4fb7282522157259 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 15 May 2014 10:38:36 -0700
Subject: [PATCH 0071/1291] fix(launching-containers): fix typo
---
.../launching/launching-containers-fleet/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 473fe3192..67350111a 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -107,7 +107,7 @@ X-ConditionMachineOf=apache.1.service
This unit has a few interesting properties. First, it uses `BindsTo` to link the unit to our `apache.1.service` unit. When the Apache unit is stopped, this unit will stop as well, causing it to be removed from our `/services/website` directory in `etcd`. A TTL of 60 seconds is also being used here to remove the unit from the directory if our machine suddenly died for some reason.
-Second is `%H`, a variable built into systemd, that represents the hostname of the machine running this unit. Variable usage is coverd in our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd/#unit-variables) guide as well as in [systemd documentation](http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers).
+Second is `%H`, a variable built into systemd, that represents the hostname of the machine running this unit. Variable usage is covered in our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd/#unit-variables) guide as well as in [systemd documentation](http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers).
The third is a fleet-specific property called `X-ConditionMachineOf`. This property causes the unit to be placed onto the same machine that `apache.1.service` is running on.
@@ -144,4 +144,4 @@ If you're running in the cloud, many services have APIs that can be automated ba
#### More Information
Example Deployment with fleet / fleet Unit Specifications
-fleet Configuration
\ No newline at end of file
+fleet Configuration
From ee4c0d40cb9d1ba009fe06b1a85774a8d4117128 Mon Sep 17 00:00:00 2001
From: Hugo Duncan
Date: Thu, 15 May 2014 15:44:54 -0400
Subject: [PATCH 0072/1291] Add an example binding docker to localhost port
Add an example of binding the port to localhost to prevent external access
---
launching-containers/building/customizing-docker/index.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 5c4cb9baa..0a0c4495e 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -70,6 +70,12 @@ coreos:
ExecStart=/usr/bin/systemctl enable docker-tcp.socket
```
+To keep access to the port local, replace the `ListenStream` configuration above with:
+
+```
+ ListenStream=127.0.0.1:4243
+```
+
## Use Attached Storage for Docker Images
Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Check out the guide to [mounting storage to your CoreOS machine]({{site.url}}/docs/cluster-management/setup/mounting-storage/#use-attached-storage-for-docker) for an example of how to bind mount storage into `/var/lib/docker`.
From 7542182a5c61e040f16f23f163fbb0f02bab4eb5 Mon Sep 17 00:00:00 2001
From: Hugo Duncan
Date: Thu, 15 May 2014 17:50:17 -0400
Subject: [PATCH 0073/1291] Add section header
---
launching-containers/building/customizing-docker/index.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 0a0c4495e..3ba80cb21 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -73,6 +73,7 @@ coreos:
To keep access to the port local, replace the `ListenStream` configuration above with:
```
+ [Socket]
ListenStream=127.0.0.1:4243
```
From a1b0a9dfe4f935161b8ba1a8a1678248988e5133 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 16 May 2014 10:12:19 -0700
Subject: [PATCH 0074/1291] fix(running-coreos): bump vagrant/virtualbox
requirements
---
running-coreos/platforms/vagrant/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 564ff85cc..1d038e2af 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -18,11 +18,11 @@ You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev]
Vagrant is a simple-to-use command line virtual machine manager. There are
install packages available for Windows, Linux and OSX. Find the latest
installer on the [Vagrant downloads page][vagrant]. Be sure to get
-version 1.5 or greater.
+version 1.6.3 or greater.
[vagrant]: http://www.vagrantup.com/downloads.html
-Vagrant can use either the free Virtualbox provider or the commerical VMware provider. Instructions for both are below. For the Virtualbox provider, version 4.0 or greater is required.
+Vagrant can use either the free VirtualBox provider or the commercial VMware provider. Instructions for both are below. For the VirtualBox provider, version 4.3.10 or greater is required.
## Clone Vagrant Repo
From 0f0a5d6ac6f069472db7ac981a7570b7fa1d078d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 17 Apr 2014 13:59:54 -0700
Subject: [PATCH 0075/1291] feat(customizing-docker): dockercfg for
authentication
---
.../building/customizing-docker/index.md | 55 +++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index d4c37b9b0..134236649 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -175,3 +175,58 @@ coreos:
[Install]
WantedBy=multi-user.target
```
+
+## Using a dockercfg File for Authentication
+
+A JSON file, `.dockercfg`, can be created in your home directory to hold authentication information for a public or private docker registry. The auth token is a base64-encoded string: `base64(username:password)`. Here's what an example looks like with credentials for docker's public index and a private index:
+
+```
+{
+ "https://index.docker.io/v1/": {
+ "auth": "xXxXxXxXxXx=",
+ "email": "username@example.com"
+ },
+ "https://index.example.com": {
+ "auth": "XxXxXxXxXxX=",
+ "email": "username@example.com"
+ }
+}
+```
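The auth value is just the base64 encoding of `username:password`. A sketch with hypothetical credentials:

```shell
# Sketch: compute a dockercfg auth token from hypothetical credentials.
# printf avoids the trailing newline that echo would fold into the encoding.
printf 'username:password' | base64   # → dXNlcm5hbWU6cGFzc3dvcmQ=
```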
+
+The last step is to tell your systemd units to run as the `core` user so that docker uses the credentials we just set up. This is done in the `[Service]` section of the unit:
+
+```
+[Unit]
+Description=My Container
+After=docker.service
+
+[Service]
+User=core
+ExecStart=/usr/bin/docker run busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Cloud-Config
+
+Since each machine in your cluster is going to have to pull images, cloud-config is the easiest way to write the config file to disk.
+
+```
+#cloud-config
+write_files:
+ - path: /home/core/.dockercfg
+ owner: core:core
+ permissions: 0644
+ content: |
+ {
+ "https://index.docker.io/v1/": {
+ "auth": "xXxXxXxXxXx=",
+ "email": "username@example.com"
+ },
+ "https://index.example.com": {
+ "auth": "XxXxXxXxXxX=",
+ "email": "username@example.com"
+ }
+ }
+```
\ No newline at end of file
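The `auth` values shown in the `.dockercfg` example above are just the base64 encoding of `username:password`. A minimal sketch of generating one, using placeholder credentials:

```python
import base64

# Placeholder credentials for illustration only; substitute your registry login.
token = base64.b64encode(b"username:password").decode("ascii")
print(token)  # dXNlcm5hbWU6cGFzc3dvcmQ=
```

Paste the resulting string into the `auth` field for the matching registry URL.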
From 31c0b4aa78562a7d8e56b0d6ac262453c18c9189 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 16 May 2014 14:05:16 -0700
Subject: [PATCH 0076/1291] fix(launching-containers): fix invalid yaml
---
.../building/customizing-docker/index.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index 134236649..8e3417921 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -223,10 +223,10 @@ write_files:
"https://index.docker.io/v1/": {
"auth": "xXxXxXxXxXx=",
"email": "username@example.com"
- },
- "https://index.example.com": {
- "auth": "XxXxXxXxXxX=",
- "email": "username@example.com"
- }
- }
-```
\ No newline at end of file
+ },
+ "https://index.example.com": {
+ "auth": "XxXxXxXxXxX=",
+ "email": "username@example.com"
+ }
+ }
+```
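Since this patch exists because a stray indent produced invalid YAML, it can be worth sanity-checking the embedded `.dockercfg` content before shipping it in cloud-config. A small sketch using the example registries from above:

```python
import json

# The example .dockercfg content; json.loads raises ValueError on bad syntax.
dockercfg = """
{
  "https://index.docker.io/v1/": {
    "auth": "xXxXxXxXxXx=",
    "email": "username@example.com"
  },
  "https://index.example.com": {
    "auth": "XxXxXxXxXxX=",
    "email": "username@example.com"
  }
}
"""
parsed = json.loads(dockercfg)
print(sorted(parsed))  # ['https://index.docker.io/v1/', 'https://index.example.com']
```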
From 85165cbb525e1985311d9a96b86b3d48701cfd67 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 20 May 2014 14:13:29 -0700
Subject: [PATCH 0077/1291] feat(running-coreos): update GCE to use public
image
---
.../google-compute-engine/index.md | 37 ++++---------------
1 file changed, 7 insertions(+), 30 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 32bb14359..8614a37d1 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -15,33 +15,6 @@ Before proceeding, you will need to [install gcutil][gcutil-documentation] and c
[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
[gcutil-documentation]: https://developers.google.com/compute/docs/gcutil/
-## Choosing a Channel
-
-CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
-
-
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
-
-
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
-
-
At the moment CoreOS images are not publicly listed in GCE and must be added to your own account from a raw disk image published in Google Cloud Storage:
-
## Cloud-Config
CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). You can provide cloud-config to CoreOS via the Google Cloud console's metadata field `user-data` or via a flag using `gcutil`.
@@ -65,7 +38,9 @@ coreos:
command: start
```
-## Instance creation
+## Choosing a Channel
+
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
Create 3 instances from the image above using our cloud-config from `cloud-config.yaml`:
@@ -76,10 +51,12 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
From 3eb540f856497d342fb3034ece73078ae016c360 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 21 May 2014 10:44:44 -0700
Subject: [PATCH 0078/1291] fix(cluster-management): fix incorrect ordering
instructions
---
.../network-config-with-networkd/index.md | 20 +++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/cluster-management/setup/network-config-with-networkd/index.md b/cluster-management/setup/network-config-with-networkd/index.md
index 6c2ed77a0..b678d28fc 100644
--- a/cluster-management/setup/network-config-with-networkd/index.md
+++ b/cluster-management/setup/network-config-with-networkd/index.md
@@ -33,28 +33,32 @@ sudo systemctl restart systemd-networkd
## Turn Off DHCP
-If you'd like to use DHCP on all interfaces except `enp2s0`, create two files that run in order of least specificity. Configure general settings in `10-dhcp.network`:
+If you'd like to use DHCP on all interfaces except `enp2s0`, create two files. They'll be checked in their ASCII sort order. Any interfaces matching during earlier files will be ignored during later files.
+
+#### 10-static.network
```
[Match]
-Name=en*
+Name=enp2s0
[Network]
-DHCP=yes
+Address=192.168.0.15/24
+Gateway=192.168.0.1
```
-Write your static configuration in `20-static.network`:
+Put your settings-of-last-resort in `20-dhcp.network`. For example, any interfaces matching `en*` that weren't matched in `10-static.network` will be configured with DHCP:
+
+#### 20-dhcp.network
```
[Match]
-Name=enp2s0
+Name=en*
[Network]
-Address=192.168.0.15/24
-Gateway=192.168.0.1
+DHCP=yes
```
-To apply the configuration, run `sudo systemctl restart systemd-networkd`.
+To apply the configuration, run `sudo systemctl restart systemd-networkd`. Check the status with `systemctl status systemd-networkd` and read the full log with `journalctl -u systemd-networkd`.
## Further Reading
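The two-file scheme above works because networkd consults `.network` files in lexical (ASCII) order; a trivial illustration of why the numeric prefixes matter:

```python
# networkd processes .network files in lexical order, so the 10- prefixed
# static config claims enp2s0 before the 20- prefixed DHCP catch-all runs.
files = ["20-dhcp.network", "10-static.network"]
print(sorted(files))  # ['10-static.network', '20-dhcp.network']
```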
From 0308d5f68eb2e4ca939bd8cccd1886fd091d13d8 Mon Sep 17 00:00:00 2001
From: Andy Fraley
Date: Wed, 21 May 2014 15:36:17 -0400
Subject: [PATCH 0079/1291] Added -L to curl example
Added -L to curl example so curl will follow redirects
---
running-coreos/platforms/vmware/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index ead01dcd1..95c5c0e11 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -27,7 +27,7 @@ The channel is selected through the `storage.core-os.net` below. Simply replace
This is a rough sketch that should work on OSX and Linux:
```
-curl -O http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_vmware_insecure.zip
+curl -LO http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_vmware_insecure.zip
unzip coreos_production_vmware_insecure.zip -d coreos_production_vmware_insecure
cd coreos_production_vmware_insecure
open coreos_production_vmware_insecure.vmx
From a9b13ca164999f306f315cdcc47d080296675497 Mon Sep 17 00:00:00 2001
From: Danny Berger
Date: Wed, 21 May 2014 13:37:50 -0600
Subject: [PATCH 0080/1291] fix(mounting-storage): rename docker ephemeral
.mount to match mount conventions
---
cluster-management/setup/mounting-storage/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index bbff098cc..3876b16ce 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -48,7 +48,7 @@ coreos:
RemainAfterExit=yes
ExecStart=/usr/sbin/wipefs -f /dev/xvdb
ExecStart=/usr/sbin/mkfs.btrfs -f /dev/xvdb
- - name: media-ephemeral.mount
+ - name: var-lib-docker.mount
command: start
content: |
[Unit]
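The rename above matters because systemd requires a mount unit's name to encode its mount point: `/var/lib/docker` must be mounted by `var-lib-docker.mount`. A rough sketch of the naming rule (real systemd applies extra escaping to special characters; see `systemd-escape`):

```python
def mount_unit_name(path: str) -> str:
    # Drop the leading '/', turn the remaining '/' into '-', append the suffix.
    # This ignores the additional escaping systemd applies to '-', '.', etc.
    return path.strip("/").replace("/", "-") + ".mount"

print(mount_unit_name("/var/lib/docker"))   # var-lib-docker.mount
print(mount_unit_name("/media/ephemeral"))  # media-ephemeral.mount
```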
From 997c2b9bdcae331ea27644f26130b089db06d0a2 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 21 May 2014 12:59:01 -0700
Subject: [PATCH 0081/1291] fix(cluster-management): link to networkd ordering
docs
---
cluster-management/setup/network-config-with-networkd/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/network-config-with-networkd/index.md b/cluster-management/setup/network-config-with-networkd/index.md
index b678d28fc..a75a62cb1 100644
--- a/cluster-management/setup/network-config-with-networkd/index.md
+++ b/cluster-management/setup/network-config-with-networkd/index.md
@@ -33,7 +33,7 @@ sudo systemctl restart systemd-networkd
## Turn Off DHCP
-If you'd like to use DHCP on all interfaces except `enp2s0`, create two files. They'll be checked in their ASCII sort order. Any interfaces matching during earlier files will be ignored during later files.
+If you'd like to use DHCP on all interfaces except `enp2s0`, create two files. They'll be checked in lexical order, as described in the [full network docs](http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html). Any interfaces matched by earlier files will be ignored by later files.
#### 10-static.network
From 0a48e00f02c766244f1c6d500d191bcb66b20079 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 21 May 2014 15:02:13 -0700
Subject: [PATCH 0082/1291] feat(distributed-configuration): doc how to
customize etcd unit
---
.../customize-etcd-unit/index.md | 108 ++++++++++++++++++
1 file changed, 108 insertions(+)
create mode 100644 distributed-configuration/customize-etcd-unit/index.md
diff --git a/distributed-configuration/customize-etcd-unit/index.md b/distributed-configuration/customize-etcd-unit/index.md
new file mode 100644
index 000000000..9e6175acf
--- /dev/null
+++ b/distributed-configuration/customize-etcd-unit/index.md
@@ -0,0 +1,108 @@
+---
+layout: docs
+title: Customizing the etcd Unit
+category: distributed_configuration
+sub_category: configuration
+forkurl: https://github.com/coreos/etcd/blob/master/Documentation/configuration.md
+weight: 5
+---
+
+# Customizing the etcd Unit
+
+The etcd systemd unit can be customized by overriding the unit that ships with the default CoreOS settings. Common use-cases for doing this are covered below.
+
+## Use Client Certificates
+
+etcd supports client certificates as a way to secure communication between clients and the leader, as well as internal traffic between etcd peers in the cluster. Configuring certificates for both scenarios is done through environment variables. We can use a systemd drop-in unit to augment the unit that ships with CoreOS.
+
+This site has a [good reference for how to generate self-signed key pairs](http://www.g-loaded.eu/2005/11/10/be-your-own-ca/) or you could use [etcd-ca](https://github.com/coreos/etcd-ca) to generate certs and keys.
+
+We need to create our drop-in unit in `/run/systemd/system/etcd.service.d/`. If you run `systemctl status etcd` you can see that CoreOS is already generating a few drop-in units for etcd as part of the OEM and cloud-init processes. To ensure that our drop-in runs after these, we name it `30-certificates.conf`.
+
+#### 30-certificates.conf
+
+```
+[Service]
+# Client Env Vars
+Environment=ETCD_CA_FILE=/path/to/CA.pem
+Environment=ETCD_CERT_FILE=/path/to/server.crt
+Environment=ETCD_KEY_FILE=/path/to/server.key
+# Peer Env Vars
+Environment=ETCD_PEER_CA_FILE=/path/to/CA.pem
+Environment=ETCD_PEER_CERT_FILE=/path/to/peers.crt
+Environment=ETCD_PEER_KEY_FILE=/path/to/peers.key
+```
+
+You'll have to put these files on disk somewhere. To do this on each of your machines, the easiest way is with cloud-config.
+
+### Cloud-Config
+
+Cloud-config has a parameter that will place the contents of a file on disk. We're going to use this to add our drop-in unit as well as the certificate files.
+
+```
+#cloud-config
+
+write_files:
+ - path: /run/systemd/system/etcd.service.d/30-certificates.conf
+ permissions: 0644
+ content: |
+ [Service]
+ # Client Env Vars
+ Environment=ETCD_CA_FILE=/path/to/CA.pem
+ Environment=ETCD_CERT_FILE=/path/to/server.crt
+ Environment=ETCD_KEY_FILE=/path/to/server.key
+ # Peer Env Vars
+ Environment=ETCD_PEER_CA_FILE=/path/to/CA.pem
+ Environment=ETCD_PEER_CERT_FILE=/path/to/peers.crt
+ Environment=ETCD_PEER_KEY_FILE=/path/to/peers.key
+ - path: /path/to/CA.pem
+ permissions: 0644
+ content: |
+ -----BEGIN CERTIFICATE-----
+ MIIFNDCCAx6gAwIBAgIBATALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ ...snip...
+ EtHaxYQRy72yZrte6Ypw57xPRB8sw1DIYjr821Lw05DrLuBYcbyclg==
+ -----END CERTIFICATE-----
+ - path: /path/to/server.crt
+ permissions: 0644
+ content: |
+ -----BEGIN CERTIFICATE-----
+ MIIFWTCCA0OgAwIBAgIBAjALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ DgYDVQQKEwdldGNkLWNhMQswCQYDVQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0y
+ ...snip...
+ rdmtCVLOyo2wz/UTzvo7UpuxRrnizBHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZB
+ a3m12FMs3AsSt7mzyZk+bH2WjZLrlUXyrvprI40=
+ -----END CERTIFICATE-----
+ - path: /path/to/server.key
+ permissions: 0644
+ content: |
+ -----BEGIN RSA PRIVATE KEY-----
+ Proc-Type: 4,ENCRYPTED
+ DEK-Info: DES-EDE3-CBC,069abc493cd8bda6
+
+ TBX9mCqvzNMWZN6YQKR2cFxYISFreNk5Q938s5YClnCWz3B6KfwCZtjMlbdqAakj
+ ...snip...
+ mgVh2LBerGMbsdsTQ268sDvHKTdD9MDAunZlQIgO2zotARY02MLV/Q5erASYdCxk
+ -----END RSA PRIVATE KEY-----
+ - path: /path/to/peers.crt
+ permissions: 0644
+ content: |
+ -----BEGIN CERTIFICATE-----
+ VQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0yMIIFWTCCA0OgAwIBAgIBAjALBgkq
+ DgYDVQQKEwdldGNkLWNhMQswCQYDhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ ...snip...
+ BHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZBrdmtCVLOyo2wz/UTzvo7UpuxRrniz
+ St7mza3m12FMs3AsyZk+bH2WjZLrlUXyrvprI90=
+ -----END CERTIFICATE-----
+ - path: /path/to/peers.key
+ permissions: 0644
+ content: |
+ -----BEGIN RSA PRIVATE KEY-----
+ Proc-Type: 4,ENCRYPTED
+ DEK-Info: DES-EDE3-CBC,069abc493cd8bda6
+
+ SFreNk5Q938s5YTBX9mCqvzNMWZN6YQKR2cFxYIClnCWz3B6KfwCZtjMlbdqAakj
+ ...snip...
+ DvHKTdD9MDAunZlQIgO2zotmgVh2LBerGMbsdsTQ268sARY02MLV/Q5erASYdCxk
+ -----END RSA PRIVATE KEY-----
+```
\ No newline at end of file
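etcd picks these settings up from its process environment at startup. As a toy sketch (not etcd's actual code), here is how the certificate variables from the drop-in might be collected, using the placeholder paths from above:

```python
import os

# Placeholder values mirroring the 30-certificates.conf drop-in above.
os.environ.update({
    "ETCD_CA_FILE": "/path/to/CA.pem",
    "ETCD_CERT_FILE": "/path/to/server.crt",
    "ETCD_KEY_FILE": "/path/to/server.key",
})

# Strip the "ETCD_" prefix and lowercase to get config-style keys.
tls = {name[5:].lower(): value
       for name, value in os.environ.items() if name.startswith("ETCD_")}
print(tls["ca_file"])  # /path/to/CA.pem
```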
From 53796b190abf24df0ed6353c83b4c11cc0ad7880 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 22 May 2014 13:39:51 -0700
Subject: [PATCH 0083/1291] fix(installing-to-disk): Document new
coreos-install channel option.
This replaces -V for selecting channel since it needs to be possible to
specify both a version and a channel.
Related: https://github.com/coreos/bugs/issues/29
---
running-coreos/bare-metal/installing-to-disk/index.md | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 131b14804..ab8d7bdc5 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -34,13 +34,13 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
-
If you want to ensure you are installing the latest alpha version, use the -V option:
-
coreos-install -d /dev/sda -V alpha
+
If you want to ensure you are installing the latest alpha version, use the -C option:
+
coreos-install -d /dev/sda -C alpha
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
-
If you want to ensure you are installing the latest beta version, use the -V option:
-
coreos-install -d /dev/sda -V beta
+
If you want to ensure you are installing the latest beta version, use the -C option:
+
coreos-install -d /dev/sda -C beta
@@ -48,7 +48,8 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
For reference here are the rest of the `coreos-install` options:
-d DEVICE Install CoreOS to the given device.
- -V VERSION Version to install (e.g. alpha)
+ -V VERSION Version to install (e.g. current)
+ -C CHANNEL Release channel to use (e.g. beta)
-o OEM OEM type to install (e.g. openstack)
-c CLOUD Insert a cloud-init config to be executed on boot.
-t TMPDIR Temporary location with enough space to download images.
From 74a61d4b7cc67778fde8220baad789e4bc49ad26 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 22 May 2014 14:17:38 -0700
Subject: [PATCH 0084/1291] feat(running-coreos): minor GCE tweaks
---
.../cloud-providers/google-compute-engine/index.md | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 8614a37d1..1acd64e10 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -8,8 +8,6 @@ title: Google Compute Engine
# Running CoreOS on Google Compute Engine
-CoreOS on Google Compute Engine (GCE) is currently in heavy development and actively being tested. The current disk image is listed below and relies on GCE's recently announced [Advanced OS Support][gce-advanced-os]. Each time a new update is released, your machines will [automatically upgrade themselves]({{ site.url }}/using-coreos/updates).
-
Before proceeding, you will need to [install gcutil][gcutil-documentation] and check that your GCE account/project has billing enabled (Settings → Billing). In each command below, be sure to insert your project name in place of ``.
[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
@@ -52,11 +50,11 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
From 2de487c4d500a0617f964ea0b48180767e495fb1 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 22 May 2014 14:30:41 -0700
Subject: [PATCH 0085/1291] fix(GCE): tweak device path
---
running-coreos/cloud-providers/google-compute-engine/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 1acd64e10..0afc2c0c4 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -71,7 +71,7 @@ coreos:
command: start
content: |
[Mount]
- What=/dev/disk/by-id/google-database-backup
+ What=/dev/disk/by-id/scsi-0Google_PersistentDisk_database-backup
Where=/media/backup
Type=ext3
```
From 7023bfb0a3dd7bf0da79a545a97e9dee03c9716e Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 22 May 2014 19:11:09 -0700
Subject: [PATCH 0086/1291] fix(sdk): Refresh doc on tagging/building releases.
Doesn't cover a number of things that are new but at least what it does
cover is no longer incorrect.
---
.../sdk/building-production-images/index.md | 80 +++++++++----------
1 file changed, 38 insertions(+), 42 deletions(-)
diff --git a/sdk-distributors/sdk/building-production-images/index.md b/sdk-distributors/sdk/building-production-images/index.md
index 50f93a192..1a2ef270a 100644
--- a/sdk-distributors/sdk/building-production-images/index.md
+++ b/sdk-distributors/sdk/building-production-images/index.md
@@ -22,12 +22,18 @@ repository which is usually organized like so:
[coreos-manifest]: https://github.com/coreos/manifest
-## Manual Builds
+## Tagging Releases
The first step of building a release is updating and tagging the release
in the manifest git repository. A typical release off of master involves
the following steps:
+ 1. Make sure you are on the master branch: `repo init -b master`
+ 2. Sync/checkout source, excluding local changes: `repo sync --detach`
+ 3. In the scripts directory: `./tag_release --push`
+
+That was far too easy. If you need to do it the hard way, try this:
+
1. Make sure you are on the master branch: `repo init -b master`
2. Sync/checkout source, excluding local changes: `repo sync --detach`
3. Switch to the somewhat hidden manifests checkout: `cd .repo/manifests`
@@ -54,7 +60,10 @@ the following steps:
HEAD:build-$BUILD v$BUILD.$BRANCH.$PATCH`
If a release branch needs to be updated after master has moved on the
-procedure is similar but has a few key differences:
+procedure is similar.
+Unfortunately, since tagging branched releases (not on master) is a bit
+trickier to get right, the `tag_release` script cannot be used.
+The automated build will kick off after updating the `dev-channel` branch.
1. Check out the release instead of master: `repo init -b build-$BUILD
-m release.xml`
@@ -88,65 +97,52 @@ This will build an image that can be run under KVM and uses near production
values.
Note: Add `COREOS_OFFICIAL=1` here if you are making a real release. That will
-change the version and enable uploads by default.
+change the version to leave off the build id suffix.
```
-./build_image prod
+./build_image prod --group alpha
```
The generated production image is bootable as-is by qemu but for a
-larger STATE partition or VMware images use `image_to_vm.sh` as
-described in the final output of `build_image1`.
+larger ROOT partition or VMware images use `image_to_vm.sh` as
+described in the final output of `build_image`.
## Automated Builds
Automated release builds are triggered by pushes to the `dev-channel`
-branch in the manifest repository. When cutting releases off of master
-you can skip the long process described above by using the `tag_release`
-script:
-
- 1. Make sure you are on the master branch: `repo init -b master`
- 2. Sync/checkout source, excluding local changes: `repo sync --detach`
- 3. In the scripts directory: `./tag_release --push`
-
-That's it! Automated builds will now kick off to generate a new SDK
-tarball and disk images for most of our supported platform types.
-Unfortunately since tagging branched releases (not on master) requires a
-bit more thought use the manual process described above. The automated
-build will still kick off after updating the `dev-channel` branch.
+branch in the manifest repository.
Note: In the future builds will be triggered by pushing new tags instead
-of using the `dev-channel` branch. Only using tags will mesh better with
-our current plans for adding more release channels.
-
-## Pushing updates to the dev-channel
-
-### Manual Builds
-
-To push an update to the dev channel track on api.core-os.net build a
-production images as described above and then use the following tool:
+of using the `dev-channel` branch; the branch only exists due to a limitation
+of the current buildbot deployment.
-```
-COREOS_OFFICIAL=1 ./core_upload_update --track dev-channel --image ../build/images/amd64-usr/latest/coreos_production_image.bin
-```
-
-### Automated builds
+## Pushing updates into roller
The automated build host does not have access to production signing keys
-so the final signing and push to api.core-os.net must be done elsewhere.
-The `au-generator.zip` archive provides the tools required to do this so
-a full SDK setup is not required. This does require gsutil to be
+so the final signing and push to roller must be done elsewhere.
+The `coreos_production_update.zip` archive provides the tools required to
+do this so a full SDK setup is not required. This does require gsutil to be
installed and configured.
+An update payload signed by the insecure development keys is generated
+automatically as `coreos_production_update.gz` and
+`coreos_production_update.meta`. If needed the raw filesystem image used
+to generate the payload is `coreos_production_update.bin.bz2`.
+As an example, to publish the insecurely signed payload:
```
-URL=gs://storage.core-os.net/coreos/amd64-usr/0000.0.0
+URL=gs://builds.release.core-os.net/alpha/amd64-usr/321.0.0
cd $(mktemp -d)
-gsutil cp $URL/au-generator.zip $URL/coreos_production_image.bin.bz2 ./
-unzip au-generator.zip
-bunzip2 coreos_production_image.bin.bz2
-COREOS_OFFICIAL=1 ./core_upload_update --track dev-channel --image coreos_production_image.bin
+gsutil -m cp $URL/coreos_production_update* ./
+gpg --verify coreos_production_update.zip.sig
+gpg --verify coreos_production_update.gz.sig
+gpg --verify coreos_production_update.meta.sig
+unzip coreos_production_update.zip
+ ./core_roller_upload --user @coreos.com --api_key
```
+Note: prefixing the command with a space will avoid recording your API key
+into your bash history if `$HISTCONTROL` is `ignorespace` or `ignoreboth`.
+
## Tips and Tricks
-We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
\ No newline at end of file
+We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
From d1c74dae3459ec49b517cadcca5315cc94738d4f Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 22 May 2014 19:33:27 -0700
Subject: [PATCH 0087/1291] fix(*): Update image download URLs
---
.../bare-metal/booting-with-ipxe/index.md | 8 +++-----
.../bare-metal/booting-with-pxe/index.md | 16 ++++++++--------
running-coreos/cloud-providers/vultr/index.md | 3 +--
running-coreos/platforms/eucalyptus/index.md | 6 +++---
running-coreos/platforms/iso/index.md | 8 ++++----
running-coreos/platforms/libvirt/index.md | 4 ++--
running-coreos/platforms/openstack/index.md | 4 ++--
running-coreos/platforms/qemu/index.md | 6 +++---
running-coreos/platforms/vmware/index.md | 4 ++--
.../distributors/notes-for-distributors/index.md | 8 ++++----
10 files changed, 32 insertions(+), 35 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index d8e9847aa..7b9ef47ec 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -35,8 +35,7 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
@@ -81,7 +79,7 @@ Immediately iPXE should download your boot script URL and start grabbing the imag
```
${YOUR_BOOT_URL}... ok
-http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_pxe.vmlinuz... 98%
+http://alpha.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz... 98%
```
After a few moments of downloading CoreOS should boot normally.
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 206472439..645582fec 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -84,10 +84,10 @@ PXE booted machines cannot currently update themselves when new versions are rel
The coreos_production_pxe.vmlinuz.sig and coreos_production_pxe_image.cpio.gz.sig files can be used to verify the downloaded files.
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 200613668..371639321 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -23,8 +23,7 @@ A sample script will look like this :
```
#!ipxe
-set coreos-version alpha
-set base-url http://storage.core-os.net/coreos/amd64-usr/${coreos-version}
+set base-url http://alpha.release.core-os.net/amd64-usr/current
kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="YOUR_PUBLIC_KEY_HERE"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
boot
diff --git a/running-coreos/platforms/eucalyptus/index.md b/running-coreos/platforms/eucalyptus/index.md
index 108b69463..819e26502 100644
--- a/running-coreos/platforms/eucalyptus/index.md
+++ b/running-coreos/platforms/eucalyptus/index.md
@@ -20,10 +20,10 @@ In order to convert the image you will need to install ```qemu-img``` with your
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
```
-$ wget -q http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
+$ wget -q http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
$ qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
$ euca-bundle-image -i coreos_production_openstack_image.raw -r x86_64 -d /var/tmp
@@ -74,4 +74,4 @@ core@10-0-0-3 ~ $
## Using CoreOS
Now that you have a machine booted it is time to play around.
-Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
\ No newline at end of file
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
diff --git a/running-coreos/platforms/iso/index.md b/running-coreos/platforms/iso/index.md
index 6ca6c6837..17792bbdc 100644
--- a/running-coreos/platforms/iso/index.md
+++ b/running-coreos/platforms/iso/index.md
@@ -20,8 +20,8 @@ The latest CoreOS ISOs can be downloaded from the image storage site:
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
All of the files necessary to verify the image can be found on the storage site.
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 867fb88ab..780ded47a 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -26,13 +26,13 @@ to substitute that path if you use another one.
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
We start by downloading the most recent disk image:
mkdir -p /var/lib/libvirt/images/coreos0
cd /var/lib/libvirt/images/coreos0
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_qemu_image.img.bz2
+ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2
bunzip2 coreos_production_qemu_image.img.bz2
## Virtual machine configuration
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index 63f5d70d0..f34765307 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -21,10 +21,10 @@ into the glance image store.
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
```
-$ wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
+$ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
$ glance image-create --name CoreOS \
--container-format bare \
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index 0dee1f641..4ba9628c1 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -75,14 +75,14 @@ image.
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
There are two files you need: the disk image (provided in qcow2
format) and the wrapper shell script to start QEMU.
mkdir coreos; cd coreos
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_qemu.sh
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img
+ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu.sh
+ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img
chmod +x coreos_production_qemu.sh
Starting is as simple as:
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index 95c5c0e11..be945a1ac 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -22,12 +22,12 @@ you will need to launch the `coreos_developer_vmware_insecure.vmx` file to creat
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected through the `storage.core-os.net` below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
This is a rough sketch that should work on OS X and Linux:
```
-curl -LO http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_vmware_insecure.zip
+curl -LO http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip
unzip coreos_production_vmware_insecure.zip -d coreos_production_vmware_insecure
cd coreos_production_vmware_insecure
open coreos_production_vmware_insecure.vmx
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 90cef1307..6e0377c87 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -10,14 +10,14 @@ weight: 5
## Importing Images
-Images of CoreOS are hosted at `http://storage.core-os.net/coreos/amd64-usr/`. At this URL there are directories for each individual version of CoreOS but also images that have promoted to a channel like master, alpha, beta, etc.
+Images of CoreOS alpha releases are hosted at `http://alpha.release.core-os.net/amd64-usr/`. There are directories for each release version, as well as a `current` directory containing a copy of the latest release. Similarly, beta releases can be found at `http://beta.release.core-os.net/amd64-usr/`.
-If you are importing images for use inside of your environment it is recommended that you import from a URL in the following format `http://storage.core-os.net/coreos/amd64-usr/${CHANNEL}/`. For example to grab the alpha OpenStack version of CoreOS you can import `http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number as well.
+If you are importing images for use inside of your environment it is recommended that you import from the `current` directory. For example to grab the alpha OpenStack version of CoreOS you can import `gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number.
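As a sketch of using `version.txt` to label an image, this snippet parses a local copy of the file rather than fetching it over the network; the assumption (not stated in the original) is that the file contains `KEY=VALUE` lines including `COREOS_VERSION`:

```shell
# Sample local copy of version.txt (assumed KEY=VALUE format).
cat > version.txt <<'EOF'
COREOS_VERSION=321.0.0
COREOS_BUILD=321
EOF

# Extract the version string to use as an image label.
VERSION=$(sed -n 's/^COREOS_VERSION=//p' version.txt)
echo "CoreOS $VERSION"
```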
It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2
- wget http://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_openstack_image.img.bz2.sig
+ wget gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
+ wget gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
gpg --verify coreos_production_openstack_image.img.bz2.sig
[signing-key]: {{site.url}}/security/image-signing-key
From 4a1d9ff53cf05773d80f4cbff1a5930d5b7b2d06 Mon Sep 17 00:00:00 2001
From: Burke Libbey
Date: Fri, 23 May 2014 00:30:33 -0400
Subject: [PATCH 0088/1291] fix(building-development-images): Update to reflect
reality
---
.../sdk/building-development-images/index.md | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/sdk-distributors/sdk/building-development-images/index.md b/sdk-distributors/sdk/building-development-images/index.md
index 3695f4aff..8b1f5a182 100644
--- a/sdk-distributors/sdk/building-development-images/index.md
+++ b/sdk-distributors/sdk/building-development-images/index.md
@@ -22,11 +22,12 @@ start_devserver --port 8080
NOTE: This port will need to be internet accessible.
-2. Run /usr/local/bin/gmerge from your VM and ensure that the settings in
- `/etc/lsb-release` point to your workstation IP/hostname and port
+2. Run `/usr/bin/gmerge` from your VM and ensure that the settings in
+ `/etc/coreos/update.conf` point to your workstation IP/hostname and port.
+ You'll need to set `DEVSERVER` and `COREOS_RELEASE_BOARD` (likely `amd64-usr`).
```
-/usr/local/bin/gmerge coreos-base/update_engine
+/usr/bin/gmerge coreos-base/update_engine
```
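For reference, an illustrative `/etc/coreos/update.conf` for this setup might look like the following; the IP address and port are placeholders for your own workstation running `start_devserver`:

```ini
# Example values only; point DEVSERVER at the workstation
# running start_devserver (see step 1 above).
DEVSERVER=http://192.168.1.100:8080
COREOS_RELEASE_BOARD=amd64-usr
```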
### Updating an Image with Update Engine
@@ -77,4 +78,4 @@ git push bump-go
## Tips and Tricks
-We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
\ No newline at end of file
+We've compiled a [list of tips and tricks](/docs/sdk-distributors/sdk/tips-and-tricks) that can make working with the SDK a bit easier.
From 388b871151691ae361d98c306c3a5c55a0c457d0 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 23 May 2014 12:01:02 -0700
Subject: [PATCH 0089/1291] fix(quickstart): clarity around example config
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index fc1c5b445..f9d7b832d 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -20,7 +20,7 @@ ssh core@an.ip.compute-1.amazonaws.com
The first building block of CoreOS is service discovery with **etcd** ([docs][etcd-docs]). Data stored in etcd is distributed across all of your machines running CoreOS. For example, each of your app containers can announce itself to a proxy container, which would automatically know which machines should receive traffic. Building service discovery into your application allows you to add more machines and scale your services seamlessly.
-If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) from a guide above, etcd is automatically started on boot. The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
+If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) from a guide linked in the first paragraph, etcd is automatically started on boot. The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
Set a key `message` with value `Hello world`:
From 639b3c88a63d574c0a5b02e1b57d3a65a7783e27 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Fri, 23 May 2014 12:02:25 -0700
Subject: [PATCH 0090/1291] fix release doc
---
sdk-distributors/sdk/building-production-images/index.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/sdk-distributors/sdk/building-production-images/index.md b/sdk-distributors/sdk/building-production-images/index.md
index 1a2ef270a..358a73b25 100644
--- a/sdk-distributors/sdk/building-production-images/index.md
+++ b/sdk-distributors/sdk/building-production-images/index.md
@@ -32,7 +32,7 @@ the following steps:
2. Sync/checkout source, excluding local changes: `repo sync --detach`
3. In the scripts directory: `./tag_release --push`
-That was far to easy, if you need to do it the hard way try this:
+That was far too easy; if you need to do it the hard way, try this:
1. Make sure you are on the master branch: `repo init -b master`
2. Sync/checkout source, excluding local changes: `repo sync --detach`
@@ -113,7 +113,7 @@ Automated release builds are triggered by pushes to the `dev-channel`
branch in the manifest repository.
Note: In the future builds will be triggered by pushing new tags instead
-of using the `dev-channel` branch; the branch only exists do to a limitation
+of using the `dev-channel` branch; the branch only exists due to a limitation
of the current buildbot deployment.
## Pushing updates into roller
@@ -141,7 +141,7 @@ unzip coreos_production_update.zip
```
Note: prefixing the command with a space will avoid recording your API key
-into your bash history if `$HISTCONTROL` is `ignorespace` or `ignoreboth`.
+in your bash history if `$HISTCONTROL` is `ignorespace` or `ignoreboth`.
## Tips and Tricks
From 2590340e1110c23a72c2cff48dd7d49261f65fbf Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Fri, 23 May 2014 12:05:40 -0700
Subject: [PATCH 0091/1291] fix http urls
---
.../distributors/notes-for-distributors/index.md | 6 +++---
sdk-distributors/sdk/building-production-images/index.md | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 6e0377c87..4e1241894 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -12,12 +12,12 @@ weight: 5
Images of CoreOS alpha releases are hosted at `http://alpha.release.core-os.net/amd64-usr/`. There are directories for each release version, as well as a `current` directory containing a copy of the latest release. Similarly, beta releases can be found at `http://beta.release.core-os.net/amd64-usr/`.
-If you are importing images for use inside of your environment it is recommended that you import from the `current` directory. For example to grab the alpha OpenStack version of CoreOS you can import `gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number.
+If you are importing images for use inside of your environment it is recommended that you import from the `current` directory. For example to grab the alpha OpenStack version of CoreOS you can import `http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2`. There is a `version.txt` file in this directory which you can use to label the image with a version number.
It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
- wget gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
- wget gs://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
+ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
+ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
gpg --verify coreos_production_openstack_image.img.bz2.sig
[signing-key]: {{site.url}}/security/image-signing-key
diff --git a/sdk-distributors/sdk/building-production-images/index.md b/sdk-distributors/sdk/building-production-images/index.md
index 358a73b25..9b7c83656 100644
--- a/sdk-distributors/sdk/building-production-images/index.md
+++ b/sdk-distributors/sdk/building-production-images/index.md
@@ -130,7 +130,7 @@ to generate the payload is `coreos_production_update.bin.bz2`.
As an example, to publish the insecurely signed payload:
```
-URL=gs://builds.release.core-os.net/alpha/amd64-usr/321.0.0
+URL=http://builds.release.core-os.net/alpha/amd64-usr/321.0.0
cd $(mktemp -d)
gsutil -m cp $URL/coreos_production_update* ./
gpg --verify coreos_production_update.zip.sig
From 9519a0a901e0a5b1c0acf6415aa3374b7dba0f30 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 23 May 2014 12:11:44 -0700
Subject: [PATCH 0092/1291] fix(quickstart): suggest using a cluster
---
quickstart/index.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index f9d7b832d..a46492fd3 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -6,9 +6,11 @@ title: CoreOS Quick Start
# Quick Start
-If you don't have a CoreOS machine running, check out the guides on running CoreOS on [Vagrant][vagrant-guide], [Amazon EC2][ec2-guide], [QEMU/KVM][qemu-guide], [VMware][vmware-guide] and [OpenStack][openstack-guide]. With either of these guides you will have a machine up and running in a few minutes.
+If you don't have a CoreOS machine running, check out the guides on running CoreOS on [Vagrant][vagrant-guide], [Amazon EC2][ec2-guide], [QEMU/KVM][qemu-guide], [VMware][vmware-guide] and [OpenStack][openstack-guide]. With any of these guides you will have a machine up and running in a few minutes.
-CoreOS gives you three essential tools: service discovery, container management and process management. Let's try each of them out.
+It's highly recommended that you set up a cluster of at least 3 machines — it's not as much fun on a single machine. If you don't want to break the bank, [Vagrant][vagrant-guide] allows you to run an entire cluster on your laptop. For a cluster to be properly bootstrapped, you have to provide cloud-config via user-data, which is covered in each platform's guide.
+
+CoreOS gives you three essential tools: service discovery, container management and process management. Let's try each of them out.
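The user-data mentioned above is typically a cloud-config file. A minimal sketch for bootstrapping an etcd cluster might look like the following, where `<token>` is a placeholder for a discovery token you generate yourself:

```yaml
#cloud-config

coreos:
  etcd:
    # Generate a fresh token for each cluster: https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```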
First, connect to a CoreOS machine via SSH as the user `core`. For example, on Amazon, use:
From affb9d24f49f4137538dae7a5730652db44753b3 Mon Sep 17 00:00:00 2001
From: Lars Smit
Date: Thu, 29 May 2014 11:52:25 +0200
Subject: [PATCH 0093/1291] Fixed spelling error on reading the system log
---
cluster-management/debugging/reading-the-system-log/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cluster-management/debugging/reading-the-system-log/index.md b/cluster-management/debugging/reading-the-system-log/index.md
index 9183e0ac5..131674836 100644
--- a/cluster-management/debugging/reading-the-system-log/index.md
+++ b/cluster-management/debugging/reading-the-system-log/index.md
@@ -26,7 +26,7 @@ Dec 22 00:10:21 localhost kernel: Linux version 3.11.7+ (buildbot@10.10.10.10) (
...
1000s more lines
```
-## Read Entires for a Specific Service
+## Read Entries for a Specific Service
Read entries generated by a specific unit:
@@ -73,4 +73,4 @@ journalctl -u apache.service -f
```
#### More Information
-Getting Started with systemd
\ No newline at end of file
+Getting Started with systemd
From 855fc2dc45021188d76bc8023b0908850adefbb3 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 10:29:42 -0700
Subject: [PATCH 0094/1291] fix(rackspace): list alpha and beta channels
---
.../cloud-providers/rackspace/index.md | 26 +++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index fec4b08ab..7613c6312 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -17,10 +17,11 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.
@@ -41,6 +42,27 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
+
+
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.data.beta-channel.rackspace-version}}.
From 6cc9600d90532f4e51abbf15b435920598534539 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:13:19 -0700
Subject: [PATCH 0095/1291] feat(cluster-management): add syntax highlighting
hints
---
.../debugging/install-debugging-tools/index.md | 10 +++++-----
.../debugging/reading-the-system-log/index.md | 12 ++++++------
.../scaling/adding-disk-space/index.md | 6 +++---
cluster-management/setup/adding-users/index.md | 8 ++++----
cluster-management/setup/mounting-storage/index.md | 4 ++--
.../setup/network-config-with-networkd/index.md | 8 ++++----
cluster-management/setup/switching-channels/index.md | 8 ++++----
cluster-management/setup/update-strategies/index.md | 8 ++++----
8 files changed, 32 insertions(+), 32 deletions(-)
diff --git a/cluster-management/debugging/install-debugging-tools/index.md b/cluster-management/debugging/install-debugging-tools/index.md
index 7ed9fb9a1..db8a493c3 100644
--- a/cluster-management/debugging/install-debugging-tools/index.md
+++ b/cluster-management/debugging/install-debugging-tools/index.md
@@ -15,13 +15,13 @@ You can use common debugging tools like tcpdump or strace with Toolbox. Using th
By default, Toolbox uses the stock Fedora docker container. To start using it, simply run:
-```
+```sh
/usr/bin/toolbox
```
You're now in the namespace of Fedora and can install any software you'd like via `yum`. For example, if you'd like to use `tcpdump`:
-```
+```sh
[root@srv-3qy0p ~]# yum install tcpdump
[root@srv-3qy0p ~]# tcpdump -i ens3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
@@ -32,7 +32,7 @@ listening on ens3, link-type EN10MB (Ethernet), capture size 65535 bytes
Create a `.toolboxrc` in the user's home folder to use a specific docker image:
-```
+```sh
$ cat .toolboxrc
TOOLBOX_DOCKER_IMAGE=index.example.com/debug
TOOLBOX_USER=root
@@ -45,13 +45,13 @@ Pulling repository index.example.com/debug
Advanced users can SSH directly into a toolbox by setting up an `/etc/passwd` entry:
-```
+```sh
useradd bob -m -p '*' -s /usr/bin/toolbox
```
To test, SSH as bob:
-```
+```sh
ssh bob@hostname.example.com
______ ____ _____
diff --git a/cluster-management/debugging/reading-the-system-log/index.md b/cluster-management/debugging/reading-the-system-log/index.md
index 131674836..8cb2fa30f 100644
--- a/cluster-management/debugging/reading-the-system-log/index.md
+++ b/cluster-management/debugging/reading-the-system-log/index.md
@@ -13,7 +13,7 @@ weight: 5
## Read the Entire Journal
-```
+```sh
$ journalctl
-- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:28:45 UTC. --
@@ -30,7 +30,7 @@ Dec 22 00:10:21 localhost kernel: Linux version 3.11.7+ (buildbot@10.10.10.10) (
Read entries generated by a specific unit:
-```
+```sh
$ journalctl -u apache.service
-- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:32:52 UTC. --
@@ -42,7 +42,7 @@ Dec 22 12:32:39 localhost docker[9772]: apache2: Could not reliably determine th
Using the `--tunnel` flag ([docs](https://github.com/coreos/fleet/blob/master/Documentation/using-the-client.md#from-an-external-host)), you can remotely read the journal for a specific unit started via [fleet]({{site.url}}/using-coreos/clustering/). This command will figure out which machine the unit is currently running on, fetch the journal and output it:
-```
+```sh
$ fleetctl --tunnel 10.10.10.10 journal apache.service
-- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:32:52 UTC. --
@@ -56,7 +56,7 @@ Dec 22 12:32:39 localhost docker[9772]: apache2: Could not reliably determine th
Reading just the entries since the last boot is an easy way to troubleshoot services that are failing to start properly:
-```
+```sh
journalctl --boot
```
@@ -64,11 +64,11 @@ journalctl --boot
You can tail the entire journal or just a specific service:
-```
+```sh
journalctl -f
```
-```
+```sh
journalctl -u apache.service -f
```
diff --git a/cluster-management/scaling/adding-disk-space/index.md b/cluster-management/scaling/adding-disk-space/index.md
index 2f146bd34..390a044b2 100644
--- a/cluster-management/scaling/adding-disk-space/index.md
+++ b/cluster-management/scaling/adding-disk-space/index.md
@@ -27,7 +27,7 @@ to use. It will work on raw, qcow2, vmdk, and most other formats. The
command accepts either an absolute size or a relative size by
adding a `+` prefix. Unit suffixes such as `G` or `M` are also supported.
-```
+```sh
# Increase the disk size by 5GB
qemu-img resize coreos_production_qemu_image.img +5G
```
be the absolute disk size; relative sizes are not supported, so be
careful to only increase the size, not shrink it. The unit
suffixes `Gb` and `Mb` are supported.
-```
+```sh
# Set the disk size to 20GB
vmware-vdiskmanager -x 20Gb coreos_developer_vmware_insecure.vmx
```
@@ -58,7 +58,7 @@ format used for importing/exporting virtual machines.
If you have no other options you can try converting the VMDK disk
image to a VDI image and configuring a new virtual machine with it:
-```
+```sh
VBoxManage clonehd old.vmdk new.vdi --format VDI
VBoxManage modifyhd new.vdi --resize 20480
```
diff --git a/cluster-management/setup/adding-users/index.md b/cluster-management/setup/adding-users/index.md
index 9db207860..9f5af65e5 100644
--- a/cluster-management/setup/adding-users/index.md
+++ b/cluster-management/setup/adding-users/index.md
@@ -14,7 +14,7 @@ You can create user accounts on a CoreOS machine manually with `useradd` or via
Managing users via cloud-config is preferred because it allows you to use the same configuration across many servers and the cloud-config file can be stored in a repo and versioned. In your cloud-config, you can specify many [different parameters]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#users) for each user. Here's an example:
-```
+```yaml
#cloud-config
users:
@@ -33,13 +33,13 @@ Check out the entire [Customize with Cloud-Config]({{site.url}}/docs/cluster-man
If you'd like to add a user manually, SSH to the machine and use the `useradd` tool. To create the user `user1`, run:
-```
+```sh
sudo useradd -p "*" -U -m user1 -G sudo
```
The `"*"` creates a user that cannot log in with a password but can log in via SSH key. `-U` creates a group for the user, `-G` adds the user to the existing `sudo` group and `-m` creates a home directory. If you'd like to add a password for the user, run:
-```
+```sh
$ sudo passwd user1
New password:
Re-enter new password:
@@ -48,7 +48,7 @@ passwd: password changed.
To assign an SSH key, run:
-```
+```sh
update-ssh-keys -u user1 user1.pem
```
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index 3876b16ce..8908e652a 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -10,7 +10,7 @@ weight: 7
Many platforms provide attached storage, but it must be mounted for you to take advantage of it. You can easily do this via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) with a `.mount` unit. Here's an example that mounts an [EC2 ephemeral disk]({{site.url}}/docs/running-coreos/cloud-providers/ec2/#instance-storage):
-```
+```yaml
#cloud-config
coreos:
@@ -34,7 +34,7 @@ Docker containers can be very large and debugging a build process makes it easy
We're going to bind mount a btrfs device to `/var/lib/docker`, where docker stores images. We can do this on the fly when the machine starts up with a oneshot unit that formats the drive and another one that runs afterwards to mount it. Be sure to hardcode the correct device or look for a device by label:
-```
+```yaml
#cloud-config
coreos:
units:
diff --git a/cluster-management/setup/network-config-with-networkd/index.md b/cluster-management/setup/network-config-with-networkd/index.md
index a75a62cb1..321acbace 100644
--- a/cluster-management/setup/network-config-with-networkd/index.md
+++ b/cluster-management/setup/network-config-with-networkd/index.md
@@ -16,7 +16,7 @@ Drop a file in `/etc/systemd/network/` or inject a file on boot via [cloud-confi
To configure a static IP on `enp2s0`, create `static.network`:
-```
+```ini
[Match]
Name=enp2s0
@@ -27,7 +27,7 @@ Gateway=192.168.0.1
Place the file in `/etc/systemd/network/`. To apply the configuration, run:
-```
+```sh
sudo systemctl restart systemd-networkd
```
@@ -37,7 +37,7 @@ If you'd like to use DHCP on all interfaces except `enp2s0`, create two files. T
#### 10-static.network
-```
+```ini
[Match]
Name=enp2s0
@@ -50,7 +50,7 @@ Put your settings-of-last-resort in `20-dhcp.network`. For example, any interfac
#### 20-dhcp.network
-```
+```ini
[Match]
Name=en*
diff --git a/cluster-management/setup/switching-channels/index.md b/cluster-management/setup/switching-channels/index.md
index c2a4da628..e30707ce6 100644
--- a/cluster-management/setup/switching-channels/index.md
+++ b/cluster-management/setup/switching-channels/index.md
@@ -14,7 +14,7 @@ CoreOS is released into beta and stable channels. New features and bug fixes are
You can switch machines between channels by creating `/etc/coreos/update.conf`:
-```
+```ini
GROUP=beta
```
@@ -22,7 +22,7 @@ GROUP=beta
The last step is to restart the update engine in order for it to pick up the changed channel:
-```
+```sh
sudo systemctl restart update-engine
```
@@ -30,13 +30,13 @@ sudo systemctl restart update-engine
After the update engine is restarted, the machine should check for an update within an hour. You can view the update engine log if you'd like to see the requests that are being made to the update service:
-```
+```sh
journalctl -f -u update-engine
```
For reference, you can find the current version:
-```
+```sh
cat /etc/os-release
```
diff --git a/cluster-management/setup/update-strategies/index.md b/cluster-management/setup/update-strategies/index.md
index e69c2b1c7..1fea426f7 100644
--- a/cluster-management/setup/update-strategies/index.md
+++ b/cluster-management/setup/update-strategies/index.md
@@ -25,7 +25,7 @@ It's important to note that updates are always downloaded to the passive partiti
The update strategy is defined in [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos):
-```
+```yaml
#cloud-config
coreos:
update:
@@ -46,7 +46,7 @@ The `etcd-lock` strategy mandates that each machine acquire and hold a reboot lo
The number of machines allowed to reboot simultaneously is configurable via a command line utility:
-```
+```sh
$ locksmithctl set-max 4
Old: 1
New: 4
@@ -56,7 +56,7 @@ This setting is stored in etcd so it won't have to be configured for subsequent
To view the number of available slots and find out which machines in the cluster are holding locks, run:
-```
+```sh
$ locksmithctl status
Available: 0
Max: 1
@@ -67,7 +67,7 @@ MACHINE ID
If needed, you can manually clear a lock by providing the machine ID:
-```
+```sh
locksmithctl unlock 69d27b356a94476da859461d3a3bc6fd
```
From 453eba2a3914200acd9534fb9ad1eb864caa09cb Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:15:03 -0700
Subject: [PATCH 0096/1291] feat(distributed-configuration): add syntax
highlighting hints
---
.../customize-etcd-unit/index.md | 4 +-
.../getting-started-with-etcd/index.md | 42 +++++++++----------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/distributed-configuration/customize-etcd-unit/index.md b/distributed-configuration/customize-etcd-unit/index.md
index 9e6175acf..715dace54 100644
--- a/distributed-configuration/customize-etcd-unit/index.md
+++ b/distributed-configuration/customize-etcd-unit/index.md
@@ -21,7 +21,7 @@ We need to create our drop-in unit in `/run/systemd/system/etcd.service.d/`. If
#### 30-certificates.conf
-```
+```ini
[Service]
# Client Env Vars
Environment=ETCD_CA_FILE=/path/to/CA.pem
@@ -39,7 +39,7 @@ You'll have to put these files on disk somewhere. To do this on each of your mac
Cloud-config has a parameter that will place the contents of a file on disk. We're going to use this to add our drop-in unit as well as the certificate files.
-```
+```yaml
#cloud-config
write_files:
diff --git a/distributed-configuration/getting-started-with-etcd/index.md b/distributed-configuration/getting-started-with-etcd/index.md
index 5d9ad1bf4..85fd97a07 100644
--- a/distributed-configuration/getting-started-with-etcd/index.md
+++ b/distributed-configuration/getting-started-with-etcd/index.md
@@ -21,7 +21,7 @@ The HTTP-based API is easy to use. This guide will show both `etcdctl` and `curl
From a CoreOS machine, set a key `message` with value `Hello`:
-```
+```sh
$ etcdctl set /message Hello
Hello
```
@@ -33,12 +33,12 @@ $ curl -L -X PUT http://127.0.0.1:4001/v2/keys/message -d value="Hello"
Read the value of `message` back:
-```
+```sh
$ etcdctl get /message
Hello
```
-```
+```sh
$ curl -L http://127.0.0.1:4001/v2/keys/message
{"action":"get","node":{"key":"/message","value":"Hello","modifiedIndex":4,"createdIndex":4}}
```
@@ -47,12 +47,12 @@ If you followed a guide to set up more than one CoreOS machine, you can SSH into
To delete the key run:
-```
+```sh
$ etcdctl rm /message
```
-```
+```sh
$ curl -L -X DELETE http://127.0.0.1:4001/v2/keys/message
{"action":"delete","node":{"key":"/message","modifiedIndex":19,"createdIndex":4}}
```
@@ -69,26 +69,26 @@ Let's pretend we're setting up a service that consists of a few containers that
Directories are automatically created when a key is placed inside. Let's call our directory `foo-service` and create a key with information about a container:
-```
+```sh
$ etcdctl mkdir /foo-service
Cannot print key [/foo-service: Is a directory]
$ etcdctl set /foo-service/container1 localhost:1111
localhost:1111
```
-```
+```sh
$ curl -L -X PUT http://127.0.0.1:4001/v2/keys/foo-service/container1 -d value="localhost:1111"
{"action":"set","node":{"key":"/foo-service/container1","value":"localhost:1111","modifiedIndex":17,"createdIndex":17}}
```
Read the `foo-service` directory to see the entry:
-```
+```sh
$ etcdctl ls /foo-service
/foo-service/container1
```
-```
+```sh
$ curl -L http://127.0.0.1:4001/v2/keys/foo-service
{"action":"get","node":{"key":"/foo-service","dir":true,"nodes":[{"key":"/foo-service/container1","value":"localhost:1111","modifiedIndex":17,"createdIndex":17}],"modifiedIndex":17,"createdIndex":17}}
```
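The implicit-directory behavior described above can be sketched as a nested dict. This is a toy model for illustration only, not the etcd API; `set_key` and `list_dir` are hypothetical helpers:

```python
def set_key(tree, path, value):
    """Place a key; parent directories are created implicitly, as in etcd."""
    parts = [p for p in path.split("/") if p]
    node = tree
    for part in parts[:-1]:
        node = node.setdefault(part, {})  # auto-create the directory level
    node[parts[-1]] = value

def list_dir(tree, path):
    """Return the full paths of the entries directly under a directory."""
    parts = [p for p in path.split("/") if p]
    node = tree
    for part in parts:
        node = node[part]
    return sorted(path.rstrip("/") + "/" + key for key in node)

tree = {}
set_key(tree, "/foo-service/container1", "localhost:1111")
print(list_dir(tree, "/foo-service"))  # ['/foo-service/container1']
```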
@@ -97,36 +97,36 @@ $ curl -L http://127.0.0.1:4001/v2/keys/foo-service
Now let's try watching the `foo-service` directory for changes, just like our proxy would have to. First, open up another shell on a CoreOS host in the cluster. In one window, start watching the directory and in the other window, add another key `container2` with the value `localhost:2222` into the directory. This command shouldn't output anything until the key has changed. Many events can trigger a change, including a new, updated, deleted or expired key.
-```
+```sh
$ etcdctl watch /foo-service --recursive
```
-```
+```sh
$ curl -L http://127.0.0.1:4001/v2/keys/foo-service?wait=true\&recursive=true
```
In the other window, let's pretend a new container has started and announced itself to the proxy by running:
-```
+```sh
$ etcdctl set /foo-service/container2 localhost:2222
localhost:2222
```
-```
+```sh
$ curl -L -X PUT http://127.0.0.1:4001/v2/keys/foo-service/container2 -d value="localhost:2222"
{"action":"set","node":{"key":"/foo-service/container2","value":"localhost:2222","modifiedIndex":23,"createdIndex":23}}
```
In the first window, you should get the notification that the key has changed. In a real application, this would trigger reconfiguration.
-```
+```sh
$ etcdctl watch /foo-service --recursive
localhost:2222
```
-```
+```sh
$ curl -L http://127.0.0.1:4001/v2/keys/foo-service?wait=true\&recursive=true
{"action":"set","node":{"key":"/foo-service/container2","value":"localhost:2222","modifiedIndex":23,"createdIndex":23}}
```
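The watch-then-react pattern above — one window blocks until another window writes a key — can be sketched with a queue per watcher. This is a toy illustration of the semantics, not the etcd client API; the `KeyStore` class is hypothetical:

```python
import queue
import threading

class KeyStore:
    """Toy key/value store with etcd-style watch notifications."""
    def __init__(self):
        self._data = {}
        self._watchers = []  # queues waiting for the next change

    def watch(self):
        q = queue.Queue()
        self._watchers.append(q)
        return q

    def set(self, key, value):
        self._data[key] = value
        for q in self._watchers:  # notify every blocked watcher
            q.put({"action": "set", "key": key, "value": value})

store = KeyStore()
events = store.watch()

# Simulate the "other window" announcing a new container.
t = threading.Thread(target=store.set,
                     args=("/foo-service/container2", "localhost:2222"))
t.start()
t.join()

change = events.get(timeout=1)  # blocks until the key changes
print(change["value"])  # localhost:2222
```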
@@ -135,12 +135,12 @@ $ curl -L http://127.0.0.1:4001/v2/keys/foo-service?wait=true\&recursive=true
etcd can be used as a centralized coordination service and provides `TestAndSet` functionality as the building block of such a service. You must provide the previous value along with your new value. If the previous value matches the current value, the operation will succeed.
-```
+```sh
$ etcdctl set /message "Hi" --swap-with-value "Hello"
Hi
```
-```
+```sh
$ curl -L -X PUT http://127.0.0.1:4001/v2/keys/message?prevValue=Hello -d value=Hi
{"action":"compareAndSwap","node":{"key":"/message","value":"Hi","modifiedIndex":28,"createdIndex":27}}
```
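The compare-and-swap rule — the write succeeds only when the supplied previous value matches the current one — can be sketched in a few lines of Python. A hedged illustration over a plain dict, not the etcd implementation:

```python
def compare_and_swap(store, key, prev_value, new_value):
    """Succeed only if the key's current value matches prev_value."""
    if store.get(key) != prev_value:
        return False  # stale previous value: reject the write
    store[key] = new_value
    return True

store = {"/message": "Hello"}
compare_and_swap(store, "/message", "Hello", "Hi")   # matches -> swapped
compare_and_swap(store, "/message", "Hello", "Hey")  # stale -> rejected
print(store["/message"])  # Hi
```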
@@ -149,26 +149,26 @@ $ curl -L -X PUT http://127.0.0.1:4001/v2/keys/message?prevValue=Hello -d value=
You can optionally set a TTL for a key to expire in a certain number of seconds. Setting a TTL of 20 seconds:
-```
+```sh
$ etcdctl set /foo "Expiring Soon" --ttl 20
Expiring Soon
```
The `curl` response will contain an absolute timestamp of when the key will expire and a relative number of seconds until that timestamp:
-```
+```sh
$ curl -L -X PUT http://127.0.0.1:4001/v2/keys/foo?ttl=20 -d value=bar
{"action":"set","node":{"key":"/foo","value":"bar","expiration":"2014-02-10T19:54:49.357382223Z","ttl":20,"modifiedIndex":31,"createdIndex":31}}
```
If you request a key that has already expired, you will get back error code 100:
-```
+```sh
$ etcdctl get /foo
Error: 100: Key not found (/foo) [32]
```
-```
+```sh
$ curl -L http://127.0.0.1:4001/v2/keys/foo
{"errorCode":100,"message":"Key not found","cause":"/foo","index":32}
```
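The TTL behavior — a key silently disappears once its deadline passes, and reads then return error 100 — can be sketched by storing a monotonic-clock deadline alongside each value. A toy model for illustration, not how etcd stores expirations internally:

```python
import time

class TTLStore:
    """Toy store where keys can carry an expiration deadline."""
    def __init__(self):
        self._data = {}  # key -> (value, deadline or None)

    def set(self, key, value, ttl=None):
        deadline = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, deadline)

    def get(self, key):
        value, deadline = self._data.get(key, (None, None))
        if value is None or (deadline is not None and time.monotonic() > deadline):
            self._data.pop(key, None)  # lazily drop the expired entry
            raise KeyError("100: Key not found (%s)" % key)
        return value

s = TTLStore()
s.set("/foo", "Expiring Soon", ttl=0.05)
print(s.get("/foo"))  # Expiring Soon
time.sleep(0.1)
try:
    s.get("/foo")
except KeyError as e:
    print(e.args[0])  # 100: Key not found (/foo)
```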
From 25ca380756cd49f35f82d6e4aedb9a62e50390e0 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:19:26 -0700
Subject: [PATCH 0097/1291] feat(launching-containers): add syntax highlighting
hints
---
.../building/customizing-docker/index.md | 32 +++++++++----------
.../getting-started-with-docker/index.md | 26 +++++++--------
.../getting-started-with-systemd/index.md | 12 +++----
.../launching-containers-fleet/index.md | 18 +++++------
.../launching/overview-of-systemctl/index.md | 14 ++++----
5 files changed, 51 insertions(+), 51 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index a0494f876..f3b956e1f 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -14,7 +14,7 @@ The docker systemd unit can be customized by overriding the unit that ships with
Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a tcp socket on port 4243.
-```
+```ini
[Unit]
Description=Docker Socket for the API
@@ -29,7 +29,7 @@ WantedBy=sockets.target
Then enable this new socket:
-```
+```sh
systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
@@ -41,7 +41,7 @@ docker -H tcp://127.0.0.1:4243 ps
To enable the remote API on every CoreOS machine in a cluster, use [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). We need to provide the new socket file and docker's socket activation support will automatically start using the socket:
-```
+```yaml
#cloud-config
coreos:
@@ -72,7 +72,7 @@ coreos:
To keep access to the port local, replace the `ListenStream` configuration above with:
-```
+```yaml
[Socket]
ListenStream=127.0.0.1:4243
```
@@ -85,26 +85,26 @@ Docker containers can be very large and debugging a build process makes it easy
First, copy the existing unit from the read-only file system into the read/write file system, so we can edit it:
-```
+```sh
cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
```
Edit the `ExecStart` line to add the -D flag:
-```
+```ini
ExecStart=/usr/bin/docker -d -s=btrfs -r=false -H fd:// -D
```
Now lets tell systemd about the new unit and restart docker:
-```
+```sh
systemctl daemon-reload
systemctl restart docker
```
To test our debugging stream, run a docker command and then read the systemd journal, which should contain the output:
-```
+```sh
docker ps
journalctl -u docker
```
@@ -113,7 +113,7 @@ journalctl -u docker
If you need to modify a flag across many machines, you can provide the new unit with cloud-config:
-```
+```yaml
#cloud-config
coreos:
@@ -139,19 +139,19 @@ coreos:
If you're operating in a locked down networking environment, you can specify an HTTP proxy for docker to use via an environment variable. First, copy the existing unit from the read-only file system into the read/write file system, so we can edit it:
-```
+```sh
cp /usr/lib/systemd/system/docker.service /etc/systemd/system/
```
Add a line that sets the environment variable in the unit above the `ExecStart` command:
-```
+```ini
Environment="HTTP_PROXY=http://proxy.example.com:8080"
```
To apply the change, reload the unit and restart docker:
-```
+```sh
systemctl daemon-reload
systemctl restart docker
```
@@ -160,7 +160,7 @@ systemctl restart docker
The easiest way to use this proxy on all of your machines is via cloud-config:
-```
+```yaml
#cloud-config
coreos:
@@ -187,7 +187,7 @@ coreos:
A JSON file `.dockercfg` can be created in your home directory that holds authentication information for a public or private docker registry. The auth token is a base64 encoded string: `base64(<username>:<password>)`. Here's what an example looks like with credentials for docker's public index and a private index:
-```
+```json
{
"https://index.docker.io/v1/": {
"auth": "xXxXxXxXxXx=",
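Generating the auth token is just base64 over `username:password`. A short sketch, with hypothetical example credentials:

```python
import base64

def dockercfg_auth(username, password):
    """Build the base64(username:password) token used in .dockercfg."""
    raw = "%s:%s" % (username, password)
    return base64.b64encode(raw.encode("utf-8")).decode("ascii")

# "core"/"secret" are placeholder credentials for illustration.
print(dockercfg_auth("core", "secret"))  # Y29yZTpzZWNyZXQ=
```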
@@ -202,7 +202,7 @@ A json file `.dockercfg` can be created in your home directory that holds authen
The last step is to tell your systemd units to run as the core user in order for docker to use the credentials we just set up. This is done in the service section of the unit:
-```
+```ini
[Unit]
Description=My Container
After=docker.service
@@ -219,7 +219,7 @@ WantedBy=multi-user.target
Since each machine in your cluster is going to have to pull images, cloud-config is the easiest way to write the config file to disk.
-```
+```yaml
#cloud-config
write_files:
- path: /home/core/.dockercfg
diff --git a/launching-containers/building/getting-started-with-docker/index.md b/launching-containers/building/getting-started-with-docker/index.md
index 062194d27..f2411cea8 100644
--- a/launching-containers/building/getting-started-with-docker/index.md
+++ b/launching-containers/building/getting-started-with-docker/index.md
@@ -21,13 +21,13 @@ docker has a [straightforward CLI](http://docs.docker.io/en/latest/reference/com
Launching a container is as simple as `docker run` + the image name you would like to run + the command to run within the container. If the image doesn't exist on your local machine, docker will attempt to fetch it from the public image registry. Later we'll explore how to use docker with a private registry. It's important to note that containers are designed to stop once the command executed within them has exited. For example, if you ran `/bin/echo hello world` as your command, the container will start, print hello world, and then stop:
-```
+```sh
docker run ubuntu /bin/echo hello world
```
Let's launch an Ubuntu container and install Apache inside of it using the bash prompt:
-```
+```sh
docker run -t -i ubuntu /bin/bash
```
@@ -45,7 +45,7 @@ It's important to note that you can commit using any username and image name loc
Commit the container with the container ID, your username, and the name `apache`:
-```
+```sh
docker commit 72d468f455ea coreos/apache
```
@@ -55,13 +55,13 @@ The overlay filesystem works similar to git: our image now builds off of the `ub
Now we have our Ubuntu container with Apache running in one shell and an image of that container sitting on disk. Let's launch a new container based on that image but set it up to keep running indefinitely. The basic syntax looks like this, but we need to configure a few additional options that we'll fill in as we go:
-```
+```sh
docker run [options] [image] [process]
```
The first step is to tell docker that we want to run our `coreos/apache` image:
-```
+```sh
docker run [options] coreos/apache [process]
```
@@ -69,7 +69,7 @@ docker run [options] coreos/apache [process]
The most important option is to run the container in detached mode with the `-d` flag. This will output the container ID to show that the command was successful, but nothing else. At any time you can run `docker ps` in the other shell to view a list of the running containers. Our command now looks like:
-```
+```sh
docker run -d coreos/apache [process]
```
@@ -77,13 +77,13 @@ docker run -d coreos/apache [process]
We need to run the apache process in the foreground, since our container will stop when the process specified in the `docker run` command stops. We can do this with the `-D` flag when starting the apache2 process:
-```
+```sh
/usr/sbin/apache2ctl -D FOREGROUND
```
Let's add that to our command:
-```
+```sh
docker run -d coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
```
@@ -97,7 +97,7 @@ Instead, create a systemd unit file to make systemd keep that container running.
The default apache install will be running on port 80. To give our container access to traffic over port 80, we use the `-p` flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include `-p 80:80` in our command:
-```
+```sh
docker run -d -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
```
@@ -107,25 +107,25 @@ You can now run this command on your CoreOS host to create the container. You sh
Earlier we downloaded the ubuntu image remotely from the docker public registry because it didn't exist on our local machine. We can also push local images to the public registry (or a private registry) very easily with the `push` command:
-```
+```sh
docker push coreos/apache
```
To push to a private repository the syntax is very similar. First, we must prefix our image with the host running our private registry instead of our username. List images by running `docker images` and insert the correct ID into the `tag` command:
-```
+```sh
docker tag f455ea72d468 registry.example.com:5000/apache
```
After tagging, the image needs to be pushed to the registry:
-```
+```sh
docker push registry.example.com:5000/apache
```
Once the image is done uploading, you should be able to start the exact same container on a different CoreOS host by running:
-```
+```sh
docker run -d -p 80:80 registry.example.com:5000/apache /usr/sbin/apache2ctl -D FOREGROUND
```
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 845c25cac..bb34f5f77 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -23,7 +23,7 @@ Each target is actually a collection of symlinks to our unit files. This is spec
On CoreOS, unit files are located within the R/W filesystem at `/etc/systemd/system`. Let's create a simple unit named `hello.service`:
-```
+```ini
[Unit]
Description=My Service
After=docker.service
@@ -46,14 +46,14 @@ The description shows up in the systemd log and a few other places. Write someth
To start a new unit, we need to tell systemd to create the symlink and then start the file:
-```
+```sh
$ sudo systemctl enable /etc/systemd/system/hello.service
$ sudo systemctl start hello.service
```
To verify the unit started, you can see the list of containers running with `docker ps` and read the unit's output with `journalctl`:
-```
+```sh
$ journalctl -f -u hello.service
-- Logs begin at Fri 2014-02-07 00:05:55 UTC. --
Feb 11 17:46:26 localhost docker[23470]: Hello World
@@ -89,7 +89,7 @@ Since our container will be started in `ExecStart`, it makes sense for our etcd
When the service is told to stop, we need to stop the docker container using its `--name` from the run command. We also need to clean up our etcd key when the container exits or the unit fails, by using `ExecStopPost`.
-```
+```ini
[Unit]
Description=My Advanced Service
After=etcd.service
@@ -129,13 +129,13 @@ Since systemd is based on symlinks, there are a few interesting tricks you can l
In our earlier example we had to hardcode our IP address when registering within etcd:
-```
+```ini
ExecStartPost=/usr/bin/etcdctl set /domains/example.com/10.10.10.123:8081 running
```
We can enhance this by using `%H` and `%i` to dynamically announce the hostname and port. Specify the port after the `@` by using two unit files named `foo@123.service` and `foo@456.service`:
-```
+```ini
ExecStartPost=/usr/bin/etcdctl set /domains/example.com/%H:%i running
```
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 7d6e1797c..c7fc3d185 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -19,7 +19,7 @@ This guide assumes you're running `fleetctl` locally from a CoreOS machine that'
Running a single container is very easy. All you need to do is provide a regular unit file without an `[Install]` section. Let's run the same unit from the [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide. First save these contents as `myapp.service` on the CoreOS machine:
-```
+```ini
[Unit]
Description=MyApp
After=docker.service
@@ -31,13 +31,13 @@ ExecStart=/usr/bin/docker run busybox /bin/sh -c "while true; do echo Hello Worl
Run the start command to start up the container on the cluster:
-```
+```sh
$ fleetctl start myapp.service
```
Now list all of the units in the cluster to see the current status. The unit should have been scheduled to a machine in your cluster:
-```
+```sh
$ fleetctl list-units
UNIT LOAD ACTIVE SUB DESC MACHINE
myapp.service loaded active running MyApp c9de9451.../10.10.1.3
@@ -45,7 +45,7 @@ myapp.service loaded active running MyApp c9de9451.../10.10.1.3
You can view all of the machines in the cluster by running `list-machines`:
-```
+```sh
$ fleetctl list-machines
MACHINE IP METADATA
148a18ff-6e95-4cd8-92da-c9de9bb90d5a 10.10.1.1 -
@@ -59,7 +59,7 @@ The main benefit of using CoreOS is to have your services run in a highly availa
First, let's write a unit file that we'll run two copies of, named `apache.1.service` and `apache.2.service`:
-```
+```ini
[Unit]
Description=My Apache Frontend
After=docker.service
@@ -75,7 +75,7 @@ X-Conflicts=apache.*.service
The `X-Conflicts` attribute tells `fleet` that these two services can't be run on the same machine, giving us high availability. Let's start both units and verify that they're on two different machines:
-```
+```sh
$ fleetctl start apache.*
$ fleetctl list-units
UNIT LOAD ACTIVE SUB DESC MACHINE
@@ -92,7 +92,7 @@ How do we route requests to these containers? The best strategy is to run a "sid
The simplest sidekick example is for [service discovery](https://github.com/coreos/fleet/blob/master/Documentation/service-discovery.md). This unit blindly announces that our container has been started. We'll run one of these for each Apache unit that's already running. Make two copies of the unit called `apache-discovery.1.service` and `apache-discovery.2.service`. Be sure to change all instances of `apache.1.service` to `apache.2.service` when you create the second unit.
-```
+```ini
[Unit]
Description=Announce Apache1
BindsTo=apache.1.service
@@ -113,7 +113,7 @@ The third is a fleet-specific property called `X-ConditionMachineOf`. This prope
Let's verify that each unit was placed onto the same machine as the Apache service it is bound to:
-```
+```sh
$ fleetctl start apache-discovery.1.service
$ fleetctl list-units
UNIT LOAD ACTIVE SUB DESC MACHINE
@@ -126,7 +126,7 @@ apache-discovery.2.service loaded active running Announce Apache2 148a18f
Now let's verify that the service discovery is working correctly:
-```
+```sh
$ etcdctl ls /services/ --recursive
/services/website
/services/website/apache1
diff --git a/launching-containers/launching/overview-of-systemctl/index.md b/launching-containers/launching/overview-of-systemctl/index.md
index daa795b3e..231ccb0e8 100644
--- a/launching-containers/launching/overview-of-systemctl/index.md
+++ b/launching-containers/launching/overview-of-systemctl/index.md
@@ -15,7 +15,7 @@ weight: 5
The first step to troubleshooting with `systemctl` is to find the status of the item in question. If you have multiple `Exec` commands in your service file, you can see which one of them is failing and view the exit code. Here's a failing service that starts a private docker registry in a container:
-```
+```sh
$ sudo systemctl status custom-registry.service
custom-registry.service - Custom Registry Service
@@ -43,17 +43,17 @@ You can see that `Process: 10171 ExecStart=/usr/bin/docker` exited with `status=
Listing all of the processes running on the box is too much information, but you can pipe the output into grep to find the services you're looking for. Here are all service files and their status:
-```
+```sh
sudo systemctl list-units | grep .service
```
## Start or Stop a Service
-```
+```sh
sudo systemctl start apache.service
```
-```
+```sh
sudo systemctl stop apache.service
```
@@ -61,7 +61,7 @@ sudo systemctl stop apache.service
This will stop the process immediately:
-```
+```sh
sudo systemctl kill apache.service
```
@@ -69,13 +69,13 @@ sudo systemctl kill apache.service
Restarting a service is as easy as:
-```
+```sh
sudo systemctl restart apache.service
```
If you're restarting a service after you changed its service file, you will need to reload all of the service files before your changes take effect:
-```
+```sh
sudo systemctl daemon-reload
```
From f513615a6da710ad9c4efe7614ab6f39a6a7b19b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:20:12 -0700
Subject: [PATCH 0098/1291] feat(quickstart): add syntax highlighting hints
---
quickstart/index.md | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index a46492fd3..8f085a3f5 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -14,7 +14,7 @@ CoreOS gives you three essential tools: service discovery, container management
First, connect to a CoreOS machine via SSH as the user `core`. For example, on Amazon, use:
-```
+```sh
ssh core@an.ip.compute-1.amazonaws.com
```
@@ -26,13 +26,13 @@ If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup
Set a key `message` with value `Hello world`:
-```
+```sh
curl -L http://127.0.0.1:4001/v1/keys/message -d value="Hello world"
```
Read the value of `message` back:
-```
+```sh
curl -L http://127.0.0.1:4001/v1/keys/message
```
@@ -48,13 +48,13 @@ The second building block, **docker** ([docs][docker-docs]), is where your appli
Run a command in the container and then stop it:
-```
+```sh
docker run busybox /bin/echo hello world
```
Open a shell prompt inside the container:
-```
+```sh
docker run -i -t busybox /bin/sh
```
@@ -68,13 +68,13 @@ The third building block of CoreOS is **systemd** ([docs][systemd-docs]) and it
First, you will need to run all of this as `root` since you are modifying system state:
-```
+```sh
sudo -i
```
Create a file called `/etc/systemd/system/hello.service`:
-```
+```ini
[Unit]
Description=My Service
After=docker.service
@@ -94,20 +94,20 @@ See the [getting started with systemd]({{site.url}}/docs/launching-containers/la
Then enable and start the unit:
-```
+```sh
sudo systemctl enable /etc/systemd/system/hello.service
sudo systemctl start hello.service
```
Your container is now started and is logging to the systemd journal. You can read the log by running:
-```
+```sh
journalctl -u hello.service -f
```
To stop the container, run:
-```
+```sh
sudo systemctl stop hello.service
```
From 77541a8baeb0beebb9d6481d31af85fbb5ea77bf Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:46:06 -0700
Subject: [PATCH 0099/1291] feat(running-coreos): add syntax highlighting hints
---
.../bare-metal/booting-with-ipxe/index.md | 20 +-
.../bare-metal/booting-with-pxe/index.md | 30 +--
.../bare-metal/installing-to-disk/index.md | 30 +--
.../cloud-providers/brightbox/index.md | 18 +-
running-coreos/cloud-providers/ec2/index.md | 4 +-
.../google-compute-engine/index.md | 12 +-
.../cloud-providers/rackspace/index.md | 14 +-
running-coreos/cloud-providers/vultr/index.md | 4 +-
running-coreos/platforms/eucalyptus/index.md | 8 +-
running-coreos/platforms/libvirt/index.md | 189 ++++++++++--------
running-coreos/platforms/openstack/index.md | 12 +-
running-coreos/platforms/qemu/index.md | 59 ++++--
running-coreos/platforms/vagrant/index.md | 18 +-
running-coreos/platforms/vmware/index.md | 13 +-
14 files changed, 240 insertions(+), 191 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 7b9ef47ec..68834b8fa 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -32,23 +32,25 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
iPXE downloads a boot script from a publicly available URL. You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a custom iPXE server.
@@ -63,21 +65,21 @@ Note: the iPXE environment won't open https links, which means you can't use [ht
First, download and boot the iPXE image.
We will use `qemu-kvm` in this guide but use whatever process you normally use for booting an ISO on your platform.
-```
+```sh
wget http://boot.ipxe.org/ipxe.iso
qemu-kvm -m 1024 ipxe.iso --curses
```
Next, press Ctrl+B to get to the iPXE prompt and type in the following commands:
-```
+```sh
iPXE> dhcp
iPXE> chain http://${YOUR_BOOT_URL}
```
Immediately, iPXE should download your boot script URL and start grabbing the images from the CoreOS storage site:
-```
+```sh
${YOUR_BOOT_URL}... ok
http://alpha.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz... 98%
```
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 645582fec..e159fc324 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -38,7 +38,7 @@ When configuring the CoreOS pxelinux.cfg there are a few kernel options that may
This is an example pxelinux.cfg file that assumes CoreOS is the only option.
You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after providing a cloud-config URL:
-```
+```ini
default coreos
prompt 1
timeout 15
@@ -53,7 +53,7 @@ label coreos
Here's a common cloud-config example which should be located at the URL from above:
-```
+```yaml
#cloud-config
coreos:
units:
@@ -82,28 +82,28 @@ PXE booted machines cannot currently update themselves when new versions are rel
In the config above you can see that a kernel image and an initramfs file are needed. Download these two files into your tftp root.
The coreos_production_pxe.vmlinuz.sig and coreos_production_pxe_image.cpio.gz.sig files can be used to verify the downloaded files.
@@ -114,7 +114,7 @@ After setting up the PXE server as outlined above you can start the target machi
The machine should grab the image from the server and boot into CoreOS.
If something goes wrong you can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev].
-```
+```sh
This is localhost.unknown_domain (Linux x86_64 3.10.10+) 19:53:36
SSH host key: 24:2e:f1:3f:5f:9c:63:e5:8c:17:47:32:f4:09:5d:78 (RSA)
SSH host key: ed:84:4d:05:e3:7d:e3:d0:b9:58:90:58:3b:99:3a:4c (DSA)
@@ -128,7 +128,7 @@ The IP address for the machine should be printed out to the terminal for conveni
If it doesn't show up immediately, press enter a few times and it should show up.
Now you can simply SSH in using public key authentication:
-```
+```sh
ssh core@10.0.2.15
```
@@ -142,7 +142,7 @@ Once booted it is possible to [install CoreOS on a local disk][install-to-disk]
If you plan on using Docker we recommend using a local btrfs filesystem but ext4 is also available if supporting Docker is not required.
For example, to setup a btrfs root filesystem on `/dev/sda`:
-```
+```sh
cfdisk -z /dev/sda
touch "/usr.squashfs (deleted)" # work around a bug in mkfs.btrfs 3.12
mkfs.btrfs -L ROOT /dev/sda1
@@ -156,7 +156,7 @@ And add `root=/dev/sda1` or `root=LABEL=ROOT` to the kernel options as documente
Similar to the [OEM partition][oem] in CoreOS disk images, PXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. Simply create a `./usr/share/oem/` directory containing `cloud-config.yml` and append it to the cpio:
-```
+```sh
mkdir -p usr/share/oem
cp cloud-config.yml ./usr/share/oem
gzip -d coreos_production_pxe_image.cpio.gz
@@ -166,7 +166,7 @@ gzip coreos_production_pxe_image.cpio
Confirm the archive looks correct and has your `run` file inside of it:
-```
+```sh
gzip -dc coreos_production_pxe_image.cpio.gz | cpio -it
./
usr.squashfs
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index ab8d7bdc5..12db464b7 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -18,7 +18,7 @@ The script is self-contained and located [on Github here](https://raw.github.com
If you have already booted CoreOS via PXE, the install script is already installed. By default the install script will attempt to install the same version and channel that was PXE-booted:
-```
+```sh
coreos-install -d /dev/sda
```
@@ -35,24 +35,30 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
If you want to ensure you are installing the latest alpha version, use the -C option:
-    coreos-install -d /dev/sda -C alpha
+```sh
+coreos-install -d /dev/sda -C alpha
+```
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
If you want to ensure you are installing the latest beta version, use the -C option:
-    coreos-install -d /dev/sda -C beta
+```sh
+coreos-install -d /dev/sda -C beta
+```
For reference here are the rest of the `coreos-install` options:
- -d DEVICE Install CoreOS to the given device.
- -V VERSION Version to install (e.g. current)
- -C CHANNEL Release channel to use (e.g. beta)
- -o OEM OEM type to install (e.g. openstack)
- -c CLOUD Insert a cloud-init config to be executed on boot.
- -t TMPDIR Temporary location with enough space to download images.
+```
+-d DEVICE Install CoreOS to the given device.
+-V VERSION Version to install (e.g. current)
+-C CHANNEL Release channel to use (e.g. beta)
+-o OEM OEM type to install (e.g. openstack)
+-c CLOUD Insert a cloud-init config to be executed on boot.
+-t TMPDIR Temporary location with enough space to download images.
+```
## Cloud Config
@@ -61,7 +67,7 @@ The easiest way to configure accounts, add systemd units, and more is via cloud
Jump over to the [docs to learn about the supported features][cloud-config].
As an example, this will install an SSH key for the default `core` user:
-```
+```yaml
#cloud-config
ssh_authorized_keys:
@@ -71,7 +77,7 @@ ssh_authorized_keys:
Pass this file to `coreos-install` via the `-c` option.
It will be installed to `/var/lib/coreos-install/user_data` and evaluated on every boot.
-```
+```sh
coreos-install -d /dev/sda -c ~/config
```
@@ -81,7 +87,7 @@ coreos-install -d /dev/sda -c ~/config
If cloud config doesn't handle something you need to do, or you just want to take a look at the root btrfs filesystem before booting your new install, just mount the ninth partition:
-```
+```sh
mount -o subvol=root /dev/sda9 /mnt/
```
diff --git a/running-coreos/cloud-providers/brightbox/index.md b/running-coreos/cloud-providers/brightbox/index.md
index 436bc000e..a42448ad6 100644
--- a/running-coreos/cloud-providers/brightbox/index.md
+++ b/running-coreos/cloud-providers/brightbox/index.md
@@ -15,7 +15,7 @@ instructions will walk you through running a CoreOS cluster on Brightbox. This g
First of all, let’s create a server group to put the new servers into:
-```
+```sh
$ brightbox groups create -n "coreos"
Creating a new server group
@@ -28,7 +28,7 @@ Creating a new server group
And then create a [firewall policy](http://brightbox.com/docs/guides/cli/firewall/) for the group using its identifier:
-```
+```sh
$ brightbox firewall-policies create -n "coreos" grp-cdl6h
id server_group name
@@ -41,7 +41,7 @@ $ brightbox firewall-policies create -n "coreos" grp-cdl6h
Now let’s define the firewall rules for this new policy. First we’ll allow ssh access in from anywhere:
-```
+```sh
$ brightbox firewall-rules create --source any --protocol tcp --dport 22 fwp-dw0n6
id protocol source sport destination dport icmp_type description
@@ -52,7 +52,7 @@ $ brightbox firewall-rules create --source any --protocol tcp --dport 22 fwp-dw0
And then we’ll allow the CoreOS etcd ports `7001` and `4001`, allowing access from only the other nodes in the group.
-```
+```sh
$ brightbox firewall-rules create --source grp-cdl6h --protocol tcp --dport 7001,4001 fwp-dw0n6
id protocol source sport destination dport icmp_type description
@@ -64,7 +64,7 @@ $ brightbox firewall-rules create --source grp-cdl6h --protocol tcp --dport 7001
And then allow all outgoing access from the servers in the group:
-```
+```sh
$ brightbox firewall-rules create --destination any fwp-dw0n6
id protocol source sport destination dport icmp_type description
@@ -77,7 +77,7 @@ $ brightbox firewall-rules create --destination any fwp-dw0n6
You can find the CoreOS image by listing all images and grepping for CoreOS:
-```
+```sh
$ brightbox images list | grep CoreOS
id owner type created_on status size name
@@ -91,7 +91,7 @@ Before building the cluster, we need to generate a unique identifier for it, whi
Any random string will do, so we’ll use the `uuid` tool here to generate one:
-```
+```sh
$ TOKEN=`uuid`
$ echo $TOKEN
@@ -100,7 +100,7 @@ $ echo $TOKEN
Then build three servers from that image in the server group we created, specifying the token as the user data:
-```
+```sh
$ brightbox servers create -i 3 --type small --name "coreos" --user-data $TOKEN --server-groups grp-cdl6h {{site.brightbox-id}}
Creating 3 small (typ-8fych) servers with image CoreOS {{site.brightbox-version}} ({{ site.brightbox-id }}) in groups grp-cdl6h with 0.05k of user data
@@ -119,7 +119,7 @@ Those servers should take just a minute to build and boot. They automatically in
If you’ve got IPv6 locally, you can SSH in directly:
-```
+```sh
$ ssh core@ipv6.srv-n8uak.gb1.brightbox.com
The authenticity of host 'ipv6.srv-n8uak.gb1.brightbox.com (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 9858158b9..1e0f3cadc 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -80,7 +80,7 @@ CoreOS allows you to configure machine parameters, launch systemd units on start
The most common cloud-config for EC2 looks like:
-```
+```yaml
#cloud-config
coreos:
@@ -116,7 +116,7 @@ coreos:
Ephemeral disks and additional EBS volumes attached to instances can be mounted with a `.mount` unit. Amazon's block storage devices are attached differently [depending on the instance type](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames). Here's the cloud-config to mount the first ephemeral disk, `xvdb` on most instance types:
-```
+```yaml
#cloud-config
coreos:
units:
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 0afc2c0c4..466d76789 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -19,7 +19,7 @@ CoreOS allows you to configure machine parameters, launch systemd units on start
The most common cloud-config for GCE looks like:
-```
+```yaml
#cloud-config
coreos:
@@ -63,7 +63,7 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
Additional disks attached to instances can be mounted with a `.mount` unit. Each disk can be accessed via `/dev/disk/by-id/google-`. Here's the cloud-config to mount a disk called `database-backup`:
-```
+```yaml
#cloud-config
coreos:
units:
@@ -85,13 +85,15 @@ To add more instances to the cluster, just launch more with the same cloud-confi
You can log in to your CoreOS instances using:
- gcutil --project= ssh --ssh_user=core
+```sh
+gcutil --project= ssh --ssh_user=core
+```
## Modify Existing Cloud-Config
To modify an existing instance's cloud-config, read the `metadata-fingerprint` and provide it to the `setinstancemetadata` command along with your new `cloud-config.yaml`:
-```
+```sh
$ gcutil --project=coreos-gce-testing getinstance core2
INFO: Zone for core2 detected as us-central1-a.
@@ -140,7 +142,7 @@ INFO: Zone for core2 detected as us-central1-a.
+------------------------+-----------------------------------------------------+
```
-```
+```sh
gcutil --project= setinstancemetadata core2 --metadata_from_file=user-data:cloud-config.yaml --fingerprint="tgFMD53d3kI="
```
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 7613c6312..bf2464033 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -72,7 +72,7 @@ CoreOS allows you to configure machine parameters, launch systemd units on start
The most common Rackspace cloud-config looks like:
-```
+```yaml
#cloud-config
coreos:
@@ -95,7 +95,7 @@ coreos:
Certain server flavors have separate system and data disks. To utilize the data disks or a Cloud Block Storage volume, they must be mounted with a `.mount` unit.
-```
+```yaml
#cloud-config
coreos:
units:
@@ -118,7 +118,7 @@ We're going to install `rackspace-novaclient`, upload a keypair and boot the ima
If you don't have `pip` installed, install it by running `sudo easy_install pip`. Now let's use `pip` to install Supernova, a tool that lets you easily switch Rackspace regions. Be sure to install these in the order listed:
-```
+```sh
sudo pip install keyring
sudo pip install rackspace-novaclient
sudo pip install supernova
@@ -128,7 +128,7 @@ sudo pip install supernova
Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which is displayed in the upper right-hand corner of the cloud control panel UI.
-```
+```ini
[production]
OS_AUTH_URL = https://identity.api.rackspacecloud.com/v2.0/
OS_USERNAME = username
@@ -144,7 +144,7 @@ We're ready to create a keypair then boot a server with it.
For this guide, I'm assuming you already have a public key you use for your CoreOS servers. Note that only RSA keypairs are supported. Load the public key to Rackspace:
-```
+```sh
supernova production keypair-add --pub-key ~/.ssh/coreos.pub coreos-key
```
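If you don't yet have an RSA keypair to upload, a minimal sketch for generating one non-interactively (the `~/.ssh/coreos` path mirrors the guide above; `-N ""` skips the passphrase, which is only sensible for throwaway test keys):

```shell
# Generate a 4096-bit RSA keypair at the path the guide assumes.
# -N "" means no passphrase -- fine for testing, not for production keys.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/coreos"
```

The resulting `~/.ssh/coreos.pub` is what `keypair-add --pub-key` expects.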
@@ -220,7 +220,7 @@ First, install the [Heat CLI](https://github.com/openstack/python-heatclient) wi
Second, verify that you're exporting your credentials for the CLI to use in your `~/.bash_profile`:
-```
+```sh
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_USERNAME=
export OS_TENANT_ID=
@@ -231,7 +231,7 @@ export OS_AUTH_SYSTEM=rackspace
If you have credentials already set up for use with the Nova CLI, they may conflict due to oddities in these tools. Re-source your credentials:
-```
+```sh
source ~/.bash_profile
```
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 371639321..8f9338418 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -20,7 +20,7 @@ The simplest option to boot up CoreOS is to load a script that contains the seri
A sample script will look like this:
-```
+```sh
#!ipxe
set base-url http://alpha.release.core-os.net/amd64-usr/current
@@ -52,7 +52,7 @@ You can now log in to CoreOS using the associated private key on your local comp
SSH to the IP of your VPS, and specify the "core" user: `ssh core@IP`
-```
+```sh
$ ssh core@IP
The authenticity of host 'IP (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
diff --git a/running-coreos/platforms/eucalyptus/index.md b/running-coreos/platforms/eucalyptus/index.md
index 819e26502..646bf6cf5 100644
--- a/running-coreos/platforms/eucalyptus/index.md
+++ b/running-coreos/platforms/eucalyptus/index.md
@@ -22,7 +22,7 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-```
+```sh
$ wget -q http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
$ qemu-img convert -O raw coreos_production_openstack_image.img coreos_production_openstack_image.raw
@@ -40,7 +40,7 @@ emi-E4A33D45
Now generate the ssh key that will be injected into the image for the `core`
user and boot it up!
-```
+```sh
$ euca-create-keypair coreos > core.pem
$ euca-run-instances emi-E4A33D45 -k coreos -t m1.medium -g default
...
@@ -49,7 +49,7 @@ $ euca-run-instances emi-E4A33D45 -k coreos -t m1.medium -g default
Your first CoreOS instance should now be running. The only thing left to do is
find the IP and SSH in.
-```
+```sh
$ euca-describe-instances | grep coreos
RESERVATION r-BCF44206 498025213678 group-1380012085
INSTANCE i-22444094 emi-E4A33D45 euca-10-0-1-61.cloud.home euca-172-16-0-56.cloud.internal running coreos 0
@@ -59,7 +59,7 @@ INSTANCE i-22444094 emi-E4A33D45 euca-10-0-1-61.cloud.home
Finally, SSH into it; note that the user is `core`:
-```
+```sh
$ chmod 400 core.pem
$ ssh -i core.pem core@10.0.1.61
______ ____ _____
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 780ded47a..172e09b84 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -30,70 +30,74 @@ The channel is selected based on the URL below. Simply replace `alpha` with `bet
We start by downloading the most recent disk image:
- mkdir -p /var/lib/libvirt/images/coreos0
- cd /var/lib/libvirt/images/coreos0
- wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2
- bunzip2 coreos_production_qemu_image.img.bz2
+```sh
+mkdir -p /var/lib/libvirt/images/coreos0
+cd /var/lib/libvirt/images/coreos0
+wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2
+bunzip2 coreos_production_qemu_image.img.bz2
+```
## Virtual machine configuration
Now create `/tmp/coreos0.xml` with the following contents:
-
- coreos0
- 1048576
- 1048576
- 1
-
- hvm
-
-
-
-
-
-
-
-
- destroy
- restart
- restart
-
- /usr/bin/qemu-kvm
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+```xml
+<domain type='kvm'>
+  <name>coreos0</name>
+  <memory unit='KiB'>1048576</memory>
+  <currentMemory unit='KiB'>1048576</currentMemory>
+  <vcpu>1</vcpu>
+  <os>
+    <type arch='x86_64'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+    <pae/>
+  </features>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu-kvm</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='/var/lib/libvirt/images/coreos0/coreos_production_qemu_image.img'/>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+    <filesystem type='mount' accessmode='squash'>
+      <source dir='/var/lib/libvirt/images/coreos0/configdrive/'/>
+      <target dir='config-2'/>
+      <readonly/>
+    </filesystem>
+    <interface type='bridge'>
+      <mac address='52:54:00:fe:b3:c0'/>
+      <source bridge='br0'/>
+      <model type='virtio'/>
+    </interface>
+    <serial type='pty'>
+      <target port='0'/>
+    </serial>
+    <console type='pty'>
+      <target type='serial' port='0'/>
+    </console>
+  </devices>
+</domain>
+```
You can change any of these parameters later.
@@ -101,17 +105,21 @@ You can change any of these parameters later.
Now create a config drive file system to configure CoreOS itself:
- mkdir -p /var/lib/libvirt/images/coreos0/configdrive/openstack/latest
- touch /var/lib/libvirt/images/coreos0/configdrive/openstack/latest/user_data
+```sh
+mkdir -p /var/lib/libvirt/images/coreos0/configdrive/openstack/latest
+touch /var/lib/libvirt/images/coreos0/configdrive/openstack/latest/user_data
+```
The `user_data` file may contain a script or a [cloud config][cloud-config]
file. We recommend using ssh keys to log into the VM, so at a minimum the
contents of `user_data` should look something like this:
- #cloud-config
+```yaml
+#cloud-config
- ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+```
[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
@@ -122,23 +130,24 @@ example the VM will be attached directly to the local network via a bridge
on the host's eth0. To configure a static address
add a [networkd unit][systemd-network] to `user_data`:
+```yaml
+#cloud-config
- #cloud-config
+ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
- ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+coreos:
+ units:
+ - name: 10-ens3.network
+ content: |
+ [Match]
+ MACAddress=52:54:00:fe:b3:c0
- coreos:
- units:
- - name: 10-ens3.network
- content: |
- [Match]
- MACAddress=52:54:00:fe:b3:c0
-
- [Network]
- Address=203.0.113.2/24
- Gateway=203.0.113.1
- DNS=8.8.8.8
+ [Network]
+ Address=203.0.113.2/24
+ Gateway=203.0.113.1
+ DNS=8.8.8.8
+```
[systemd-network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
@@ -147,26 +156,34 @@ add a [networkd unit][systemd-network] to `user_data`:
Now import the XML as a new VM into your libvirt instance and start it:
- virsh create /tmp/coreos0.xml
+```sh
+virsh create /tmp/coreos0.xml
+```
Once the virtual machine has started, you can log in via SSH:
- ssh core@203.0.113.2
+```sh
+ssh core@203.0.113.2
+```
### SSH Config
To simplify this and avoid potential host key errors in the future, add
the following to `~/.ssh/config`:
- Host coreos0
- HostName 203.0.113.2
- User core
- StrictHostKeyChecking no
- UserKnownHostsFile /dev/null
+```ini
+Host coreos0
+  HostName 203.0.113.2
+  User core
+  StrictHostKeyChecking no
+  UserKnownHostsFile /dev/null
+```
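If the key you use for the VM lives outside ssh's default lookup paths, the same block can also carry an `IdentityFile` line so you don't need `-i` on every login — a sketch, with an assumed key path that is not from the docs:

```ini
Host coreos0
  HostName 203.0.113.2
  User core
  IdentityFile ~/.ssh/coreos_vm_key
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
```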
Now you can log in to the virtual machine with:
- ssh coreos0
+```sh
+ssh coreos0
+```
## Using CoreOS
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index f34765307..c18a57bc4 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -23,7 +23,7 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-```
+```sh
$ wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
$ glance image-create --name CoreOS \
@@ -64,7 +64,7 @@ In order for this to work your OpenStack cloud provider must support [config dri
The most common cloud-config for OpenStack looks like:
-```
+```yaml
#cloud-config
coreos:
@@ -88,7 +88,7 @@ ssh_authorized_keys:
Boot the machines with the `nova` CLI, referencing the image ID from the import step above and your `cloud-config.yaml`:
-```
+```sh
nova boot \
--user-data ./cloud-config.yaml \
--image cdf3874c-c27f-4816-bc8c-046b240e0edd \
@@ -103,7 +103,7 @@ To use config drive you may need to add `--config-drive=true` to command above.
Your first CoreOS cluster should now be running. The only thing left to do is
find an IP and SSH in.
-```
+```sh
$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
@@ -116,7 +116,7 @@ $ nova list
Finally, SSH into an instance; note that the user is `core`:
-```
+```sh
$ chmod 400 core.pem
$ ssh -i core.pem core@10.0.0.3
______ ____ _____
@@ -136,7 +136,7 @@ with the others.
Example:
-```
+```sh
nova boot \
--user-data ./cloud-config.yaml \
--image cdf3874c-c27f-4816-bc8c-046b240e0edd \
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index 4ba9628c1..5a29feaa9 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -34,7 +34,9 @@ Linux. It should be available on just about any distro.
Documentation for [Debian][qemudeb] has more details but to get started
all you need is:
- sudo apt-get install qemu-system-x86 qemu-utils
+```sh
+sudo apt-get install qemu-system-x86 qemu-utils
+```
[qemudeb]: https://wiki.debian.org/QEMU
@@ -42,7 +44,9 @@ all you need is:
The Fedora wiki has a [quick howto][qemufed] but the basic install is easy:
- sudo yum install qemu-system-x86 qemu-img
+```sh
+sudo yum install qemu-system-x86 qemu-img
+```
[qemufed]: https://fedoraproject.org/wiki/How_to_use_qemu
@@ -50,7 +54,9 @@ The Fedora wiki has a [quick howto][qemufed] but the basic install is easy:
This is all you need to get started:
- sudo pacman -S qemu
+```sh
+sudo pacman -S qemu
+```
More details can be found on [Arch's QEMU wiki page](https://wiki.archlinux.org/index.php/Qemu).
@@ -60,8 +66,10 @@ As to be expected Gentoo can be a little more complicated but all the
required kernel options and USE flags are covered in the [Gentoo
Wiki][qemugen]. Usually this should be sufficient:
- echo app-emulation/qemu qemu_softmmu_targets_x86_64 virtfs xattr >> /etc/portage/package.use
- emerge -av app-emulation/qemu
+```sh
+echo app-emulation/qemu qemu_softmmu_targets_x86_64 virtfs xattr >> /etc/portage/package.use
+emerge -av app-emulation/qemu
+```
[qemugen]: http://wiki.gentoo.org/wiki/QEMU
@@ -80,14 +88,18 @@ The channel is selected based on the URL below. Simply replace `alpha` with `bet
There are two files you need: the disk image (provided in qcow2
format) and the wrapper shell script to start QEMU.
- mkdir coreos; cd coreos
- wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu.sh
- wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img
- chmod +x coreos_production_qemu.sh
+```sh
+mkdir coreos; cd coreos
+wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu.sh
+wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img
+chmod +x coreos_production_qemu.sh
+```
Starting is as simple as:
- ./coreos_production_qemu.sh -nographic
+```sh
+./coreos_production_qemu.sh -nographic
+```
### SSH Keys
@@ -98,7 +110,9 @@ look for public keys in ssh-agent if available and at the default
locations `~/.ssh/id_dsa.pub` or `~/.ssh/id_rsa.pub`. If you need to
provide an alternate location, use the `-a` option:
- ./coreos_production_qemu.sh -a ~/.ssh/authorized_keys -- -nographic
+```sh
+./coreos_production_qemu.sh -a ~/.ssh/authorized_keys -- -nographic
+```
Note: Options such as `-a` for the wrapper script must be specified before
any options for QEMU. To make the separation between the two explicit
@@ -107,24 +121,29 @@ you can use `--` but that isn't required. See
Once the virtual machine has started, you can log in via SSH:
- ssh -l core -p 2222 localhost
+```sh
+ssh -l core -p 2222 localhost
+```
### SSH Config
To simplify this and avoid potential host key errors in the future, add
the following to `~/.ssh/config`:
- Host coreos
- HostName localhost
- Port 2222
- User core
- StrictHostKeyChecking no
- UserKnownHostsFile /dev/null
+```ini
+Host coreos
+  HostName localhost
+  Port 2222
+  User core
+  StrictHostKeyChecking no
+  UserKnownHostsFile /dev/null
+```
Now you can log in to the virtual machine with:
- ssh coreos
-
+```sh
+ssh coreos
+```
## Using CoreOS
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 1d038e2af..398bedea8 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -31,7 +31,7 @@ Now that you have Vagrant installed you can bring up a CoreOS instance.
The following commands will clone a repository that contains the CoreOS Vagrantfile. This file tells
Vagrant where it can find the latest disk image of CoreOS. Vagrant will download the image the first time you attempt to start the VM.
-```
+```sh
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
```
@@ -42,7 +42,7 @@ CoreOS allows you to configure machine parameters, launch systemd units on start
The most common cloud-config for Vagrant looks like:
-```
+```yaml
#cloud-config
coreos:
@@ -78,13 +78,13 @@ Make sure you provide a fresh discovery URL in your `user-data` if you wish to b
Start the machine(s):
-```
+```sh
vagrant up
```
List the status of the running machines:
-```
+```sh
$ vagrant status
Current machine states:
@@ -99,7 +99,7 @@ VM, run `vagrant status NAME`.
Connect to one of the machines:
-```
+```sh
vagrant ssh core-01
```
@@ -107,7 +107,7 @@ vagrant ssh core-01
If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
-```
+```sh
vagrant up --provider vmware_fusion
vagrant ssh core-01
```
@@ -116,7 +116,7 @@ vagrant ssh core-01
Optionally, you can share a folder from your laptop into the virtual machine. This is useful for easily getting code and Dockerfiles into CoreOS.
-```
+```ruby
config.vm.network "private_network", ip: "172.12.8.150"
config.vm.synced_folder ".", "/home/core/share", id: "core", :nfs => true, :mount_options => ['nolock,vers=3,udp']
```
@@ -129,14 +129,14 @@ CoreOS is a rolling release distribution and versions that are out of date will
If you want to start from the most up-to-date version, you will need to make sure that you have the latest box file of CoreOS.
Simply remove the old box file and Vagrant will download the latest one the next time you `vagrant up`.
-```
+```sh
vagrant box remove coreos-alpha vmware_fusion
vagrant box remove coreos-alpha virtualbox
```
If you'd like to download the box separately, you can fetch it from the URL contained in the Vagrantfile and add it manually:
-```
+```sh
vagrant box add coreos-alpha
```
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index be945a1ac..050d6f1ef 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -26,7 +26,7 @@ The channel is selected based on the URL below. Simply replace `alpha` with `bet
This is a rough sketch that should work on OS X and Linux:
-```
+```sh
curl -LO http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip
unzip coreos_production_vmware_insecure.zip -d coreos_production_vmware_insecure
cd coreos_production_vmware_insecure
@@ -37,15 +37,18 @@ open coreos_production_vmware_insecure.vmx
* follow the steps above to download and extract the coreos_production_vmware_insecure.zip
* download and run the [OVF Tool 3.5.0 installer](https://developercenter.vmware.com/tool/ovf). It requires a VMware account login, but the download is free and is available for Linux, OS X and Windows in both 32- and 64-bit builds.
* convert VM to OVF from the extract dir
-```
+
+```sh
cd coreos_developer_vmware_insecure
mkdir coreos
ovftool coreos_production_vmware_insecure.vmx coreos/coreos.insecure.ovf
```
+
NOTE: This uses defaults and creates a single-core, 1024MB type 4 VM when deployed. To change this before deployment, see `ovftool --help` or manually edit `coreos.insecure.ovf`. If you do edit the OVF file manually, you will also need to recalculate the SHA1 checksum and update `coreos.insecure.mf` accordingly.
The above step creates the following files in ../coreos/:
-```
+
+```sh
coreos.insecure-disk1.vmdk
coreos.insecure.ovf
coreos.insecure.mf
@@ -83,7 +86,7 @@ In this case the IP is `10.0.1.81`.
Now you can log in using the shared, insecure private SSH key.
-```
+```sh
cd coreos_developer_vmware_insecure
ssh -i insecure_ssh_key core@10.0.1.81
```
@@ -94,7 +97,7 @@ We highly recommend that you disable the original insecure OEM SSH key and
replace it with your own. This is a simple two-step process: first, add your
public key, and then remove the original OEM one.
-```
+```sh
cat ~/.ssh/id_rsa.pub | ssh core@10.0.1.81 -i insecure_ssh_key update-ssh-keys -a user
ssh core@10.0.1.81 update-ssh-keys -D oem
```
From 9bd5d105f3aaaa632990205e30ad7276c6d21677 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:48:18 -0700
Subject: [PATCH 0100/1291] feat(sdk-distributors): add syntax highlighting
hints
---
.../notes-for-distributors/index.md | 8 ++++---
.../sdk/building-development-images/index.md | 16 ++++++-------
.../sdk/building-production-images/index.md | 4 ++--
.../sdk/modifying-coreos/index.md | 24 +++++++++----------
sdk-distributors/sdk/tips-and-tricks/index.md | 10 ++++----
5 files changed, 32 insertions(+), 30 deletions(-)
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 4e1241894..9bd6a8c55 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -16,9 +16,11 @@ If you are importing images for use inside of your environment it is recommended
It is recommended that you also verify files using the [CoreOS Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
- wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
- wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
- gpg --verify coreos_production_openstack_image.img.bz2.sig
+```sh
+wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
+wget http://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2.sig
+gpg --verify coreos_production_openstack_image.img.bz2.sig
+```
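Before wiring the check into an automated import pipeline, the detached-signature flow can be exercised end to end with a throwaway key — a self-contained sketch (for real images you must verify against the CoreOS Image Signing Key, never a key you generated yourself):

```shell
# Use an isolated keyring so this doesn't touch your real GnuPG setup.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate an unprotected throwaway key (for demonstration only).
gpg --batch --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Throwaway Test Key
Name-Email: test@example.invalid
Expire-Date: 0
%commit
EOF

# Sign a stand-in "image" with a detached signature, then verify it --
# the same shape as checking coreos_production_openstack_image.img.bz2.
echo 'stand-in image payload' > image.img.bz2
gpg --batch --detach-sign --output image.img.bz2.sig image.img.bz2
gpg --batch --verify image.img.bz2.sig image.img.bz2
```

As with the real images, `gpg --verify` exits non-zero when the signature does not match, so scripts can simply abort on failure.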
[signing-key]: {{site.url}}/security/image-signing-key
diff --git a/sdk-distributors/sdk/building-development-images/index.md b/sdk-distributors/sdk/building-development-images/index.md
index 3695f4aff..3113741e4 100644
--- a/sdk-distributors/sdk/building-development-images/index.md
+++ b/sdk-distributors/sdk/building-development-images/index.md
@@ -16,7 +16,7 @@ target VM.
1. On your workstation, start the dev server inside the SDK chroot:
-```
+```sh
start_devserver --port 8080
```
@@ -25,7 +25,7 @@ NOTE: This port will need to be internet accessible.
2. Run `/usr/local/bin/gmerge` from your VM and ensure that the settings in
`/etc/lsb-release` point to your workstation's IP/hostname and port
-```
+```sh
/usr/local/bin/gmerge coreos-base/update_engine
```
@@ -34,26 +34,26 @@ NOTE: This port will need to be internet accessible.
If you want to test that an image you built can successfully upgrade a running
VM, you can use the `--image` argument to the devserver. Here is an example:
-```
+```sh
start_devserver --image ../build/images/amd64-usr/latest/chromiumos_image.bin
```
From the target virtual machine you run:
-```
+```sh
update_engine_client -update -omaha_url http://$WORKSTATION_HOSTNAME:8080/update
```
If the update fails you can check the logs of the update engine by running:
-```
+```sh
journalctl -u update-engine -o cat
```
If you want to download another update, you may need to clear the reboot
pending status:
-```
+```sh
update_engine_client -reset_status
```
@@ -63,13 +63,13 @@ There is a utility script called `update_ebuilds` that can pull from Gentoo's
CVS tree directly into your local portage-stable tree. Here is an example,
bumping go to the latest version:
-```
+```sh
./update_ebuilds --commit dev-lang/go
```
To create a pull request after the bump, run:
-```
+```sh
cd ~/trunk/src/third_party/portage-stable
git checkout -b 'bump-go'
git push origin bump-go
diff --git a/sdk-distributors/sdk/building-production-images/index.md b/sdk-distributors/sdk/building-production-images/index.md
index 9b7c83656..7175ad3c2 100644
--- a/sdk-distributors/sdk/building-production-images/index.md
+++ b/sdk-distributors/sdk/building-production-images/index.md
@@ -99,7 +99,7 @@ values.
Note: Add `COREOS_OFFICIAL=1` here if you are making a real release. That will
change the version to leave off the build id suffix.
-```
+```sh
./build_image prod --group alpha
```
@@ -129,7 +129,7 @@ automatically as `coreos_production_update.gz` and
to generate the payload is `coreos_production_update.bin.bz2`.
As an example, to publish the insecurely signed payload:
-```
+```sh
URL=http://builds.release.core-os.net/alpha/amd64-usr/321.0.0
cd $(mktemp -d)
gsutil -m cp $URL/coreos_production_update* ./
diff --git a/sdk-distributors/sdk/modifying-coreos/index.md b/sdk-distributors/sdk/modifying-coreos/index.md
index ba9547f17..b7a899858 100644
--- a/sdk-distributors/sdk/modifying-coreos/index.md
+++ b/sdk-distributors/sdk/modifying-coreos/index.md
@@ -38,7 +38,7 @@ System requirements to get started:
You also need a proper git setup:
-```
+```sh
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
```
@@ -51,7 +51,7 @@ git config --global user.name "Your Name"
repositories that makes up CoreOS. Pull down the code and add it to your
path:
-```
+```sh
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH="$PATH":`pwd`/depot_tools
```
@@ -64,20 +64,20 @@ need to reset your $PATH manually each time you open a new shell.
Create a project directory. This will hold all of your git repos and the SDK
chroot. A few gigs of space will be necessary.
-```
+```sh
mkdir coreos; cd coreos
```
Initialize the .repo directory with the manifest that describes all of the git
repos required to get started.
-```
+```sh
repo init -u https://github.com/coreos/manifest.git -g minilayout --repo-url https://chromium.googlesource.com/external/repo.git
```
Synchronize all of the required git repos from the manifest.
-```
+```sh
repo sync
```
@@ -86,7 +86,7 @@ repo sync
Download and enter the SDK chroot, which contains all of the compilers and
tooling.
-```
+```sh
./chromite/bin/cros_sdk
```
@@ -96,32 +96,32 @@ entries that are bind mounted into the chroot.
Set up the "core" user's password.
-```
+```sh
./set_shared_user_password.sh
```
Target amd64-usr for this image:
-```
+```sh
echo amd64-usr > .default_board
```
Set up a board root filesystem in `/build/${BOARD}`:
-```
+```sh
./setup_board
```
Build all of the target binary packages:
-```
+```sh
./build_packages
```
Build an image based on the built binary packages along with the developer
overlay:
-```
+```sh
./build_image --noenable_rootfs_verification dev
```
@@ -139,7 +139,7 @@ systemd-rest, allows you to stop and start units via HTTP. The other is a
small server, called motd-http, that you can practice stopping and starting.
You can try these daemons with:
-```
+```sh
curl http://127.0.0.1:8000
curl http://127.0.0.1:8080/units/motd-http.service/stop/replace
curl http://127.0.0.1:8000
diff --git a/sdk-distributors/sdk/tips-and-tricks/index.md b/sdk-distributors/sdk/tips-and-tricks/index.md
index b0c09dfd9..ee19b1780 100644
--- a/sdk-distributors/sdk/tips-and-tricks/index.md
+++ b/sdk-distributors/sdk/tips-and-tricks/index.md
@@ -20,7 +20,7 @@ weight: 7
Using `repo forall`, you can search across all of the git repos at once:
-```
+```sh
repo forall -c git grep 'CONFIG_EXTRA_FIRMWARE_DIR'
```
@@ -31,7 +31,7 @@ Note: You need git 1.7.10 or newer to use the credential helper
Turn on the credential helper and git will save your password in memory
for some time:
-```
+```sh
git config --global credential.helper cache
```
@@ -44,7 +44,7 @@ this.
Get a view into what the base system will contain and why it will contain those
things with the emerge tree view:
-```
+```sh
emerge-amd64-usr --emptytree -p -v --tree coreos-base/coreos-dev
```
@@ -53,7 +53,7 @@ emerge-amd64-usr --emptytree -p -v --tree coreos-base/coreos-dev
You will be booting lots of VMs with on-the-fly SSH key generation. Add
this in your `$HOME/.ssh/config` to stop the annoying fingerprint warnings.
-```
+```ini
Host 127.0.0.1
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
@@ -68,7 +68,7 @@ including loop devices used to construct CoreOS disk images. If the daemon
responsible for this happens to be `udisks`, then you can disable this
behavior with the following udev rule:
-```
+```sh
echo 'SUBSYSTEM=="block", KERNEL=="ram*|loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_PRESENTATION_NOPOLICY}="1"' > /etc/udev/rules.d/85-hide-loop.rules
udevadm control --reload
```
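A typo'd udev rule fails silently, so it can be worth staging the rule in a temporary file and eyeballing it before copying it into `/etc/udev/rules.d/` as root — a small sketch (the staging step is an addition, not part of the docs):

```shell
# Stage the rule in a temporary file first.
RULE_TMP="$(mktemp)"
cat > "$RULE_TMP" <<'EOF'
SUBSYSTEM=="block", KERNEL=="ram*|loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_PRESENTATION_NOPOLICY}="1"
EOF

# Crude sanity check: confirm the markers we expect made it into the file
# intact before installing it as /etc/udev/rules.d/85-hide-loop.rules.
grep -q 'SUBSYSTEM=="block"' "$RULE_TMP"
grep -q 'UDISKS_PRESENTATION_HIDE' "$RULE_TMP" && echo "rule staged at $RULE_TMP"
```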
From 49581150cbbf20b5bc81426d83e9a92d79aa833b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 29 May 2014 17:54:23 -0700
Subject: [PATCH 0101/1291] fix(running-coreos): remove hints from embedded
html for now
---
.../bare-metal/booting-with-ipxe/index.md | 14 ++++++--------
.../bare-metal/booting-with-pxe/index.md | 16 ++++++++--------
.../bare-metal/installing-to-disk/index.md | 8 ++------
3 files changed, 16 insertions(+), 22 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 68834b8fa..f30538197 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -32,25 +32,23 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
iPXE downloads a boot script from a publicly available URL. You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a custom iPXE server.
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 12db464b7..1580f2151 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -35,16 +35,12 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
If you want to ensure you are installing the latest alpha version, use the -C option:
-
-
coreos-install -d /dev/sda -C alpha
-
+
coreos-install -d /dev/sda -C alpha
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
If you want to ensure you are installing the latest beta version, use the -C option:
-
-
coreos-install -d /dev/sda -C beta
-
+
coreos-install -d /dev/sda -C beta
From 9db448ba95c0b7bdb713c896d1209eca58686482 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Fri, 30 May 2014 14:30:20 -0700
Subject: [PATCH 0102/1291] fix(building-development-images): Clean up text a
little
- no absolute paths
- bug fixed so no need to set the board name
---
sdk-distributors/sdk/building-development-images/index.md | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/sdk-distributors/sdk/building-development-images/index.md b/sdk-distributors/sdk/building-development-images/index.md
index bb5f06a6a..89118a2c5 100644
--- a/sdk-distributors/sdk/building-development-images/index.md
+++ b/sdk-distributors/sdk/building-development-images/index.md
@@ -20,14 +20,13 @@ target VM.
start_devserver --port 8080
```
-NOTE: This port will need to be internet accessible.
+NOTE: This port will need to be Internet accessible if your VM is remote.
-2. Run `/usr/bin/gmerge` from your VM and ensure that the settings in
+2. Run `gmerge` from your VM and ensure that the `DEVSERVER` setting in
`/etc/coreos/update.conf` points to your workstation IP/hostname and port.
- You'll need to set `DEVSERVER` and `COREOS_RELEASE_BOARD` (likely `amd64-usr`).
```sh
-/usr/bin/gmerge coreos-base/update_engine
+gmerge coreos-base/update_engine
```
### Updating an Image with Update Engine
From 9d2b67eb4181e2b6a245b0031fc184b491ca146c Mon Sep 17 00:00:00 2001
From: Cole Gleason
Date: Wed, 28 May 2014 15:45:36 -0700
Subject: [PATCH 0103/1291] Remove outdated info on testing image after boot.
---
sdk-distributors/sdk/modifying-coreos/index.md | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/sdk-distributors/sdk/modifying-coreos/index.md b/sdk-distributors/sdk/modifying-coreos/index.md
index b7a899858..7fdd46ab2 100644
--- a/sdk-distributors/sdk/modifying-coreos/index.md
+++ b/sdk-distributors/sdk/modifying-coreos/index.md
@@ -133,19 +133,6 @@ a bootable vm will be printed. Run the `image_to_vm.sh` command.
Once you build an image you can launch it with KVM (instructions will
print out after `image_to_vm.sh` runs).
-To demo the general direction we are starting in now the OS starts two
-small daemons that you can access over an HTTP interface. The first,
-systemd-rest, allows you to stop and start units via HTTP. The other is a
-small server that you can play with shutting off and on called
-motd-http. You can try these daemons with:
-
-```sh
-curl http://127.0.0.1:8000
-curl http://127.0.0.1:8080/units/motd-http.service/stop/replace
-curl http://127.0.0.1:8000
-curl http://127.0.0.1:8080/units/motd-http.service/start/replace
-```
-
## Making Changes
### git and repo
From e5eaca699ce37c703386b33836e13648b6d2c37b Mon Sep 17 00:00:00 2001
From: bhiles
Date: Mon, 2 Jun 2014 11:45:37 -0700
Subject: [PATCH 0104/1291] Add doc link to vagrant clustering explanation
---
running-coreos/platforms/vagrant/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 398bedea8..3dadbec74 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -143,4 +143,4 @@ vagrant box add coreos-alpha
## Using CoreOS
Now that you have a machine booted it is time to play around.
-Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide, learn about [CoreOS clustering with Vagrant]({{site.url}}/blog/coreos-clustering-with-vagrant/), or dig into [more specific topics]({{site.url}}/docs).
From 20d46ee9c5db08937c3c53207517dac65f82d0b5 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 2 Jun 2014 14:32:59 -0700
Subject: [PATCH 0105/1291] feat(launching-containers): link to new fleet unit
doc
---
.../launching/launching-containers-fleet/index.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index c7fc3d185..7c817f0d0 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -9,9 +9,9 @@ weight: 2
# Launching Containers with fleet
-`fleet` is a cluster manager that controls `systemd` at the cluster level. To run your services in the cluster, you must submit regular systemd units combined with a few fleet-specific properties.
+`fleet` is a cluster manager that controls `systemd` at the cluster level. To run your services in the cluster, you must submit regular systemd units combined with a few [fleet-specific properties]({{site.url}}/docs/launching-containers/launching/fleet-unit-files/).
-If you're not familiar with systemd units, check out our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide.
+If you're not familiar with systemd units, check out our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide.
This guide assumes you're running `fleetctl` locally from a CoreOS machine that's part of a CoreOS cluster. You can also [control your cluster remotely]({{site.url}}/docs/launching-containers/launching/fleet-using-the-client/#get-up-and-running). All of the units referenced in this guide are contained in the [unit-examples](https://github.com/coreos/unit-examples/tree/master/simple-fleet) repository. You can clone this onto your CoreOS box to make unit submission easier.
@@ -73,7 +73,9 @@ ExecStop=/usr/bin/docker stop apache
X-Conflicts=apache.*.service
```
-The `X-Conflicts` attribute tells `fleet` that these two services can't be run on the same machine, giving us high availability. Let's start both units and verify that they're on two different machines:
+The `X-Conflicts` attribute tells `fleet` that these two services can't be run on the same machine, giving us high availability. A full list of options for this section can be found in the [fleet units guide]({{site.url}}/docs/launching-containers/launching/fleet-unit-files/).
+
+Let's start both units and verify that they're on two different machines:
```sh
$ fleetctl start apache.*
@@ -109,7 +111,7 @@ This unit has a few interesting properties. First, it uses `BindsTo` to link the
Second is `%H`, a variable built into systemd, that represents the hostname of the machine running this unit. Variable usage is covered in our [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd/#unit-variables) guide as well as in [systemd documentation](http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers).
-The third is a fleet-specific property called `X-ConditionMachineOf`. This property causes the unit to be placed onto the same machine that `apache.1.service` is running on.
+The third is a [fleet-specific property]({{site.url}}/docs/launching-containers/launching/fleet-unit-files/) called `X-ConditionMachineOf`. This property causes the unit to be placed onto the same machine that `apache.1.service` is running on.
Let's verify that each unit was placed onto the same machine as the Apache service it is bound to:
@@ -143,5 +145,5 @@ If you're running in the cloud, many services have APIs that can be automated ba
#### More Information
Example Deployment with fleet
-fleet Unit Specifications
+fleet Unit Specifications, fleet Configuration
From b45a23b1e5837c7222e6e759cc538b0483b4184c Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 3 Jun 2014 15:56:46 -0700
Subject: [PATCH 0106/1291] fix(running-coreos): fix bad link
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index f30538197..146171c9e 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -94,7 +94,7 @@ CoreOS can be completely installed on disk or run from RAM but store user data o
## Adding a Custom OEM
-Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url/docs/bare-metal/booting-with-pxe/#adding-a-custom-oem}}).
+Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url}}/docs/bare-metal/booting-with-pxe/#adding-a-custom-oem).
[oem]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#image-customization
[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/
From 4c7e498ac9aefe0973ffab05afaa2e14a0c5a42a Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 3 Jun 2014 10:56:58 -0700
Subject: [PATCH 0107/1291] feat(running-coreos): split vagrant clustering and
single machine
---
running-coreos/platforms/vagrant/index.md | 145 +++++++++++++++++++---
1 file changed, 126 insertions(+), 19 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 3dadbec74..eddc17398 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -9,16 +9,13 @@ weight: 5
# Running CoreOS on Vagrant
-CoreOS is currently in heavy development and actively being tested. These instructions will bring up a single CoreOS instance under Vagrant.
+Running CoreOS with Vagrant is the easiest way to bring up a single machine or virtualize an entire cluster on your laptop. Since the true power of CoreOS can be seen with a cluster, we're going to concentrate on that. Instructions for a single machine can be found [towards the end](#single-machine) of the guide.
You can direct questions to the [IRC channel][irc] or [mailing list][coreos-dev].
## Install Vagrant and VirtualBox
-Vagrant is a simple-to-use command line virtual machine manager. There are
-install packages available for Windows, Linux and OSX. Find the latest
-installer on the [Vagrant downloads page][vagrant]. Be sure to get
-version 1.6.3 or greater.
+Vagrant is a simple-to-use command line virtual machine manager. There are install packages available for Windows, Linux and OSX. Find the latest installer on the [Vagrant downloads page][vagrant]. Be sure to get version 1.6.3 or greater.
[vagrant]: http://www.vagrantup.com/downloads.html
@@ -28,19 +25,22 @@ Vagrant can use either the free VirtualBox provider or the commercial VMware pro
Now that you have Vagrant installed you can bring up a CoreOS instance.
-The following commands will clone a repository that contains the CoreOS Vagrantfile. This file tells
-Vagrant where it can find the latest disk image of CoreOS. Vagrant will download the image the first time you attempt to start the VM.
+The following commands will clone a repository that contains the CoreOS Vagrantfile. This file tells Vagrant where it can find the latest disk image of CoreOS. Vagrant will download the image the first time you attempt to start the VM.
```sh
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
```
-## Cloud-Config
+## Starting a Cluster
-CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. You can provide cloud-config data to your CoreOS Vagrant VM by editing the `user-data` file inside of the cloned directory.
+To start our cluster, we need to provide some config parameters in cloud-config format via the `user-data` file and set the number of machines in the cluster in `config.rb`.
-The most common cloud-config for Vagrant looks like:
+### Cloud-Config
+
+CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. You can provide cloud-config data to your CoreOS Vagrant VM by editing the `user-data` file inside of the cloned directory. A sample file `user-data.sample` exists as a base and must be renamed to `user-data` for it to be processed.
+
+Our cluster will use an etcd [discovery URL]({{site.url}}/docs/cluster-management/setup/etcd-cluster-discovery/) to bootstrap the cluster of machines and elect an initial etcd leader. Be sure to replace `` with your own URL from [https://discovery.etcd.io/new](https://discovery.etcd.io/new):
```yaml
#cloud-config
@@ -48,7 +48,7 @@ The most common cloud-config for Vagrant looks like:
coreos:
etcd:
#generate a new token for each unique cluster from https://discovery.etcd.io/new
- #discovery: https://discovery.etcd.io/
+ discovery: https://discovery.etcd.io/
addr: $public_ipv4:4001
peer-addr: $public_ipv4:7001
units:
@@ -64,17 +64,44 @@ coreos:
[Service]
Environment=FLEET_PUBLIC_IP=$public_ipv4
ExecStart=/usr/bin/fleet
-
```
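
If you want to script this step, the token portion of a discovery URL is just its final path segment; a minimal sketch using shell parameter expansion (the URL below is a made-up placeholder, not a live token):

```sh
# Placeholder discovery URL; generate a real one from https://discovery.etcd.io/new
DISCOVERY_URL="https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3"
TOKEN="${DISCOVERY_URL##*/}"   # strip everything through the last '/'
echo "$TOKEN"
```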
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
-## Startup CoreOS
-
-With Vagrant, you can start a single machine or an entire cluster. Launching a CoreOS cluster on Vagrant is as simple as configuring `$num_instances` in a `config.rb` file to 3 (or more!) and running `vagrant up`.
-Make sure you provide a fresh discovery URL in your `user-data` if you wish to bootstrap etcd in your cluster.
-
-### Using Vagrant's default VirtualBox Provider
+### Startup CoreOS
+
+The `config.rb.sample` file contains a few useful settings for your Vagrant environment and, most importantly, how many machines you'd like in your cluster.
+
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. Select the channel you'd like to use for this cluster below. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
Rename the file to config.rb and modify a few lines:
+
config.rb
+
# Size of the CoreOS cluster created by Vagrant
+$num_instances=3
+
# Official CoreOS channel from which updates should be downloaded
+$update_channel=alpha
+
+
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
+
Rename the file to config.rb then uncomment and modify:
+
config.rb
+
# Size of the CoreOS cluster created by Vagrant
+$num_instances=3
+
# Official CoreOS channel from which updates should be downloaded
+$update_channel=beta
+
+
+
+
+#### Start Machines Using Vagrant's default VirtualBox Provider
Start the machine(s):
@@ -103,7 +130,87 @@ Connect to one of the machines:
vagrant ssh core-01
```
-### Using Vagrant's VMware Provider
+#### Start Machines Using Vagrant's VMware Provider
+
+If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
+
+```sh
+vagrant up --provider vmware_fusion
+vagrant ssh core-01
+```
+
+## Single Machine
+
+To start a single machine, we need to provide some config parameters in cloud-config format via the `user-data` file.
+
+### Cloud-Config
+
+This cloud-config starts etcd and fleet when the machine is booted:
+
+```yaml
+#cloud-config
+
+coreos:
+ etcd:
+ addr: $public_ipv4:4001
+ peer-addr: $public_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+ runtime: no
+ content: |
+ [Unit]
+ Description=fleet
+
+ [Service]
+ Environment=FLEET_PUBLIC_IP=$public_ipv4
+ ExecStart=/usr/bin/fleet
+```
+
+### Startup CoreOS
+
+The `config.rb.sample` file contains a few useful settings for your Vagrant environment. We're going to set the CoreOS channel that we'd like the machine to track.
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
Rename the file to config.rb then uncomment and modify:
+
config.rb
+
# Official CoreOS channel from which updates should be downloaded
+$update_channel=alpha
+
+
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
+
Rename the file to config.rb then uncomment and modify:
+
config.rb
+
# Official CoreOS channel from which updates should be downloaded
+$update_channel=beta
+
+
+
+
+#### Start Machines Using Vagrant's default VirtualBox Provider
+
+Start the machine(s):
+
+```sh
+vagrant up
+```
+
+Connect to the machine:
+
+```sh
+vagrant ssh core-01
+```
+
+#### Start Machines Using Vagrant's VMware Provider
If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
From 924d0f93259709ab372f5d209879844427ce6dfe Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 4 Jun 2014 13:40:46 -0700
Subject: [PATCH 0108/1291] feat(quickstart): remove systemd and add fleet
section
---
quickstart/index.md | 66 +++++++++++++++++++++++++++------------------
1 file changed, 40 insertions(+), 26 deletions(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 8f085a3f5..d65eb79a6 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -15,7 +15,19 @@ CoreOS gives you three essential tools: service discovery, container management
First, connect to a CoreOS machine via SSH as the user `core`. For example, on Amazon, use:
```sh
-ssh core@an.ip.compute-1.amazonaws.com
+$ ssh -A core@an.ip.compute-1.amazonaws.com
+CoreOS (beta)
+```
+
+The `-A` forwards your ssh-agent to the machine, which is needed for the fleet section of this guide.
+
+If you're using Vagrant, you'll need to connect a bit differently:
+
+```sh
+$ ssh-add ~/.vagrant.d/insecure_private_key
+Identity added: /Users/core/.vagrant.d/insecure_private_key (/Users/core/.vagrant.d/insecure_private_key)
+$ vagrant ssh core-01 -- -A
+CoreOS (beta)
```
## Service Discovery with etcd
@@ -62,58 +74,60 @@ docker run -i -t busybox /bin/sh
View Complete Guide, Read docker Docs
-## Process Management with systemd
+## Process Management with fleet
-The third building block of CoreOS is **systemd** ([docs][systemd-docs]) and it is installed on each CoreOS machine. You should use systemd to manage the life cycle of your docker containers. The configuration format for systemd is straightforward. In the example below, the Ubuntu container is set up to print text after each reboot:
+The third building block of CoreOS is **fleet**, a distributed init system for your cluster. You should use fleet to manage the life cycle of your docker containers.
-First, you will need to run all of this as `root` since you are modifying system state:
+Fleet works by receiving [systemd unit files]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd/) and scheduling them onto machines in the cluster based on declared conflicts and other preferences encoded in the unit file. Using the `fleetctl` tool, you can query the status of a unit, remotely access its logs and more.
-```sh
-sudo -i
-```
+First, let's construct a simple systemd unit that runs a docker container. Save this as `hello.service` in the home directory:
-Create a file called `/etc/systemd/system/hello.service`:
+#### hello.service
```ini
[Unit]
Description=My Service
After=docker.service
-Requires=docker.service
[Service]
-Restart=always
-RestartSec=10s
ExecStart=/bin/bash -c '/usr/bin/docker start -a hello || /usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"'
ExecStop=/usr/bin/docker stop -t 1 hello
-
-[Install]
-WantedBy=multi-user.target
```
-See the [getting started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) page for more information on the format of this file.
+The [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide explains the format of this file in more detail.
-Then run enable and start the unit:
+Then start the unit:
```sh
-sudo systemctl enable /etc/systemd/system/hello.service
-sudo systemctl start hello.service
+$ fleetctl start hello.service
+Job hello.service launched on 8145ebb7.../172.17.8.105
```
-Your container is now started and is logging to the systemd journal. You can read the log by running:
+Your container has been started somewhere on the cluster. To verify the status, run:
```sh
-journalctl -u hello.service -f
+$ fleetctl status hello.service
+● hello.service - My Service
+ Loaded: loaded (/run/fleet/units/hello.service; linked-runtime)
+ Active: active (running) since Wed 2014-06-04 19:04:13 UTC; 44s ago
+ Main PID: 27503 (bash)
+ CGroup: /system.slice/hello.service
+ ├─27503 /bin/bash -c /usr/bin/docker start -a hello || /usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+ └─27509 /usr/bin/docker run --name hello busybox /bin/sh -c while true; do echo Hello World; sleep 1; done
+
+Jun 04 19:04:57 core-01 bash[27503]: Hello World
+..snip...
+Jun 04 19:05:06 core-01 bash[27503]: Hello World
```
To stop the container, run:
```sh
-sudo systemctl stop hello.service
+fleetctl destroy hello.service
```
-#### More Detailed Information
-View Complete Guide
-Read systemd Website
+Fleet has many more features that you can explore in the guides below.
-#### Chaos Monkey
-During our alpha period, Chaos Monkey (i.e. random reboots) is built in and will give you plenty of opportunities to test out systemd. CoreOS machines will automatically reboot after an update is applied unless you [configure them not to]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update).
+#### More Detailed Information
+View Complete Guide
+View Getting Started with systemd Guide
From a6f8d334fdc153e460e3a580c99ec65681547c5d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 4 Jun 2014 15:47:30 -0700
Subject: [PATCH 0109/1291] feat(launching-containers): document fleet metadata
scheduling
---
.../launching-containers-fleet/index.md | 56 +++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 7c817f0d0..0a0395027 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -143,6 +143,62 @@ If you're running in the cloud, many services have APIs that can be automated ba
+## Schedule Based on Machine Metadata
+
+Applications with complex and specific requirements can target a subset of the cluster for scheduling via machine metadata. Powerful deployment topologies can be achieved — schedule units based on the machine's region, rack location, disk speed or anything else you can think of.
+
+Metadata can be provided via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) or a [config file](https://github.com/coreos/fleet/blob/master/Documentation/configuration.md). Here's an example config file:
+
+```ini
+# Comma-delimited key/value pairs that are published to the fleet registry.
+# This data can be referenced in unit files to affect scheduling decisions.
+# An example could look like: metadata="region=us-west,az=us-west-1"
+metadata="platform=metal,provider=rackspace,region=east,disk=ssd"
+```
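+
+Since the value is a single comma-delimited string, a quick way to eyeball it for typos before deploying is to split it into one pair per line (plain shell, not part of fleet):
+
+```sh
+metadata="platform=metal,provider=rackspace,region=east,disk=ssd"
+echo "$metadata" | tr ',' '\n'   # prints one key=value pair per line
+```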
+
+Metadata can be viewed in the machine list when configured:
+
+```sh
+$ fleetctl list-machines
+MACHINE IP METADATA
+29db5063... 172.17.8.101 disk=ssd,platform=metal,provider=rackspace,region=east
+ebb97ff7... 172.17.8.102 disk=ssd,platform=cloud,provider=rackspace,region=east
+f823e019... 172.17.8.103 disk=ssd,platform=cloud,provider=amazon,region=east
+```
+
+The unit file for a service that does a lot of disk I/O but doesn't care where it runs could look like:
+
+```ini
+[X-Fleet]
+X-ConditionMachineMetadata=disk=ssd
+```
+
+If you wanted to ensure very high availability you could have 3 unit files that must be scheduled across providers but in the same region:
+
+```ini
+[X-Fleet]
+X-Conflicts=webapp*
+X-ConditionMachineMetadata=provider=rackspace
+X-ConditionMachineMetadata=platform=metal
+X-ConditionMachineMetadata=region=east
+```
+
+```ini
+[X-Fleet]
+X-Conflicts=webapp*
+X-ConditionMachineMetadata=provider=rackspace
+X-ConditionMachineMetadata=platform=cloud
+X-ConditionMachineMetadata=region=east
+```
+
+```ini
+[X-Fleet]
+X-Conflicts=webapp*
+X-ConditionMachineMetadata=provider=amazon
+X-ConditionMachineMetadata=platform=cloud
+X-ConditionMachineMetadata=region=east
+```
+
#### More Information
Example Deployment with fleet, fleet Unit Specifications
From a4dfbaef8559e969cafe380efb6863aa9bb9e6aa Mon Sep 17 00:00:00 2001
From: Alex Ethier
Date: Sat, 7 Jun 2014 22:58:36 -0700
Subject: [PATCH 0110/1291] fix(running-coreos/platforms/vagrant): channel
value should be quoted.
When defining a CoreOS channel in config.rb, the value should be
quoted.
---
running-coreos/platforms/vagrant/index.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index eddc17398..48f72632e 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -87,7 +87,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
# Size of the CoreOS cluster created by Vagrant
$num_instances=3
# Official CoreOS channel from which updates should be downloaded
-$update_channel=alpha
+$update_channel='alpha'
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
@@ -96,7 +96,7 @@ $update_channel=alpha
# Size of the CoreOS cluster created by Vagrant
$num_instances=3
# Official CoreOS channel from which updates should be downloaded
-$update_channel=beta
+$update_channel='beta'
@@ -184,14 +184,14 @@ The `config.rb.sample` file contains a few useful settings about your Vagrant en
Rename the file to config.rb then uncomment and modify:
config.rb
# Official CoreOS channel from which updates should be downloaded
-$update_channel=alpha
+$update_channel='alpha'
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
Rename the file to config.rb then uncomment and modify:
config.rb
# Official CoreOS channel from which updates should be downloaded
-$update_channel=beta
+$update_channel='beta'
From 0c433566cbd9fef5e8caf531630f39f19178b16b Mon Sep 17 00:00:00 2001
From: Mohammed Naser
Date: Mon, 9 Jun 2014 10:28:53 -0400
Subject: [PATCH 0111/1291] Added VEXXHOST Cloud Documentation
---
.../cloud-providers/vexxhost/index.md | 158 ++++++++++++++++++
1 file changed, 158 insertions(+)
create mode 100644 running-coreos/cloud-providers/vexxhost/index.md
diff --git a/running-coreos/cloud-providers/vexxhost/index.md b/running-coreos/cloud-providers/vexxhost/index.md
new file mode 100644
index 000000000..3878fa01f
--- /dev/null
+++ b/running-coreos/cloud-providers/vexxhost/index.md
@@ -0,0 +1,158 @@
+---
+layout: docs
+title: VEXXHOST Cloud
+category: running_coreos
+sub_category: cloud_provider
+weight: 5
+---
+
+# Running CoreOS on VEXXHOST
+
+CoreOS is currently in heavy development and actively being tested. The
+following instructions will walk you through setting up the `nova` tool with
+your appropriate credentials and launching your first cluster using the
+CLI tools.
+
+VEXXHOST is an OpenStack cloud computing provider based in Canada. In
+order to get started, you must have an active account on the VEXXHOST
+[public cloud computing][cloud-compute] service.
+
+[cloud-compute]: http://vexxhost.com/cloud-computing
+
+### Choosing a Channel
+
+CoreOS is released into alpha and beta channels. Releases to each channel serve
+as a release-candidate for the next channel. For example, a bug-free alpha
+release is promoted bit-for-bit to the beta channel.
+
+CoreOS releases are automatically built and deployed on the VEXXHOST cloud,
+so it is best to launch your clusters using images that follow the naming
+pattern CoreOS _Channel_ _Version_. For example, the image name of the latest alpha
+release will be "CoreOS Alpha {{site.alpha-channel}}".
+
+
+## Cloud-Config
+
+CoreOS allows you to configure machine parameters, launch systemd units on
+startup and more via [cloud-config][cloud-config]. We're going to provide the
+`cloud-config` data via the `user-data` flag.
+
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
+
+At the moment, you cannot supply the `user-data` using the CloudConsole control
+panel; therefore, you must use the CLI to deploy your cluster on the VEXXHOST
+cloud.
+
+A sample common `cloud-config` file will look something like the following:
+
+```yaml
+#cloud-config
+
+coreos:
+ etcd:
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+```
+
+## Launch Cluster
+
+You will need to install `python-novaclient` which supplies the OpenStack CLI
+tools as well as a keypair to use in order to access your CoreOS cluster.
+
+### Install OpenStack CLI tools
+
+If you don't have `pip` installed, install it by running `sudo easy_install pip`.
+Now let's use `pip` to install `python-novaclient`.
+
+```sh
+$ sudo pip install python-novaclient
+```
+
+### Add API Credentials
+
+You will need to have your API credentials configured on the machine that you're
+going to be launching your cluster from. The easiest way to do this is by
+logging into the CloudConsole control panel and clicking on "API Credentials".
+
+From there, you must create a file on your system with the contents of the
+`openrc` file provided. Once done, you will need to `source` that file in your
+shell prior to running any API commands. You can test that everything is
+working properly with the following commands:
+
+```sh
+$ source openrc
+$ nova credentials
+```
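For reference, an `openrc` file is just a series of shell `export` statements. A sketch with placeholder values; the exact variable set and the real auth URL come from your CloudConsole download, not from this guide:

```sh
# Placeholder values only -- substitute the real ones from your
# downloaded openrc file.
cat > openrc <<'EOF'
export OS_AUTH_URL="https://auth.example.com/v2.0/"
export OS_TENANT_NAME="demo-project"
export OS_USERNAME="you@example.com"
export OS_PASSWORD="s3cret"
EOF

# Load the credentials into the current shell.
source openrc
echo "$OS_TENANT_NAME"
```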
+
+### Create Keypair
+
+You can import an existing public key with the `nova keypair-add` command;
+however, for this guide we will create a new keypair, store its private key
+locally, and use it to access our CoreOS cluster.
+
+```sh
+$ nova keypair-add coreos-key > coreos.pem
+```
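SSH clients refuse private keys with loose permissions, so it's worth restricting the saved key before using it:

```sh
# coreos.pem was created by `nova keypair-add` above; the touch here is a
# stand-in so this sketch also runs on its own.
touch coreos.pem
# Restrict the key to the current user; ssh rejects world-readable keys.
chmod 600 coreos.pem
ls -l coreos.pem
```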
+
+### Create Servers
+
+You should now be ready to use the `nova` CLI to launch the servers that will
+form your CoreOS cluster.
+
+
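A cluster launch with `nova boot` might look like the following sketch. Every value below is an assumption rather than the guide's canonical command, so confirm the image and flavor names against `nova image-list` and `nova flavor-list` for your account first:

```sh
# All values are illustrative; adjust to your account.
nova boot \
  --image "CoreOS Alpha {{site.alpha-channel}}" \
  --flavor "standard-2" \
  --key-name coreos-key \
  --user-data ./cloud-config.yaml \
  --num-instances 3 \
  coreos
```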
+
+Once that's done, your cluster should be up and running. You can list the
+created servers and SSH into a server using your private key.
+
+```sh
+$ nova list
++--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------+
+| ID | Name | Status | Task State | Power State | Networks |
++--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------+
+| a1df1d98-622f-4f3b-adef-cb32f3e2a94d | coreos-a1df1d98 | ACTIVE | None | Running | public=162.253.x.x; private=10.20.x.x |
+| db13c6a7-a474-40ff-906e-2447cbf89440 | coreos-db13c6a7 | ACTIVE | None | Running | public=162.253.x.x; private=10.20.x.x |
+| f70b739d-9ad8-4b0b-bb74-4d715205ff0b | coreos-f70b739d | ACTIVE | None | Running | public=162.253.x.x; private=10.20.x.x |
++--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------+
+$ nova ssh --login core -i coreos.pem coreos-a1df1d98
+CoreOS (alpha)
+core@a1df1d98-622f-4f3b-adef-cb32f3e2a94d ~ $
+```
+
+## Adding More Machines
+
+Adding new instances to the cluster is as easy as launching more with the same
+cloud-config. New instances will join the cluster assuming they can communicate
+with the others.
+
+## Multiple Clusters
+
+If you would like to create multiple clusters, you'll need to generate and use a
+new discovery token. Change the token value in the etcd `discovery` parameter in
+the cloud-config, and boot the new instances.
+
+## Using CoreOS
+
+Now that you have instances booted, it is time to play around.
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
From c68b8240c6b90e195505c743dd90897318819f1c Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 9 Jun 2014 11:07:38 -0700
Subject: [PATCH 0112/1291] fix(quickstart): update running CoreOS link targets
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index d65eb79a6..55cc022ac 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -6,7 +6,7 @@ title: CoreOS Quick Start
# Quick Start
-If you don't have a CoreOS machine running, check out the guides on running CoreOS on [Vagrant][vagrant-guide], [Amazon EC2][ec2-guide], [QEMU/KVM][qemu-guide], [VMware][vmware-guide] and [OpenStack][openstack-guide]. With either of these guides you will have a machine up and running in a few minutes.
+If you don't have a CoreOS machine running, check out the guides on [running CoreOS]({{site.url}}/docs/#running-coreos) on most cloud providers ([EC2]({{site.url}}/docs/running-coreos/cloud-providers/ec2), [Rackspace]({{site.url}}/docs/running-coreos/cloud-providers/rackspace), [GCE]({{site.url}}/docs/running-coreos/cloud-providers/google-compute-engine)), virtualization platforms ([Vagrant]({{site.url}}/docs/running-coreos/platforms/vagrant), [VMware]({{site.url}}/docs/running-coreos/platforms/vmware), [OpenStack]({{site.url}}/docs/running-coreos/platforms/openstack), [QEMU/KVM]({{site.url}}/docs/running-coreos/platforms/qemu)) and bare metal servers ([PXE]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe), [iPXE]({{site.url}}/docs/running-coreos/bare-metal/booting-with-ipxe), [ISO]({{site.url}}/docs/running-coreos/platforms/iso), [Installer]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk)). With any of these guides you will have machines up and running in a few minutes.
It's highly recommended that you set up a cluster of at least 3 machines — it's not as much fun on a single machine. If you don't want to break the bank, [Vagrant][vagrant-guide] allows you to run an entire cluster on your laptop. For a cluster to be properly bootstrapped, you have to provide cloud-config via user-data, which is covered in each platform's guide.
From 2c2d7831d304ed581ae389f5f137fcce6fe6cc26 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 9 Jun 2014 13:47:25 -0700
Subject: [PATCH 0113/1291] feat(running-coreos): tweak intro
---
running-coreos/cloud-providers/vexxhost/index.md | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/running-coreos/cloud-providers/vexxhost/index.md b/running-coreos/cloud-providers/vexxhost/index.md
index 3878fa01f..70fb31ce5 100644
--- a/running-coreos/cloud-providers/vexxhost/index.md
+++ b/running-coreos/cloud-providers/vexxhost/index.md
@@ -8,18 +8,17 @@ weight: 5
# Running CoreOS on VEXXHOST
-CoreOS is currently in heavy development and actively being tested. The
-following instructions will walk you through setting up the `nova` tool with
-your appropriate credentials and launching your first cluster using the
-CLI tools.
-
VEXXHOST is a Canadian OpenStack cloud computing provider based in Canada. In
order to get started, you must have an active account on the VEXXHOST
[public cloud computing][cloud-compute] service.
+The following instructions will walk you through setting up the `nova` tool with
+your appropriate credentials and launching your first cluster using the
+CLI tools.
+
[cloud-compute]: http://vexxhost.com/cloud-computing
-### Choosing a Channel
+## Choosing a Channel
CoreOS is released into alpha and beta channels. Releases to each channel serve
as a release-candidate for the next channel. For example, a bug-free alpha
@@ -31,7 +30,7 @@ CoreOS _Channel_ _Version_. For example, the image name of the latest alpha
release will be "CoreOS Alpha {{site.alpha-channel}}".
-## Cloud-Config
+### Cloud-Config
CoreOS allows you to configure machine parameters, launch systemd units on
startup and more via [cloud-config][cloud-config]. We're going to provide the
From 8ef5135ea83969e98dbd97895f7a553de9c3c80a Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 10 Jun 2014 10:13:00 -0700
Subject: [PATCH 0114/1291] fix(quickstart): fix typo
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 55cc022ac..57f5646bc 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -82,7 +82,7 @@ Fleet works by receiving [systemd unit files]({{site.url}}/docs/launching-contai
First, let's construct a simple systemd unit that runs a docker container. Save this as `hello.service` in the home directory:
-#### home.service
+#### hello.service
```ini
[Unit]
From cc5c91e69de39c0f7fe1f3549c946ab993717d53 Mon Sep 17 00:00:00 2001
From: ppickfor
Date: Tue, 10 Jun 2014 20:00:31 -0700
Subject: [PATCH 0115/1291] feat(sdk): tips for adding packages
How to add new packages to coreos and some tips for working with emerge
---
sdk-distributors/sdk/tips-and-tricks/index.md | 45 +++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/sdk-distributors/sdk/tips-and-tricks/index.md b/sdk-distributors/sdk/tips-and-tricks/index.md
index ee19b1780..45461c1ce 100644
--- a/sdk-distributors/sdk/tips-and-tricks/index.md
+++ b/sdk-distributors/sdk/tips-and-tricks/index.md
@@ -24,6 +24,51 @@ Using `repo forall` you can search across all of the git repos at once:
repo forall -c git grep 'CONFIG_EXTRA_FIRMWARE_DIR'
```
+## Add new upstream package
+
+Before making modifications, use `repo start` to create a new branch for your changes.
+
+To add a new package, fetch the Gentoo package from upstream and add it as a dependency of `coreos-base/coreos`.
+
+If any files in the upstream package will be changed, fetch the package from upstream Gentoo directly into `src/third_party/coreos-overlay`. It may be necessary to create any missing directories in the path too.
+
+e.g.
+`~/trunk/src/third_party/coreos-overlay $ mkdir -p sys-block/open-iscsi && rsync -av rsync://rsync.gentoo.org/gentoo-portage/sys-block/open-iscsi/ sys-block/open-iscsi/`
+The trailing `/` prevents rsync from creating the directory for the package, so you don't end up with `sys-block/open-iscsi/open-iscsi`.
+Remember to add the new files to git.
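That last step can be sketched as follows; the setup lines and the ebuild file name are stand-ins so the sketch runs outside a real coreos-overlay checkout:

```sh
# Stand-in setup so the sketch runs anywhere; inside the real
# coreos-overlay checkout you would skip these two lines.
mkdir -p overlay-demo && cd overlay-demo
git init -q .

# Stage the freshly fetched package directory (file name is illustrative).
mkdir -p sys-block/open-iscsi
touch sys-block/open-iscsi/open-iscsi-2.0.873.ebuild
git add sys-block/open-iscsi
git status --short
```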
+
+If the new package does not need to be modified, it should be placed in `src/third_party/portage-stable`.
+
+You can use `scripts/update_ebuilds` to fetch packages into `src/third_party/portage-stable` and add the files to git.
+You should specify the category and the package name.
+e.g.
+`./update_ebuilds sys-block/open-iscsi`
+
+If the package needs to be modified, it must be moved out of `src/third_party/portage-stable` into `src/third_party/coreos-overlay`.
+
+To include the new package as a dependency of coreos, add it to the end of the RDEPEND variable in `coreos-base/coreos/coreos-0.0.1.ebuild`, then increment the coreos revision by renaming the symlink: `git mv coreos-base/coreos/coreos-0.0.1-r237.ebuild coreos-base/coreos/coreos-0.0.1-r238.ebuild`
+
+The new package will now be built and installed as part of the normal build flow.
+
+Add and commit the changes to git using the AngularJS commit message format; see [CONTRIBUTING.md].
+[CONTRIBUTING.md]: https://github.com/coreos/etcd/blob/master/CONTRIBUTING.md
+
+Push the changes to your GitHub fork and create a pull request.
+
+### Ebuild Tips
+
+- Manually merge a package into the chroot to test the build: `emerge-amd64-usr packagename`
+- Manually unmerge a package: `emerge-amd64-usr --unmerge packagename`
+- Remove a binary package from the cache: `sudo rm /build/amd64-usr/packages/category/packagename-version.tbz2`
+- Recreate the chroot prior to a clean rebuild: `./chromite/bin/cros_sdk -r`
+- It may be necessary to comment out kernel source checks from the ebuild if the build fails -- as CoreOS does not yet provide visibility of the configured kernel source at build time -- usually this is not a problem but may lead to warning messages
+- Chromeium OS [Portage Build FAQ]
+- [Gentoo Development Guide]
+
+
+[Portage Build FAQ]: http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq
+[Gentoo Development Guide]: http://devmanual.gentoo.org/
+
## Caching git https passwords
Note: You need git 1.7.10 or newer to use the credential helper
From f3b9e2a5e5e5d36a388bad87691005d115141865 Mon Sep 17 00:00:00 2001
From: Brian Clements
Date: Thu, 12 Jun 2014 11:57:11 -0700
Subject: [PATCH 0116/1291] Update amazon ec2 cloud-config link in docs.
---
sdk-distributors/distributors/notes-for-distributors/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 9bd6a8c55..51e8a74ab 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -42,7 +42,7 @@ CoreOS machines running on Amazon EC2 utilize a two-step cloud-config process. F
You can find the [code for this process on Github][amazon-github]. End-user instructions for this process can be found on our [Amazon EC2 docs][amazon-cloud-config].
-[amazon-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-ami
+[amazon-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-ec2-compat
[amazon-user-data-doc]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#instancedata-user-data-retrieval
[amazon-cloud-config]: {{site.url}}/docs/running-coreos/cloud-providers/ec2#cloud-config
From 79278fdb6c152032e91672d88ef487e96d55dba8 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 12 Jun 2014 16:57:32 -0700
Subject: [PATCH 0117/1291] feat(running-coreos): clarify cloud-config details
---
running-coreos/cloud-providers/ec2/index.md | 4 +++-
.../cloud-providers/google-compute-engine/index.md | 4 +++-
running-coreos/cloud-providers/rackspace/index.md | 8 ++++++--
3 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 1e0f3cadc..4e3881a20 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -76,7 +76,9 @@ CloudFormation will launch a cluster of CoreOS machines with a security and auto
## Cloud-Config
-CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. You can provide raw cloud-config data to CoreOS via the Amazon web console or [via the EC2 API][ec2-cloud-config]. Our CloudFormation template supports the most common cloud-config options as well.
+CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. Once a machine is created on EC2, the cloud-config can only be modified after it is stopped or recreated.
+
+You can provide raw cloud-config data to CoreOS via the Amazon web console or [via the EC2 API][ec2-cloud-config]. Our CloudFormation template supports the most common cloud-config options as well.
The most common cloud-config for EC2 looks like:
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 466d76789..065d3e6ec 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -15,7 +15,9 @@ Before proceeding, you will need to [install gcutil][gcutil-documentation] and c
## Cloud-Config
-CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). You can provide cloud-config to CoreOS via the Google Cloud console's metadata field `user-data` or via a flag using `gcutil`.
+CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config). Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. On GCE, the cloud-config can be modified while the instance is running and will be processed next time the machine boots.
+
+You can provide cloud-config to CoreOS via the Google Cloud console's metadata field `user-data` or via a flag using `gcutil`.
The most common cloud-config for GCE looks like:
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index bf2464033..061e6e13d 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -68,7 +68,9 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
## Cloud-Config
-CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. You can provide cloud-config data via both Heat and Nova APIs. You **can not** provide cloud-config via the Control Panel. If you launch machines via the UI, you will have to do all configuration manually.
+CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. Once a machine is created on Rackspace, the cloud-config can't be modified.
+
+You can provide cloud-config data via both Heat and Nova APIs. You **cannot** provide cloud-config via the Control Panel. If you launch machines via the UI, you will have to do all configuration manually.
The most common Rackspace cloud-config looks like:
@@ -93,7 +95,7 @@ coreos:
### Mount Data Disk
-Certain server flavors have separate system and data disks. To utilize the data disks or a Cloud Block Storage volume, they must be mounted with a `.mount` unit.
+Certain server flavors have separate system and data disks. To utilize the data disks, they must be mounted with a `.mount` unit. Check to make sure the `Where=` parameter accurately reflects the location of the block device:
```yaml
#cloud-config
@@ -108,6 +110,8 @@ coreos:
Type=ext3
```
+Mounting Cloud Block Storage can be done with a mount unit, but should not be included in cloud-config unless the disk is present on the first boot.
+
For more general information, check out [mounting storage on CoreOS]({{site.url}}/docs/cluster-management/setup/mounting-storage).
## Launch with Nova
From 42a21870caa97cfce56e147bc1a42e8a49760f6d Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 12 Jun 2014 17:35:27 -0700
Subject: [PATCH 0118/1291] fix(running-coreos): stronger warning about docker
usage and btrfs
---
running-coreos/bare-metal/booting-with-pxe/index.md | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 7dc41862e..e7d4a400d 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -9,7 +9,7 @@ weight: 5
# Booting CoreOS via PXE
-CoreOS is currently in heavy development and actively being tested. These instructions will walk you through booting CoreOS via PXE on real or virtual hardware. By default, this will run CoreOS completely out of RAM. CoreOS can also be [installed to disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk).
+These instructions will walk you through booting CoreOS via PXE on real or virtual hardware. By default, this will run CoreOS completely out of RAM. CoreOS can also be [installed to disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk).
## Configuring pxelinux
@@ -23,10 +23,12 @@ If you need suggestions on how to set a server up, check out guides for [Debian]
### Setting up pxelinux.cfg
-When configuring the CoreOS pxelinux.cfg there are a few kernel options that may be useful but all are optional:
+When configuring the CoreOS pxelinux.cfg there are a few kernel options that may be useful but all are optional.
+
+If you plan to use docker, `/var/lib/docker` must have a btrfs filesystem. This is most easily accomplished by using btrfs for the entire root filesystem via `rootfstype=btrfs`, although this option is still experimental.
- **rootfstype=tmpfs**: Use tmpfs for the writable root filesystem. This is the default behavior.
-- **rootfstype=btrfs**: Use btrfs in ram for the writable root filesystem. Use this option if you want to use docker without any further configuration. *Experimental*
+- **rootfstype=btrfs**: Use btrfs in ram for the writable root filesystem. *Experimental*
- **root**: Use a local filesystem for root instead of one of two in-ram options above. The filesystem must be formatted in advance but may be completely blank, it will be initialized on boot. The filesystem may be specified by any of the usual ways including device, label, or UUID; e.g: `root=/dev/sda1`, `root=LABEL=ROOT` or `root=UUID=2c618316-d17a-4688-b43b-aa19d97ea821`.
- **sshkey**: Add the given SSH public key to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
@@ -38,7 +40,7 @@ When configuring the CoreOS pxelinux.cfg there are a few kernel options that may
This is an example pxelinux.cfg file that assumes CoreOS is the only option.
You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after providing a cloud-config URL:
-```ini
+```sh
default coreos
prompt 1
timeout 15
From 2b076304bfb40a1491fc7480b89cbbd25929c0a4 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 13 Jun 2014 13:48:44 -0700
Subject: [PATCH 0119/1291] feat(cluster-management): clarify updates after
switching channels
---
.../setup/switching-channels/index.md | 4 ++++
.../setup/switching-channels/update-timeline.png | Bin 0 -> 8665 bytes
2 files changed, 4 insertions(+)
create mode 100644 cluster-management/setup/switching-channels/update-timeline.png
diff --git a/cluster-management/setup/switching-channels/index.md b/cluster-management/setup/switching-channels/index.md
index e30707ce6..99fa397c9 100644
--- a/cluster-management/setup/switching-channels/index.md
+++ b/cluster-management/setup/switching-channels/index.md
@@ -10,6 +10,10 @@ weight: 5
CoreOS is released into beta and stable channels. New features and bug fixes are tested in the alpha channel and are promoted bit-for-bit to the beta channel if no additional bugs are found.
+By design, the CoreOS update engine does not execute downgrades. If you're switching from a channel with a higher CoreOS version than the new channel, your machine won't be updated again until the new channel contains a higher version number.
+
+
+
## Create Update Config File
You can switch machines between channels by creating `/etc/coreos/update.conf`:
diff --git a/cluster-management/setup/switching-channels/update-timeline.png b/cluster-management/setup/switching-channels/update-timeline.png
new file mode 100644
index 0000000000000000000000000000000000000000..2e2ec35beeac0da1c46274a576df5164e023fd71
GIT binary patch
literal 8665
+The `config.rb.sample` file contains a few useful settings about your Vagrant environment. We're going to set the CoreOS channel that we'd like the machine to track.
@@ -196,9 +196,9 @@ $update_channel='beta'
-#### Start Machines Using Vagrant's default VirtualBox Provider
+#### Start Machine Using Vagrant's default VirtualBox Provider
-Start the machine(s):
+Start the machine:
```sh
vagrant up
@@ -210,7 +210,7 @@ Connect to the machine:
vagrant ssh core-01
```
-#### Start Machines Using Vagrant's VMware Provider
+#### Start Machine Using Vagrant's VMware Provider
If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
@@ -224,7 +224,6 @@ vagrant ssh core-01
Optionally, you can share a folder from your laptop into the virtual machine. This is useful for easily getting code and Dockerfiles into CoreOS.
```ini
-config.vm.network "private_network", ip: "172.12.8.150"
config.vm.synced_folder ".", "/home/core/share", id: "core", :nfs => true, :mount_options => ['nolock,vers=3,udp']
```
@@ -234,7 +233,7 @@ After a 'vagrant reload' you will be prompted for your local machine password.
CoreOS is a rolling release distribution and versions that are out of date will automatically update.
If you want to start from the most up to date version you will need to make sure that you have the latest box file of CoreOS.
-Simply remove the old box file and Vagrant will download the latest one the next time you `vagrant up`.
+You can do this using `vagrant box update` - or, simply remove the old box file and Vagrant will download the latest one the next time you `vagrant up`.
```sh
vagrant box remove coreos-alpha vmware_fusion
From b547d1572567f0035255d2f5d9aa77063c7fd1b3 Mon Sep 17 00:00:00 2001
From: Cole Gleason
Date: Thu, 19 Jun 2014 12:32:03 -0700
Subject: [PATCH 0129/1291] change docker port to 2375
---
.../building/customizing-docker/index.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index f3b956e1f..b4fd2fedb 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -12,14 +12,14 @@ The docker systemd unit can be customized by overriding the unit that ships with
## Enable the Remote API on a New Socket
-Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a tcp socket on port 4243.
+Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a tcp socket on port 2375.
```ini
[Unit]
Description=Docker Socket for the API
[Socket]
-ListenStream=4243
+ListenStream=2375
Service=docker.service
BindIPv6Only=both
@@ -34,7 +34,7 @@ systemctl enable docker-tcp.socket
systemctl stop docker
systemctl start docker-tcp.socket
systemctl start docker
-docker -H tcp://127.0.0.1:4243 ps
+docker -H tcp://127.0.0.1:2375 ps
```
### Cloud-Config
@@ -53,7 +53,7 @@ coreos:
Description=Docker Socket for the API
[Socket]
- ListenStream=4243
+ ListenStream=2375
Service=docker.service
BindIPv6Only=both
@@ -74,7 +74,7 @@ To keep access to the port local, replace the `ListenStream` configuration above
```yaml
[Socket]
- ListenStream=127.0.0.1:4243
+ ListenStream=127.0.0.1:2375
```
## Use Attached Storage for Docker Images
From 8e471a4421f84516a115821151205c60f680accf Mon Sep 17 00:00:00 2001
From: Sukrit Khera
Date: Mon, 23 Jun 2014 15:46:22 -0700
Subject: [PATCH 0130/1291] Add "ssh-add" command
Add "ssh-add" command for agent forwarding to work.
---
quickstart/index.md | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 57f5646bc..d83e545ee 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -19,7 +19,12 @@ $ ssh -A core@an.ip.compute-1.amazonaws.com
CoreOS (beta)
```
-The `-A` forwards your ssh-agent to the machine, which is needed for the fleet section of this guide.
+The `-A` forwards your ssh-agent to the machine, which is needed for the fleet section of this guide. You might have to add your private key on ssh agent running on client machine:
+
+```sh
+$ ssh-add
+Identity added: .../.ssh/id_rsa (.../.ssh/id_rsa)
+```
If you're using Vagrant, you'll need to connect a bit differently:
From 91a56e75ad89ffd26e3a6a835c4356c8945c481b Mon Sep 17 00:00:00 2001
From: c4t3l
Date: Mon, 23 Jun 2014 20:18:02 -0500
Subject: [PATCH 0131/1291] Update index.md
Cleaned up run on sentence.
---
running-coreos/cloud-providers/ec2/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 1e0f3cadc..41ef18c6f 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -336,7 +336,7 @@ coreos:
### Automatic Rollback Limitations on EC2
-Amazon EC2 uses Xen paravirtualization which is incompatible with kexec which CoreOS uses to make it possible to rollback a bad update by simply rebooting the virtual machine.
+Amazon EC2 uses Xen paravirtualization, which is incompatible with kexec. CoreOS uses kexec to roll back a bad update by simply rebooting the virtual machine, so this rollback mechanism is unavailable on EC2.
## Using CoreOS
From 10b9a2729b68dfafbc016ec3c331d6d812105239 Mon Sep 17 00:00:00 2001
From: Sukrit Khera
Date: Tue, 24 Jun 2014 11:08:09 -0700
Subject: [PATCH 0132/1291] Rewording based on PR comments
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index d83e545ee..89f6a4c84 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -19,7 +19,7 @@ $ ssh -A core@an.ip.compute-1.amazonaws.com
CoreOS (beta)
```
-The `-A` forwards your ssh-agent to the machine, which is needed for the fleet section of this guide. You might have to add your private key on ssh agent running on client machine:
+The `-A` forwards your ssh-agent to the machine, which is needed for the fleet section of this guide. If you haven't already done so, you will need to add your private key to the SSH agent running on your client machine - for example:
```sh
$ ssh-add
From 354d411b5e0b24eb42297b4fbec1b6f4b04eaa46 Mon Sep 17 00:00:00 2001
From: Sukrit Khera
Date: Tue, 24 Jun 2014 23:37:20 -0700
Subject: [PATCH 0133/1291] Remove deprecated -name option
Change the deprecated option "-name" to "--name". To avoid warning:
```
Warning: '-name' is deprecated, it will be replaced by '--name' soon. See usage.
```
---
.../launching/launching-containers-fleet/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index b5b3747c9..a499a3492 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -66,7 +66,7 @@ After=docker.service
Requires=docker.service
[Service]
-ExecStart=/usr/bin/docker run -rm -name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
+ExecStart=/usr/bin/docker run -rm --name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker rm -f apache
[X-Fleet]
From 50c7fcab078b0c3e81b7b93a2d613a3f4a3785c2 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 24 Jun 2014 23:50:07 -0700
Subject: [PATCH 0134/1291] fix(booting-ipxe): fix broken link to PXE
instructions
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 146171c9e..807835721 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -94,7 +94,7 @@ CoreOS can be completely installed on disk or run from RAM but store user data o
## Adding a Custom OEM
-Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url}}/docs/bare-metal/booting-with-pxe/#adding-a-custom-oem).
+Similar to the [OEM partition][oem] in CoreOS disk images, iPXE images can be customized with a [cloud config][cloud-config] bundled in the initramfs. You can view the [instructions on the PXE docs]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#adding-a-custom-oem).
[oem]: {{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors/#image-customization
[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/
From 7f19bb8c46c04dcd12219008f33a11b8be7481d9 Mon Sep 17 00:00:00 2001
From: Ian Babrou
Date: Wed, 25 Jun 2014 19:02:17 +0400
Subject: [PATCH 0135/1291] Chromeium -> Chromium typo fix
---
sdk-distributors/sdk/tips-and-tricks/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sdk-distributors/sdk/tips-and-tricks/index.md b/sdk-distributors/sdk/tips-and-tricks/index.md
index 45461c1ce..7fc57dc8e 100644
--- a/sdk-distributors/sdk/tips-and-tricks/index.md
+++ b/sdk-distributors/sdk/tips-and-tricks/index.md
@@ -62,7 +62,7 @@ Push the changes to your github fork and create a pull request.
- Remove a binary package from the cache `sudo rm /build/amd64-usr/packages/category/packagename-version.tbz2`
- recreate the chroot prior to a clean rebuild `./chromite/bin/cros_sdk -r`
- it may be necessary to comment out kernel source checks from the ebuild if the build fails -- as coreos does not yet provide visibility of the configured kernel source at build time -- usually this is not a problem but may lead to warning messages
-- Chromeium OS [Portage Build FAQ]
+- Chromium OS [Portage Build FAQ]
- [Gentoo Development Guide]
From f7e24530664738ffbf7a94c4ed9f9b6ee4e09731 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 25 Jun 2014 11:16:24 -0700
Subject: [PATCH 0136/1291] fix(rackspace): replace backticks with code tag
---
running-coreos/cloud-providers/rackspace/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 061e6e13d..a87b923a2 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -248,12 +248,12 @@ source ~/.bash_profile
-Launch the stack by providing the specified parameters. This command will reference the local file `data.yml` in the current working directory that contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
+Launch the stack by providing the specified parameters. This command will reference the local file <code>data.yml</code> in the current working directory that contains the cloud-config parameters. <code>$(< data.yaml)</code> prints the contents of this file into our heat command:
-Launch the stack by providing the specified parameters. This command will reference the local file `data.yml` in the current working directory that contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
+Launch the stack by providing the specified parameters. This command will reference the local file <code>data.yml</code> in the current working directory that contains the cloud-config parameters. <code>$(< data.yaml)</code> prints the contents of this file into our heat command:
From 9f4084711aeacca77deec3b611095695360e8cd8 Mon Sep 17 00:00:00 2001
From: Jonathan Boulle
Date: Wed, 25 Jun 2014 11:52:25 -0700
Subject: [PATCH 0137/1291] vagrant: add warning about discovery token reuse
---
running-coreos/platforms/vagrant/index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index b96f16916..b30bd4b53 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -47,7 +47,8 @@ Our cluster will use an etcd [discovery URL]({{site.url}}/docs/cluster-managemen
coreos:
etcd:
- #generate a new token for each unique cluster from https://discovery.etcd.io/new
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new
+ # WARNING: replace each time you 'vagrant destroy'
discovery: https://discovery.etcd.io/
addr: $public_ipv4:4001
peer-addr: $public_ipv4:7001
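The warning above means a fresh discovery URL is needed after every `vagrant destroy`. A small sketch of swapping a new token into a `user-data` file with `sed` (the file contents and the token value shown are hypothetical; in real use the URL comes from `curl -fsSL https://discovery.etcd.io/new`):

```shell
#!/bin/sh
# write a minimal cloud-config for illustration
cat > user-data <<'EOF'
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/OLDTOKEN
EOF

# in real use: NEW_URL=$(curl -fsSL https://discovery.etcd.io/new)
NEW_URL="https://discovery.etcd.io/d81c38dd62b1a2a0a4a1a4133a0d1f61"  # hypothetical token

# replace the stale discovery URL in place
sed -i "s|discovery: .*|discovery: ${NEW_URL}|" user-data
grep discovery: user-data
```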
From 58ff6e713670bbdcaf0d0b69d475feaeddf8a3cf Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Sun, 29 Jun 2014 17:38:03 -0700
Subject: [PATCH 0138/1291] feat(*): specify officially supported platforms
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 1 +
running-coreos/bare-metal/booting-with-pxe/index.md | 1 +
running-coreos/bare-metal/installing-to-disk/index.md | 1 +
running-coreos/cloud-providers/ec2/index.md | 1 +
running-coreos/cloud-providers/google-compute-engine/index.md | 1 +
running-coreos/cloud-providers/rackspace/index.md | 1 +
running-coreos/platforms/iso/index.md | 1 +
running-coreos/platforms/openstack/index.md | 1 +
running-coreos/platforms/vagrant/index.md | 1 +
9 files changed, 9 insertions(+)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 807835721..5e6261dfe 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -4,6 +4,7 @@ slug: ipxe
title: Booting with iPXE
category: running_coreos
sub_category: bare_metal
+supported: true
weight: 5
---
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index e7d4a400d..78b96f395 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -4,6 +4,7 @@ slug: pxe
title: Booting with PXE
category: running_coreos
sub_category: bare_metal
+supported: true
weight: 5
---
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 953af4dc5..88365e0c6 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -4,6 +4,7 @@ slug: pxe
title: Installing to Disk
category: running_coreos
sub_category: bare_metal
+supported: true
weight: 7
---
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 42c4c39a6..cc2bc7828 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -3,6 +3,7 @@ layout: docs
title: Amazon EC2
category: running_coreos
sub_category: cloud_provider
+supported: true
weight: 1
cloud-formation-launch-logo: https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png
---
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 065d3e6ec..ab7d9b5d9 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -2,6 +2,7 @@
layout: docs
category: running_coreos
sub_category: cloud_provider
+supported: true
weight: 3
title: Google Compute Engine
---
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index a87b923a2..fa96ed397 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -3,6 +3,7 @@ layout: docs
title: Rackspace Cloud
category: running_coreos
sub_category: cloud_provider
+supported: true
weight: 5
---
diff --git a/running-coreos/platforms/iso/index.md b/running-coreos/platforms/iso/index.md
index 17792bbdc..35593eb0d 100644
--- a/running-coreos/platforms/iso/index.md
+++ b/running-coreos/platforms/iso/index.md
@@ -3,6 +3,7 @@ layout: docs
title: ISO
category: running_coreos
sub_category: platforms
+supported: true
weight: 10
---
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index c18a57bc4..091908b6c 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -3,6 +3,7 @@ layout: docs
title: OpenStack
category: running_coreos
sub_category: platforms
+supported: true
weight: 5
---
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index b30bd4b53..c1fc7f287 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -4,6 +4,7 @@ slug: vagrant
title: Vagrant
category: running_coreos
sub_category: platforms
+supported: true
weight: 5
---
From d322cfb710bf8624525c55647a236ae451f88cdb Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 30 Jun 2014 00:13:27 -0700
Subject: [PATCH 0139/1291] feat(update-strategies): minor text changes
---
cluster-management/setup/update-strategies/index.md | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/cluster-management/setup/update-strategies/index.md b/cluster-management/setup/update-strategies/index.md
index 1fea426f7..360d5b76c 100644
--- a/cluster-management/setup/update-strategies/index.md
+++ b/cluster-management/setup/update-strategies/index.md
@@ -34,9 +34,7 @@ coreos:
### Best Effort
-The default setting is for CoreOS to make a `best-effort` to determine if the machine is part of a cluster. Currently this logic is very simple: if etcd has started, assume that the machine is part of a cluster.
-
-If so, use the `etcd-lock` strategy.
+The default setting is for CoreOS to make a `best-effort` to determine if the machine is part of a cluster. Currently this logic is very simple: if etcd has started, assume that the machine is part of a cluster and use the `etcd-lock` strategy.
Otherwise, use the `reboot` strategy.
@@ -73,11 +71,11 @@ locksmithctl unlock 69d27b356a94476da859461d3a3bc6fd
### Reboot Immediately
-The `reboot` strategy works exactly how it sounds: the machine is rebooted as soon as the update has been installed to the passive partition. If the applications running on your cluster are highly resilient, this strategy was made for you.
+The `reboot` strategy works exactly like it sounds: the machine is rebooted as soon as the update has been installed to the passive partition. If the applications running on your cluster are highly resilient, this strategy was made for you.
### Off
-The `off` strategy is also very straightforward. The update will be installed onto the passive partion and await a reboot command to complete the update. We don't recommend this strategy unless you reboot frequently as part of your normal operations workflow
+The `off` strategy is also straightforward. The update will be installed onto the passive partition and await a reboot command to complete the update. We don't recommend this strategy unless you reboot frequently as part of your normal operations workflow.
## Updating PXE/iPXE Machines
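The strategies discussed in the hunk above are selected with the `reboot-strategy` key in cloud-config; a minimal sketch pinning the `etcd-lock` strategy explicitly:

```yaml
#cloud-config

coreos:
  update:
    reboot-strategy: etcd-lock
```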
From 66a9c931eae88083fef41932db043855e20a20ac Mon Sep 17 00:00:00 2001
From: rdodev
Date: Mon, 30 Jun 2014 09:01:48 -0400
Subject: [PATCH 0140/1291] The account number is no longer displayed in the
 upper right corner. The user must click on it in order to see it.
---
running-coreos/cloud-providers/rackspace/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index fa96ed397..8c8e23a83 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -131,7 +131,7 @@ sudo pip install supernova
### Store Account Information
-Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which is displayed in the upper right-hand corner of the cloud control panel UI.
+Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which can be found by clicking on your rackspace username in the upper right-hand corner of the cloud control panel UI.
```ini
[production]
From 2d4862e602206df9fe20e087f4821243b6bd5d4f Mon Sep 17 00:00:00 2001
From: rdodev
Date: Mon, 30 Jun 2014 10:11:11 -0400
Subject: [PATCH 0141/1291] Added Control Panel instructions to Rackspace docs
---
running-coreos/cloud-providers/rackspace/index.md | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 8c8e23a83..95b6f2dd0 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -261,6 +261,19 @@ source ~/.bash_profile
+## Launch via Control Panel
+
+You can also launch servers with either the `alpha` or `beta` channel versions via the web-based Control Panel. To do so:
+
+ 1. log into your Rackspace Control Panel
+ 1. click on 'Severs'
+ 1. click on 'Create Server'
+ 1. choose server name and region
+ 1. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version
+ 1. choose flavor and use 'Advanced Options' to select SSH Key -- if available
+ 1. click on 'Create Server'
+
+
## Using CoreOS
Now that you have a machine booted it is time to play around.
From f721b91e28a40d7b058a34195d2aa2f6a61f79e4 Mon Sep 17 00:00:00 2001
From: rdodev
Date: Mon, 30 Jun 2014 10:18:31 -0400
Subject: [PATCH 0142/1291] Adding screenshot
---
running-coreos/cloud-providers/rackspace/index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 95b6f2dd0..272553583 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -269,7 +269,8 @@ You can also launch servers with either the `alpha` and `beta` channel versions
1. click on 'Severs'
1. click on 'Create Server'
1. choose server name and region
- 1. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version
+ 1. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version
+ 
1. choose flavor and use 'Advanced Options' to select SSH Key -- if available
1. click on 'Create Server'
From b4361e73ce8a1c83ec6c498805f28b00a7fe64b8 Mon Sep 17 00:00:00 2001
From: rdodev
Date: Mon, 30 Jun 2014 12:01:53 -0400
Subject: [PATCH 0143/1291] PR corrections
---
running-coreos/cloud-providers/rackspace/index.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 272553583..d2a44aa22 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -131,7 +131,7 @@ sudo pip install supernova
### Store Account Information
-Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which can be found by clicking on your rackspace username in the upper right-hand corner of the cloud control panel UI.
+Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which can be found by clicking on your Rackspace username in the upper right-hand corner of the cloud control panel UI.
```ini
[production]
@@ -266,13 +266,13 @@ source ~/.bash_profile
You can also launch servers with either the `alpha` and `beta` channel versions via the web-based Control Panel. To do so:
1. log into your Rackspace Control Panel
- 1. click on 'Severs'
- 1. click on 'Create Server'
- 1. choose server name and region
- 1. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version
+ 2. click on 'Servers'
+ 3. click on 'Create Server'
+ 4. choose server name and region
+ 5. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version

- 1. choose flavor and use 'Advanced Options' to select SSH Key -- if available
- 1. click on 'Create Server'
+ 6. choose flavor and use 'Advanced Options' to select SSH Key -- if available
+ 7. click on 'Create Server'
## Using CoreOS
From 0a2d0e66b7b4641f373b68f9bc7d95e9db2f2ec1 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 30 Jun 2014 10:34:27 -0700
Subject: [PATCH 0144/1291] feat(rackspace): add cloud-config note for UI
---
.../cloud-providers/rackspace/index.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index d2a44aa22..05ccf70bf 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -263,16 +263,16 @@ source ~/.bash_profile
## Launch via Control Panel
-You can also launch servers with either the `alpha` and `beta` channel versions via the web-based Control Panel. To do so:
+You can also launch servers with either the `alpha` or `beta` channel versions via the web-based Control Panel, although you can't provide cloud-config via the UI. To do so:
- 1. log into your Rackspace Control Panel
- 2. click on 'Servers'
- 3. click on 'Create Server'
- 4. choose server name and region
- 5. click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version
+ 1. Log into your Rackspace Control Panel
+ 2. Click on 'Servers'
+ 3. Click on 'Create Server'
+ 4. Choose server name and region
+ 5. Click on 'Linux', then on 'CoreOS' and finally choose '(alpha)' or '(beta)' version

- 6. choose flavor and use 'Advanced Options' to select SSH Key -- if available
- 7. click on 'Create Server'
+ 6. Choose flavor and use 'Advanced Options' to select SSH Key -- if available
+ 7. Click on 'Create Server'
## Using CoreOS
From fe8086d217b3a1a166f2b0f5519f6f5284951465 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 30 Jun 2014 16:02:48 -0700
Subject: [PATCH 0145/1291] feat(customizing-ssh): instructions for changing
the ssh port
---
.../setup/customizing-sshd/index.md | 33 ++++++++++++++-----
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/cluster-management/setup/customizing-sshd/index.md b/cluster-management/setup/customizing-sshd/index.md
index 9e785a593..16c9baab8 100644
--- a/cluster-management/setup/customizing-sshd/index.md
+++ b/cluster-management/setup/customizing-sshd/index.md
@@ -24,14 +24,31 @@ write_files:
permissions: 0600
owner: root:root
content: |
- # Use most defaults for sshd configuration.
- UsePrivilegeSeparation sandbox
- Subsystem sftp internal-sftp
-
- PermitRootLogin no
- AllowUsers core
- PasswordAuthentication no
- ChallengeResponseAuthentication no
+ # Use most defaults for sshd configuration.
+ UsePrivilegeSeparation sandbox
+ Subsystem sftp internal-sftp
+
+ PermitRootLogin no
+ AllowUsers core
+ PasswordAuthentication no
+ ChallengeResponseAuthentication no
+```
+
+## Changing the sshd Port
+
+CoreOS ships with socket-activated SSH by default. The configuration for this can be found at `/usr/lib/systemd/system/sshd.socket`. We're going to override this in the cloud-config provided at boot:
+
+```yaml
+#cloud-config
+
+coreos:
+ units:
+ - name: sshd.socket
+ command: restart
+ content: |
+ [Socket]
+ ListenStream=2222
+ Accept=yes
```
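On a machine that is already running, the same override can be applied by hand; a sketch using the standard systemd override location (this path is not taken from the guide above):

```ini
# /etc/systemd/system/sshd.socket — replaces the stock unit in /usr/lib
[Socket]
ListenStream=2222
Accept=yes
```

followed by `sudo systemctl daemon-reload && sudo systemctl restart sshd.socket`; connect afterwards with `ssh -p 2222 core@<host>`.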
## Further Reading
From f4d7361802482bb1a2332805ff13c0d12575c536 Mon Sep 17 00:00:00 2001
From: Paul Querna
Date: Wed, 2 Jul 2014 10:24:41 -0700
Subject: [PATCH 0146/1291] OpenStack is normally a CamelCase word
---
running-coreos/cloud-providers/rackspace/index.md | 2 +-
running-coreos/platforms/openstack/index.md | 4 ++--
running-coreos/platforms/vmware/index.md | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 05ccf70bf..f348f2b2d 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -9,7 +9,7 @@ weight: 5
# Running CoreOS on Rackspace
-CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running CoreOS on the Rackspace Openstack cloud, which differs slightly from the generic Openstack instructions. There are two ways to launch a CoreOS cluster: launch an entire cluster with Heat or launch machines with Nova.
+CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running CoreOS on the Rackspace OpenStack cloud, which differs slightly from the generic OpenStack instructions. There are two ways to launch a CoreOS cluster: launch an entire cluster with Heat or launch machines with Nova.
## Choosing a Channel
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index 091908b6c..da8318b30 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -57,13 +57,13 @@ $ glance image-create --name CoreOS \
## Cloud-Config
CoreOS allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config].
-We're going to provide our cloud-config to Openstack via the user-data flag. Our cloud-config will also contain SSH keys that will be used to connect to the instance.
+We're going to provide our cloud-config to OpenStack via the user-data flag. Our cloud-config will also contain SSH keys that will be used to connect to the instance.
In order for this to work your OpenStack cloud provider must support [config drive][config-drive] or the OpenStack metadata service.
[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
[config-drive]: http://docs.openstack.org/user-guide/content/config-drive.html
-The most common cloud-config for Openstack looks like:
+The most common cloud-config for OpenStack looks like:
```yaml
#cloud-config
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index 050d6f1ef..76db1a60f 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -70,7 +70,7 @@ The last step uploads the files to your ESXi datastore and registers your VM. Yo
Cloud-config can be specified by attaching a [config-drive]({{site.url}}/docs/cluster-management/setup/cloudinit-config-drive/) with the label `config-2`. This is commonly done through whatever interface allows for attaching cd-roms or new drives.
-Note that the config-drive standard was originally an Openstack feature, which is why you'll see strings containing `openstack`. This filepath needs to be retained, although CoreOS supports config-drive on all platforms.
+Note that the config-drive standard was originally an OpenStack feature, which is why you'll see strings containing `openstack`. This filepath needs to be retained, although CoreOS supports config-drive on all platforms.
For more information on customization that can be done with cloud-config, head on over to the [cloud-config guide]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
From 464441170c49a35aebf03a18fce4913b538a8041 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 3 Jul 2014 10:54:17 -0700
Subject: [PATCH 0147/1291] fix(launching-containers-fleet): typo
---
.../launching/launching-containers-fleet/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index a499a3492..d41e55b70 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -113,7 +113,7 @@ Second is `%H`, a variable built into systemd, that represents the hostname of t
The third is a [fleet-specific property]({{site.url}}/docs/launching-containers/launching/fleet-unit-files/) called `X-ConditionMachineOf`. This property causes the unit to be placed onto the same machine that `apache.1.service` is running on.
-Let's verify that each unit was placed on to the same machine as the Apache service is is bound to:
+Let's verify that each unit was placed on to the same machine as the Apache service is bound to:
```sh
$ fleetctl start apache-discovery.1.service
From 22a6f10c62d9523a390d42d02cb3973be6e51548 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 3 Jul 2014 10:57:39 -0700
Subject: [PATCH 0148/1291] fix(mounting-storage): missing colon in YAML
---
cluster-management/setup/mounting-storage/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index eed56639c..87282097a 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -37,7 +37,7 @@ We're going to bind mount a btrfs device to `/var/lib/docker`, where docker stor
```yaml
#cloud-config
coreos:
- units
+ units:
- name: format-ephemeral.service
command: start
content: |
From 258e5a782f4fb5e5d89d14c1ebf53b75f5a415fa Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Thu, 3 Jul 2014 14:19:57 -0700
Subject: [PATCH 0149/1291] cloud-config: Explicitly state which platforms can
use $private_ipv4
---
.../bare-metal/booting-with-pxe/index.md | 2 +
.../bare-metal/installing-to-disk/index.md | 2 +
.../cloud-providers/brightbox/index.md | 41 +++++++++++++------
running-coreos/cloud-providers/ec2/index.md | 2 +
.../google-compute-engine/index.md | 2 +
.../cloud-providers/rackspace/index.md | 2 +
.../cloud-providers/vexxhost/index.md | 2 +
running-coreos/platforms/libvirt/index.md | 2 +
running-coreos/platforms/openstack/index.md | 2 +
running-coreos/platforms/vagrant/index.md | 2 +
running-coreos/platforms/vmware/index.md | 2 +
11 files changed, 49 insertions(+), 12 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-pxe/index.md b/running-coreos/bare-metal/booting-with-pxe/index.md
index 78b96f395..716163487 100644
--- a/running-coreos/bare-metal/booting-with-pxe/index.md
+++ b/running-coreos/bare-metal/booting-with-pxe/index.md
@@ -70,6 +70,8 @@ ssh_authorized_keys:
You can view all of the [cloud-config options here]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are *not* supported on PXE systems.
+
### Choose a Channel
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 88365e0c6..c79899965 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -74,6 +74,8 @@ ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```
+Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are *not* supported when installing via the `coreos-install` script.
+
To start the installation script with a reference to our cloud-config file, run:
```
diff --git a/running-coreos/cloud-providers/brightbox/index.md b/running-coreos/cloud-providers/brightbox/index.md
index a42448ad6..c28e442fd 100644
--- a/running-coreos/cloud-providers/brightbox/index.md
+++ b/running-coreos/cloud-providers/brightbox/index.md
@@ -85,23 +85,40 @@ $ brightbox images list | grep CoreOS
{{site.brightbox-id}} brightbox official 2013-12-15 public 5442 CoreOS {{site.brightbox-version}} (x86_64)
```
-## Building Servers
-
-Before building the cluster, we need to generate a unique identifier for it, which is used by CoreOS to discover and identify nodes.
+## Cloud-Config
+
+CoreOS allows you to configure machine parameters, launch systemd units on
+startup and more via [cloud-config][cloud-config]. We're going to provide the
+`cloud-config` data via the `user-data-file` flag.
+
+[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
+
+A sample common `cloud-config` file will look something like the following:
+
+```yaml
+#cloud-config
+
+coreos:
+ etcd:
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+```
-You can use any random string so we’ll use the `uuid` tool here to generate one:
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Brightbox.
-```sh
-$ TOKEN=`uuid`
-
-$ echo $TOKEN
-53cf11d4-3726-11e3-958f-939d4f7f9688
-```
+## Building Servers
-Then build three servers using the image, in the server group we created and specifying the token as the user data:
+Now build three servers using the image, in the server group we created, specifying the cloud-config file as the user data:
```sh
-$ brightbox servers create -i 3 --type small --name "coreos" --user-data $TOKEN --server-groups grp-cdl6h {{site.brightbox-id}}
+$ brightbox servers create -i 3 --type small --name "coreos" --user-data-file ./user-data --server-groups grp-cdl6h {{site.brightbox-id}}
Creating 3 small (typ-8fych) servers with image CoreOS {{site.brightbox-version}} ({{ site.brightbox-id }}) in groups grp-cdl6h with 0.05k of user data
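The `$private_ipv4`/`$public_ipv4` substitution mentioned in the hunk above happens on the instance before the cloud-config is processed; conceptually it behaves like this sketch (illustrative addresses and a plain `sed`, not the actual coreos-cloudinit implementation):

```shell
#!/bin/sh
# stand-in values; on a real server these come from the provider's metadata
PRIVATE_IPV4="10.1.2.3"
PUBLIC_IPV4="198.51.100.7"

# a fragment of the cloud-config above, written with the variables unexpanded
printf 'addr: $private_ipv4:4001\npeer-addr: $private_ipv4:7001\n' > fragment.yml

# expand the variables the way cloud-init would
sed -e "s/\$private_ipv4/${PRIVATE_IPV4}/g" \
    -e "s/\$public_ipv4/${PUBLIC_IPV4}/g" fragment.yml
```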
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index cc2bc7828..1d4a9e69f 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -100,6 +100,8 @@ coreos:
command: start
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on EC2.
+
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index ab7d9b5d9..2eb711050 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -39,6 +39,8 @@ coreos:
command: start
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on GCE.
+
## Choosing a Channel
CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index f348f2b2d..7ad33400b 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -92,6 +92,8 @@ coreos:
command: start
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Rackspace.
+
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
### Mount Data Disk
diff --git a/running-coreos/cloud-providers/vexxhost/index.md b/running-coreos/cloud-providers/vexxhost/index.md
index 70fb31ce5..fba18a6ab 100644
--- a/running-coreos/cloud-providers/vexxhost/index.md
+++ b/running-coreos/cloud-providers/vexxhost/index.md
@@ -61,6 +61,8 @@ coreos:
command: start
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on VEXXHOST.
+
## Launch Cluster
You will need to install `python-novaclient` which supplies the OpenStack CLI
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 172e09b84..2e88ea9e2 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -121,6 +121,8 @@ ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
```
+Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are *not* supported on libvirt.
+
[cloud-config]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
### Network configuration
diff --git a/running-coreos/platforms/openstack/index.md b/running-coreos/platforms/openstack/index.md
index da8318b30..897241c5e 100644
--- a/running-coreos/platforms/openstack/index.md
+++ b/running-coreos/platforms/openstack/index.md
@@ -85,6 +85,8 @@ ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h...
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on most OpenStack deployments. Unfortunately, systems that rely on config drive may leave these values undefined.
+
## Launch Cluster
Boot the machines with the `nova` CLI, referencing the image ID from the import step above and your `cloud-config.yaml`:
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index c1fc7f287..d9758e6fb 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -68,6 +68,8 @@ coreos:
ExecStart=/usr/bin/fleet
```
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Vagrant. They will map to the first statically defined private and public networks defined in the Vagrantfile.
+
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
### Start up CoreOS
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index 76db1a60f..637d59a08 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -74,6 +74,8 @@ Note that the config-drive standard was originally an OpenStack feature, which i
For more information on customization that can be done with cloud-config, head on over to the [cloud-config guide]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+Note: The `$private_ipv4` and `$public_ipv4` substitution variables referenced in other documents are *not* supported on VMware.
+
## Logging in
Networking can take a bit of time to come up under VMware and you will need to
From f40f152235282c22a20fad7dbdc2ee9c20c481a9 Mon Sep 17 00:00:00 2001
From: kelvinn
Date: Sat, 5 Jul 2014 16:58:46 +1000
Subject: [PATCH 0150/1291] Update index.md
---
running-coreos/cloud-providers/vultr/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 8f9338418..41ff1f5b5 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -38,7 +38,7 @@ Create a new VPS (any server type and location of your choice), and then:
1. For the "Operating System" select "Custom"
2. Select iPXE boot
-3. Set the chain URL to the URL of your script (http://example.com/script.txt)
+3. Set the chain URL to the URL of your script (http://example.com/script.txt). *Note*: the URL must be plain HTTP; HTTPS is not supported
4. Click "Place Order"
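
Since the loader rejects HTTPS, a quick sanity check on the chain URL before placing the order can save a failed boot. A minimal sketch (the URL below is the placeholder from step 3):

```shell
# Refuse HTTPS chain URLs up front; Vultr's iPXE loader only speaks plain HTTP.
# http://example.com/script.txt is the placeholder URL from the steps above.
CHAIN_URL="http://example.com/script.txt"
case "$CHAIN_URL" in
  https://*) echo "error: the iPXE loader cannot fetch HTTPS URLs" >&2; exit 1 ;;
  http://*)  echo "ok: plain HTTP" ;;
  *)         echo "error: not an HTTP URL" >&2; exit 1 ;;
esac
```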

From 7a6ef642439eb1009b18560b1d8308bf2cb3c30b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 8 Jul 2014 10:21:50 -0700
Subject: [PATCH 0151/1291] fix(vagrant): use coreos.fleet for setting IP
---
running-coreos/platforms/vagrant/index.md | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index d9758e6fb..2a06d68f8 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -53,19 +53,13 @@ coreos:
discovery: https://discovery.etcd.io/
addr: $public_ipv4:4001
peer-addr: $public_ipv4:7001
+ fleet:
+ public-ip: $public_ipv4
units:
- name: etcd.service
command: start
- name: fleet.service
command: start
- runtime: no
- content: |
- [Unit]
- Description=fleet
-
- [Service]
- Environment=FLEET_PUBLIC_IP=$public_ipv4
- ExecStart=/usr/bin/fleet
```
The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Vagrant. They will map to the first statically defined private and public networks defined in the Vagrantfile.
From c92b10dc130534a3f547c8e5ac4d52ce01bc18b7 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 8 Jul 2014 15:31:48 -0700
Subject: [PATCH 0152/1291] fix(mounting-storage): remove reference to
non-existent unit
---
cluster-management/setup/mounting-storage/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/mounting-storage/index.md b/cluster-management/setup/mounting-storage/index.md
index 87282097a..bfefa9a7c 100644
--- a/cluster-management/setup/mounting-storage/index.md
+++ b/cluster-management/setup/mounting-storage/index.md
@@ -62,7 +62,7 @@ coreos:
Type=btrfs
```
-Notice that we're starting all three of these units at the same time and using the power of systemd to work out the dependencies for us. In this case, `docker-storage.service` requires `format-ephemeral.service`, ensuring that our storage will always be formatted before it is bind mounted. Docker will refuse to start otherwise.
+Notice that we're starting both units at the same time and using the power of systemd to work out the dependencies for us. In this case, `var-lib-docker.mount` requires `format-ephemeral.service`, ensuring that our storage will always be formatted before it is mounted. Docker will refuse to start otherwise.
## Further Reading
From f47baa350904712d9e2735fedf14b77c7647060e Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 8 Jul 2014 10:37:26 -0700
Subject: [PATCH 0153/1291] fix(customize-docker): improve systemd units
---
.../building/customizing-docker/index.md | 30 ++++++++-----------
1 file changed, 12 insertions(+), 18 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index b4fd2fedb..ef9efa695 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -12,7 +12,7 @@ The docker systemd unit can be customized by overriding the unit that ships with
## Enable the Remote API on a New Socket
-Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a tcp socket on port 2375.
+Create a file called `/etc/systemd/system/docker.socket` to make docker available on a TCP socket on port 2375.
```ini
[Unit]
@@ -20,20 +20,23 @@ Description=Docker Socket for the API
[Socket]
ListenStream=2375
-Service=docker.service
BindIPv6Only=both
[Install]
WantedBy=sockets.target
```
-Then enable this new socket:
+Docker has support for socket activation, which solves a common race condition during start up. If requests are sent over the socket before docker has started, they will be queued in the kernel and processed as soon as docker is ready.
+
+Since docker is socket-activated and already looking for the socket, all we need to do is restart it after the socket file has been written to disk:
+
+```sh
+systemctl restart docker
+```
+
+Test that it's working:
```sh
-systemctl enable docker-tcp.socket
-systemctl stop docker
-systemctl start docker-tcp.socket
-systemctl start docker
docker -H tcp://127.0.0.1:2375 ps
```
@@ -46,28 +49,19 @@ To enable the remote API on every CoreOS machine in a cluster, use [cloud-config
coreos:
units:
- - name: docker-tcp.socket
+ - name: docker.socket
command: start
+ enable: yes
content: |
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
- Service=docker.service
BindIPv6Only=both
[Install]
WantedBy=sockets.target
- - name: enable-docker-tcp.service
- command: start
- content: |
- [Unit]
- Description=Enable the Docker Socket for the API
-
- [Service]
- Type=oneshot
- ExecStart=/usr/bin/systemctl enable docker-tcp.socket
```
To keep access to the port local, replace the `ListenStream` configuration above with:
From 2eb2da4fb631c2c00731d61ffcd35a8d5044ce5a Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 11 Jul 2014 11:02:08 -0700
Subject: [PATCH 0154/1291] fix(vagrant): update single machine config +
updating config note
---
running-coreos/platforms/vagrant/index.md | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 2a06d68f8..675858506 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -64,6 +64,8 @@ coreos:
The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Vagrant. They will map to the first statically defined private and public networks defined in the Vagrantfile.
+If you wish to update your cloud-config later on, `vagrant up --provision` must be run to apply the new file.
+
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
### Start up CoreOS
@@ -152,19 +154,13 @@ coreos:
etcd:
addr: $public_ipv4:4001
peer-addr: $public_ipv4:7001
+ fleet:
+ public-ip: $public_ipv4
units:
- name: etcd.service
command: start
- name: fleet.service
command: start
- runtime: no
- content: |
- [Unit]
- Description=fleet
-
- [Service]
- Environment=FLEET_PUBLIC_IP=$public_ipv4
- ExecStart=/usr/bin/fleet
```
### Start up CoreOS
From b86db9b70b75cb1e9715ce97d500cca7effc5954 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 14 Jul 2014 11:51:40 -0700
Subject: [PATCH 0155/1291] feat(ec2): list PV and HVM AMIs
---
running-coreos/cloud-providers/ec2/index.md | 32 +++++++++++++++------
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 1d4a9e69f..438d6d8da 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -7,8 +7,9 @@ supported: true
weight: 1
cloud-formation-launch-logo: https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png
---
-{% capture cf_alpha_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha.template{% endcapture %}
-{% capture cf_beta_template %}{{ site.https-s3 }}/dist/aws/coreos-beta.template{% endcapture %}
+{% capture cf_alpha_pv_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha-pv.template{% endcapture %}
+{% capture cf_alpha_hvm_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha-hvm.template{% endcapture %}
+{% capture cf_beta_pv_template %}{{ site.https-s3 }}/dist/aws/coreos-beta-pv.template{% endcapture %}
# Running CoreOS on EC2
@@ -32,6 +33,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
EC2 Region
+
AMI Type
AMI ID
CloudFormation
@@ -39,9 +41,15 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
{% for region in site.data.alpha-channel.amis %}
+
{% endfor %}
@@ -140,7 +156,7 @@ For more information about mounting storage, Amazon's [own documentation](http:/
To add more instances to the cluster, just launch more with the same cloud-config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.
## Multiple Clusters
-If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3]({{ cf_beta_template }}).
+If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3]({{ cf_beta_pv_template }}).
## Manual setup
From d2887d766f413103e64539899cc99297edd7ee27 Mon Sep 17 00:00:00 2001
From: Alex Crawford
Date: Mon, 14 Jul 2014 13:17:44 -0700
Subject: [PATCH 0156/1291] cluster-management: Clarify the use of cloud-init
with networkd units
---
.../setup/network-config-with-networkd/index.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/cluster-management/setup/network-config-with-networkd/index.md b/cluster-management/setup/network-config-with-networkd/index.md
index 321acbace..e53a7f4e0 100644
--- a/cluster-management/setup/network-config-with-networkd/index.md
+++ b/cluster-management/setup/network-config-with-networkd/index.md
@@ -10,7 +10,9 @@ weight: 7
CoreOS machines are preconfigured with [networking customized]({{site.url}}/docs/sdk-distributors/distributors/notes-for-distributors) for each platform. You can write your own networkd units to replace or override the units created for each platform. This article covers a subset of networkd functionality. You can view the [full docs here](http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html).
-Drop a file in `/etc/systemd/network/` or inject a file on boot via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#write_files) to override an existing file. Let's take a look at two common situations: using a static IP and turning off DHCP.
+Drop a networkd unit in `/etc/systemd/network/` or inject a unit on boot via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) to override an existing unit. Network units injected via the `coreos.units` node in the cloud-config will automatically trigger a networkd reload in order for changes to be applied. Files placed on the filesystem will need to reload networkd afterwards with `sudo systemctl restart systemd-networkd`.
+
+Let's take a look at two common situations: using a static IP and turning off DHCP.
## Static Networking
From a210355d062e284692bb03763113d9b6831c061b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 14 Jul 2014 14:20:04 -0700
Subject: [PATCH 0157/1291] fix(vmware): update link location
---
running-coreos/platforms/vmware/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vmware/index.md b/running-coreos/platforms/vmware/index.md
index 637d59a08..e34c2f903 100644
--- a/running-coreos/platforms/vmware/index.md
+++ b/running-coreos/platforms/vmware/index.md
@@ -35,7 +35,7 @@ open coreos_production_vmware_insecure.vmx
### To deploy on an ESXi/vSphere host, convert the VM to OVF
* follow the steps above to download and extract the coreos_production_vmware_insecure.zip
-* download and run the [OVF Tool 3.5.0 installer](https://developercenter.vmware.com/tool/ovf) Requires VMware account login but the download is free. Available for Linux, OSX & Windows for both 32 & 64 bit architectures.
+* download and run the [OVF Tool 3.5.0 installer](https://developercenter.vmware.com/tool/ovf/3.5.0). Requires a VMware account login, but the download is free. Available for Linux, OS X, and Windows for both 32- and 64-bit architectures.
* convert VM to OVF from the extract dir
```sh
From 924dde64c6956ec8ac167abac232be1e795c8342 Mon Sep 17 00:00:00 2001
From: Michael Marineau
Date: Mon, 14 Jul 2014 15:11:04 -0700
Subject: [PATCH 0158/1291] update-engine: add documentation for using proxies
---
.../setup/update-strategies/index.md | 21 ++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/cluster-management/setup/update-strategies/index.md b/cluster-management/setup/update-strategies/index.md
index 360d5b76c..22663fc51 100644
--- a/cluster-management/setup/update-strategies/index.md
+++ b/cluster-management/setup/update-strategies/index.md
@@ -81,4 +81,23 @@ The `off` strategy is also straightforward. The update will be installed onto th
PXE/iPXE machines download a new copy of CoreOS every time they are started and are thus dependent on the version of CoreOS they are served. If you don't automatically load new CoreOS images into your PXE/iPXE server, your machines will never have new features or security updates.
-An easy solution to this problem is to use iPXE and reference images [directly from the CoreOS storage site]({{site.url}}/docs/running-coreos/bare-metal/booting-with-ipxe/#setting-up-the-boot-script). The `alpha` URL is automatically pointed to the new version of CoreOS as it is released.
\ No newline at end of file
+An easy solution to this problem is to use iPXE and reference images [directly from the CoreOS storage site]({{site.url}}/docs/running-coreos/bare-metal/booting-with-ipxe/#setting-up-the-boot-script). The `alpha` URL is automatically pointed to the new version of CoreOS as it is released.
+
+## Updating Behind a Proxy
+
+Public Internet access is required to contact CoreUpdate and download new versions of CoreOS.
+If direct access is not available, the `update-engine` service may be configured to use an HTTP or SOCKS proxy via curl-compatible environment variables, such as `HTTPS_PROXY` or `ALL_PROXY`.
+See [curl's documentation](http://curl.haxx.se/docs/manpage.html#ALLPROXY) for details.
+
+```yaml
+#cloud-config
+write_files:
+ - path: /etc/systemd/system/update-engine.service.d/proxy.conf
+ content: |
+ [Service]
+ Environment=ALL_PROXY=http://proxy.example.com:3128
+coreos:
+ units:
+ - name: update-engine.service
+ command: restart
+```
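
On a machine that is already running, the same drop-in can be written by hand. A sketch that stages the file in a temporary directory for review first (proxy.example.com:3128 is a placeholder address; on the host the file belongs in `/etc/systemd/system/update-engine.service.d/`, followed by `sudo systemctl daemon-reload` and `sudo systemctl restart update-engine`):

```shell
# Stage the drop-in locally before copying it into place.
# The proxy address is a placeholder, as in the cloud-config above.
STAGE_DIR="$(mktemp -d)"
cat > "$STAGE_DIR/proxy.conf" <<'EOF'
[Service]
Environment=ALL_PROXY=http://proxy.example.com:3128
EOF
cat "$STAGE_DIR/proxy.conf"
```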
From 9a9189f18d0e5c152a2765bada8c63cedeb6d3d4 Mon Sep 17 00:00:00 2001
From: Ed Rooth
Date: Mon, 14 Jul 2014 17:19:33 -0700
Subject: [PATCH 0159/1291] custominzing-docker: rename docker.socket to
docker-tcp.socket
---
.../building/customizing-docker/index.md | 24 ++++++++++++++-----
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/launching-containers/building/customizing-docker/index.md b/launching-containers/building/customizing-docker/index.md
index ef9efa695..02102d871 100644
--- a/launching-containers/building/customizing-docker/index.md
+++ b/launching-containers/building/customizing-docker/index.md
@@ -12,7 +12,7 @@ The docker systemd unit can be customized by overriding the unit that ships with
## Enable the Remote API on a New Socket
-Create a file called `/etc/systemd/system/docker.socket` to make docker available on a TCP socket on port 2375.
+Create a file called `/etc/systemd/system/docker-tcp.socket` to make docker available on a TCP socket on port 2375.
```ini
[Unit]
@@ -21,17 +21,19 @@ Description=Docker Socket for the API
[Socket]
ListenStream=2375
BindIPv6Only=both
+Service=docker.service
[Install]
WantedBy=sockets.target
```
-Docker has support for socket activation, which solves a common race condition during start up. If requests are sent over the socket before docker has started, they will be queued in the kernel and processed as soon as docker is ready.
-
-Since docker is socket-activated and already looking for the socket, all we need to do is restart it after the socket file has been written to disk:
+Then enable this new socket:
```sh
-systemctl restart docker
+systemctl enable docker-tcp.socket
+systemctl stop docker
+systemctl start docker-tcp.socket
+systemctl start docker
```
Test that it's working:
@@ -49,7 +51,7 @@ To enable the remote API on every CoreOS machine in a cluster, use [cloud-config
coreos:
units:
- - name: docker.socket
+ - name: docker-tcp.socket
command: start
enable: yes
content: |
@@ -59,9 +61,19 @@ coreos:
[Socket]
ListenStream=2375
BindIPv6Only=both
+ Service=docker.service
[Install]
WantedBy=sockets.target
+ - name: enable-docker-tcp.service
+ command: start
+ content: |
+ [Unit]
+ Description=Enable the Docker Socket for the API
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/systemctl enable docker-tcp.socket
```
To keep access to the port local, replace the `ListenStream` configuration above with:
From b52a76e668ad5eb02e31bb52ef6f6f1e7fbfe751 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 15 Jul 2014 17:45:17 -0700
Subject: [PATCH 0160/1291] feat(ec2): add beta HVM AMIs
---
running-coreos/cloud-providers/ec2/index.md | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index 438d6d8da..05422b0c5 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -71,17 +71,16 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
{% for region in site.data.beta-channel.amis %}
{% endfor %}
From 7422b84286e00f0e30fd72442f79ea2c1e438501 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 16 Jul 2014 21:41:17 -0700
Subject: [PATCH 0162/1291] fix(ec2): remove unneeded variable
---
running-coreos/cloud-providers/ec2/index.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/running-coreos/cloud-providers/ec2/index.md b/running-coreos/cloud-providers/ec2/index.md
index fb0de0321..770692067 100644
--- a/running-coreos/cloud-providers/ec2/index.md
+++ b/running-coreos/cloud-providers/ec2/index.md
@@ -10,6 +10,7 @@ cloud-formation-launch-logo: https://s3.amazonaws.com/cloudformation-examples/cl
{% capture cf_alpha_pv_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha-pv.template{% endcapture %}
{% capture cf_alpha_hvm_template %}{{ site.https-s3 }}/dist/aws/coreos-alpha-hvm.template{% endcapture %}
{% capture cf_beta_pv_template %}{{ site.https-s3 }}/dist/aws/coreos-beta-pv.template{% endcapture %}
+{% capture cf_beta_hvm_template %}{{ site.https-s3 }}/dist/aws/coreos-beta-hvm.template{% endcapture %}
# Running CoreOS on EC2
@@ -159,8 +160,6 @@ If you would like to create multiple clusters you will need to change the "Stack
## Manual setup
-[us-east-latest-quicklaunch]: https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{ami-us-east-1}} "{{ami-us-east-1}}"
-
{% for region in site.data.alpha-channel.amis %}
{% if region.name == 'us-east-1' %}
**TL;DR:** launch three instances of [{{region.ami-id}}](https://console.aws.amazon.com/ec2/home?region={{region.name}}#launchAmi={{region.ami-id}}) in **{{region.name}}** with a security group that has open port 22, 4001, and 7001 and the same "User Data" of each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [docker][docker-docs] to play with.
From c46f11ba6cb3b4961d83e6d7b0deb07c029a9c4f Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 17 Jul 2014 11:25:09 -0700
Subject: [PATCH 0163/1291] feat(qemu): add beta and stable tabs
---
running-coreos/platforms/qemu/index.md | 41 +++++++++++++++++++-------
1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index 5a29feaa9..bc798f731 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -81,19 +81,38 @@ image.
### Choosing a Channel
-CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-
-The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-
-There are two files you need: the disk image (provided in qcow2
-format) and the wrapper shell script to start QEMU.
-
-```sh
-mkdir coreos; cd coreos
+CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
+
There are two files you need: the disk image (provided in qcow2
+ format) and the wrapper shell script to start QEMU.
Starting is as simple as:
From ced93c790569e445c7fcd6ba999b0986ff74a3d4 Mon Sep 17 00:00:00 2001
From: Kyle Kelley
Date: Fri, 18 Jul 2014 12:14:54 -0500
Subject: [PATCH 0164/1291] Typo fix
---
running-coreos/cloud-providers/rackspace/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index 7ad33400b..e110a9f2c 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -236,7 +236,7 @@ export OS_PASSWORD=
export OS_AUTH_SYSTEM=rackspace
```
-If you have credentials already set up for use with the Nova CLI, they may conflict due to oddities in these tools. Re-source your credientials:
+If you have credentials already set up for use with the Nova CLI, they may conflict due to oddities in these tools. Re-source your credentials:
```sh
source ~/.bash_profile
From c9bc01a33d9aff025678f7a8ed808362a2e86982 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Sat, 19 Jul 2014 12:51:57 -0700
Subject: [PATCH 0165/1291] feat(getting-started-etcd): more clarity around
reading etcd in a container
---
.../getting-started-with-etcd/index.md | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/distributed-configuration/getting-started-with-etcd/index.md b/distributed-configuration/getting-started-with-etcd/index.md
index 85fd97a07..f8ecf9570 100644
--- a/distributed-configuration/getting-started-with-etcd/index.md
+++ b/distributed-configuration/getting-started-with-etcd/index.md
@@ -59,7 +59,12 @@ $ curl -L -X DELETE http://127.0.0.1:4001/v2/keys/message
## Reading and Writing from Inside a Container
-To read and write to etcd from *within a container* you must use the `docker0` interface which you can find in `ip address show`. It's normally `172.17.42.1` and using it is as easy as replacing `127.0.0.1`.
+To read and write to etcd from *within a container* you must use the IP address assigned to the `docker0` interface on the CoreOS host. From the host, run `ip address show` to find this address. It's normally `172.17.42.1` and using it is as easy as replacing `127.0.0.1` while running `curl` in the container:
+
+```
+$ curl -L http://172.17.42.1:4001/v2/keys/
+{"action":"get","node":{"key":"/","dir":true,"nodes":[{"key":"/coreos.com","dir":true,"modifiedIndex":4,"createdIndex":4}]}}
+```
## Proxy Example
From 099235acf2e8cf2cec7d8d9adebcb0f296e0c816 Mon Sep 17 00:00:00 2001
From: Brandon Philips
Date: Mon, 21 Jul 2014 11:02:38 -0700
Subject: [PATCH 0166/1291] cluster-management: add ca doc
Based on questsions here:
https://github.com/coreos/coreos-overlay/issues/327
---
.../adding-certificate-authorities/index.md | 23 +++++++++++++++++++
1 file changed, 23 insertions(+)
create mode 100644 cluster-management/setup/adding-certificate-authorities/index.md
diff --git a/cluster-management/setup/adding-certificate-authorities/index.md b/cluster-management/setup/adding-certificate-authorities/index.md
new file mode 100644
index 000000000..fb5212a94
--- /dev/null
+++ b/cluster-management/setup/adding-certificate-authorities/index.md
@@ -0,0 +1,23 @@
+---
+layout: docs
+title: Adding Certificate Authorities
+category: cluster_management
+sub_category: setting_up
+weight: 7
+---
+
+# Custom Certificate Authorities
+
+What if we restructured this paragraph to list the reasons first, then have it lead into the steps:
+
+CoreOS supports custom Certificate Authorities (CAs) in addition to the default list of trusted CAs. Adding your own CA allows you to:
+
+- Use a corporate wildcard certificate
+- Use your own CA to communicate with an installation of CoreUpdate
+- Use your own CA to communicate with a private docker registry
+
+The setup process for any of these use-cases is the same:
+
+1. Drop the certificate authority PEM file into `/etc/ssl/certs`
+
+2. Run the `update-ca-certificates` script
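
The two steps can be rehearsed against a throwaway directory standing in for `/etc/ssl/certs` before touching a real machine (`my-ca.pem` is a placeholder filename):

```shell
# Dry run of the CA install; my-ca.pem is a placeholder.
# On a real CoreOS machine the commands are:
#   sudo cp my-ca.pem /etc/ssl/certs/ && sudo update-ca-certificates
CERT_DIR="$(mktemp -d)"       # stand-in for /etc/ssl/certs
cat > my-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
(base64-encoded certificate body)
-----END CERTIFICATE-----
EOF
cp my-ca.pem "$CERT_DIR/"     # step 1: drop the PEM into the cert dir
ls "$CERT_DIR"                # step 2 runs on the host: update-ca-certificates
```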
From f5a5a90edf95bc205ae5485920d5560c14a26723 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 21 Jul 2014 11:47:49 -0700
Subject: [PATCH 0167/1291] fix(adding CA): remove unneeded line
---
.../setup/adding-certificate-authorities/index.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/cluster-management/setup/adding-certificate-authorities/index.md b/cluster-management/setup/adding-certificate-authorities/index.md
index fb5212a94..bc453d240 100644
--- a/cluster-management/setup/adding-certificate-authorities/index.md
+++ b/cluster-management/setup/adding-certificate-authorities/index.md
@@ -8,8 +8,6 @@ weight: 7
# Custom Certificate Authorities
-What if we restructured this paragraph to list the reasons first, then have it lead into the steps:
-
CoreOS supports custom Certificate Authorities (CAs) in addition to the default list of trusted CAs. Adding your own CA allows you to:
- Use a corporate wildcard certificate
From 0796d89a4c85b15f9db5b6cb9d6b6d0e31aab7d3 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 21 Jul 2014 12:08:28 -0700
Subject: [PATCH 0168/1291] feat(getting-started-etcd): fetch docker0 ip
programmatically
---
.../getting-started-with-etcd/index.md | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/distributed-configuration/getting-started-with-etcd/index.md b/distributed-configuration/getting-started-with-etcd/index.md
index f8ecf9570..484f3dddc 100644
--- a/distributed-configuration/getting-started-with-etcd/index.md
+++ b/distributed-configuration/getting-started-with-etcd/index.md
@@ -59,13 +59,21 @@ $ curl -L -X DELETE http://127.0.0.1:4001/v2/keys/message
## Reading and Writing from Inside a Container
-To read and write to etcd from *within a container* you must use the IP address assigned to the `docker0` interface on the CoreOS host. From the host, run `ip address show` to find this address. It's normally `172.17.42.1` and using it is as easy as replacing `127.0.0.1` while running `curl` in the container:
+To read and write to etcd from *within a container* you must use the IP address assigned to the `docker0` interface on the CoreOS host. From the host, run `ip address show` to find this address. It's normally `172.17.42.1`.
+
+To read from etcd, replace `127.0.0.1` when running `curl` in the container:
```
$ curl -L http://172.17.42.1:4001/v2/keys/
{"action":"get","node":{"key":"/","dir":true,"nodes":[{"key":"/coreos.com","dir":true,"modifiedIndex":4,"createdIndex":4}]}}
```
+You can also fetch the `docker0` IP programmatically:
+
+```
+ETCD_ENDPOINT="$(ifconfig docker0 | grep 'inet ' | awk '{ print $2}'):4001"
+```
+
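The variable can then be dropped into the earlier `curl` commands. A sketch with the typical `docker0` address hard-coded in place of the `ifconfig` lookup, so the final URL shape is visible:

```shell
# Build the etcd keys URL from the endpoint variable; 172.17.42.1 is the
# typical docker0 address, hard-coded here so the sketch is self-contained.
ETCD_ENDPOINT="172.17.42.1:4001"
KEYS_URL="http://${ETCD_ENDPOINT}/v2/keys/"
echo "$KEYS_URL"   # pass to: curl -L "$KEYS_URL"
```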
## Proxy Example
Let's pretend we're setting up a service that consists of a few containers that are behind a proxy container. We can use etcd to announce these containers when they start by creating a directory, having each container write a key within that directory and have the proxy watch the entire directory. We're going to skip creating the containers here but the [docker guide]({{site.url}}/docs/launching-containers/building/getting-started-with-docker) is a good place to start for that.
From b4d3d32ea802cba444414ca1e22e5e63bc930b5f Mon Sep 17 00:00:00 2001
From: Aris Pikeas
Date: Mon, 21 Jul 2014 21:50:56 -0700
Subject: [PATCH 0169/1291] Add cloudinit mention to Vagrant setup docs
---
running-coreos/platforms/vagrant/index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 675858506..bad04474c 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -64,7 +64,8 @@ coreos:
The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Vagrant. They will map to the first statically defined private and public networks defined in the Vagrantfile.
-If you wish to update your cloud-config later on, `vagrant up --provision` must be run to apply the new file.
+Your Vagrantfile should copy your cloud-config file to `/var/lib/coreos-vagrant/vagrantfile-user-data`. The provided Vagrantfile is already configured to do this. `cloudinit` reads `vagrantfile-user-data` on every boot and uses it to create the machine's user-data file. If you wish to update your cloud-config later on, `vagrant up --provision` must be run to apply the new file.
+
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
From 1341556732ab0c7a0bc7a0066e2af2431828549b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 24 Jul 2014 10:55:31 -0700
Subject: [PATCH 0170/1291] feat(rackspace): add OnMetal info
---
.../cloud-providers/rackspace/index.md | 22 ++++++++++++++-----
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/running-coreos/cloud-providers/rackspace/index.md b/running-coreos/cloud-providers/rackspace/index.md
index e110a9f2c..d79487fe8 100644
--- a/running-coreos/cloud-providers/rackspace/index.md
+++ b/running-coreos/cloud-providers/rackspace/index.md
@@ -29,17 +29,22 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
Launch the stack by providing the specified parameters. This command references the local file `data.yaml` in the current working directory, which contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
From 91637f7c7bf9d2c876aeb5c963e6ab9682ba6364 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 25 Jul 2014 00:30:53 -0700
Subject: [PATCH 0171/1291] feat(*): add stable images (git status)
---
.../bare-metal/booting-with-ipxe/index.md | 15 ++-
.../bare-metal/booting-with-pxe/index.md | 18 ++-
.../bare-metal/installing-to-disk/index.md | 10 +-
running-coreos/cloud-providers/ec2/index.md | 122 +++++++++++++++++-
.../google-compute-engine/index.md | 9 +-
running-coreos/platforms/iso/index.md | 14 +-
6 files changed, 171 insertions(+), 17 deletions(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index 5e6261dfe..ea1118af1 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -27,7 +27,8 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
iPXE downloads a boot script from a publicly available URL. You will need to host this URL somewhere public and replace the example SSH key with your own. You can also run a custom iPXE server.
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index c79899965..9238cd1ed 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -29,7 +29,8 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
@@ -38,11 +39,16 @@ CoreOS is released into alpha and beta channels. Releases to each channel serve
If you want to ensure you are installing the latest alpha version, use the -C option:
coreos-install -d /dev/sda -C alpha
-
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
If you want to ensure you are installing the latest beta version, use the -C option:
coreos-install -d /dev/sda -C beta
+
+
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
+
If you want to ensure you are installing the latest stable version, use the -C option:
@@ -56,7 +59,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
-
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
@@ -86,6 +89,36 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
+
+
+
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
+
+
+
+
+
EC2 Region | AMI Type | AMI ID | CloudFormation
+
+
+ {% for region in site.data.stable-channel.amis %}
+
@@ -162,7 +195,7 @@ If you would like to create multiple clusters you will need to change the "Stack
{% for region in site.data.alpha-channel.amis %}
{% if region.name == 'us-east-1' %}
-**TL;DR:** launch three instances of [{{region.ami-id}}](https://console.aws.amazon.com/ec2/home?region={{region.name}}#launchAmi={{region.ami-id}}) in **{{region.name}}** with a security group that has open port 22, 4001, and 7001 and the same "User Data" of each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [docker][docker-docs] to play with.
+**TL;DR:** launch three instances of [{{region.pv}}](https://console.aws.amazon.com/ec2/home?region={{region.name}}#launchAmi={{region.pv}}) in **{{region.name}}** with a security group that has ports 22, 4001, and 7001 open and the same "User Data" on each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [docker][docker-docs] to play with.
{% endif %}
{% endfor %}
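The three ports named above can be gathered in a small loop; the actual EC2 CLI call is only sketched in a comment, since the flags and credentials depend on your setup:

```shell
GROUP="coreos-testing"   # example security group name used in this guide
OPENED=""

# Ports CoreOS hosts need open: SSH (22), etcd client (4001), etcd peer (7001)
for port in 22 4001 7001; do
  OPENED="${OPENED}${port} "
  # Hypothetical sketch of the corresponding EC2 CLI call (needs credentials):
  # aws ec2 authorize-security-group-ingress --group-name "$GROUP" \
  #   --protocol tcp --port "$port" --cidr 0.0.0.0/0
done

echo "ports: ${OPENED% }"
```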
@@ -198,7 +231,8 @@ First we need to create a security group to allow CoreOS instances to communicat
@@ -208,7 +242,7 @@ First we need to create a security group to allow CoreOS instances to communicat
{% for region in site.data.alpha-channel.amis %}
{% if region.name == 'us-east-1' %}
- Open the quick launch wizard to boot {{region.ami-id}}.
+ Open the quick launch wizard to boot {{region.pv}}.
{% endif %}
{% endfor %}
@@ -276,13 +310,87 @@ coreos:
-
+
We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
{% for region in site.data.beta-channel.amis %}
{% if region.name == 'us-east-1' %}
- Open the quick launch wizard to boot {{region.ami-id}}.
+ Open the quick launch wizard to boot {{region.pv}}.
+ {% endif %}
+ {% endfor %}
+
+
+ On the second page of the wizard, launch 3 servers to test our clustering: set "Number of instances" to 3, then click "Continue".
+
+
+
+ Next, we need to specify a discovery URL, which contains a unique token that allows us to find the other hosts in our cluster. If you're launching your first machine, generate one at https://discovery.etcd.io/new and add it to the metadata. Re-use the same discovery URL for each machine in the cluster.
+
+
+#cloud-config
+
+coreos:
+ etcd:
+ # generate a new token from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/<token>
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+
+
+ Back in the EC2 dashboard, paste this information verbatim into the "User Data" field, then click "Continue".
+
+
+ On the next wizard pages, accept the defaults for "Storage Configuration" and "Tags", clicking "Continue" on each.
+
+ Create Key Pair: choose any key; it will be added in addition to the one in the gist. Click "Continue".
+
+ Choose one or more of your existing Security Groups: select "coreos-testing" as created above, then click "Continue".
+
+ Launch!
+
+
+
+
+
We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
+
+
+ {% for region in site.data.stable-channel.amis %}
+ {% if region.name == 'us-east-1' %}
+ Open the quick launch wizard to boot {{region.pv}}.
{% endif %}
{% endfor %}
diff --git a/running-coreos/cloud-providers/google-compute-engine/index.md b/running-coreos/cloud-providers/google-compute-engine/index.md
index 2eb711050..c3d3f833a 100644
--- a/running-coreos/cloud-providers/google-compute-engine/index.md
+++ b/running-coreos/cloud-providers/google-compute-engine/index.md
@@ -49,7 +49,8 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
@@ -57,10 +58,14 @@ Create 3 instances from the image above using our cloud-config from `cloud-confi
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
diff --git a/running-coreos/platforms/iso/index.md b/running-coreos/platforms/iso/index.md
index 35593eb0d..9b3943469 100644
--- a/running-coreos/platforms/iso/index.md
+++ b/running-coreos/platforms/iso/index.md
@@ -13,7 +13,8 @@ The latest CoreOS ISOs can be downloaded from the image storage site:
@@ -26,7 +27,7 @@ The latest CoreOS ISOs can be downloaded from the image storage site:
All of the files necessary to verify the image can be found on the storage site.
-
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
@@ -35,6 +36,15 @@ The latest CoreOS ISOs can be downloaded from the image storage site:
All of the files necessary to verify the image can be found on the storage site.
+
+
+
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
@@ -48,7 +49,7 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
-
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.data.beta-channel.rackspace-version}}.
@@ -69,6 +70,27 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
+
+
+
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.data.stable-channel.rackspace-version}}.
Launch the stack by providing the specified parameters. This command references the local file `data.yaml` in the current working directory, which contains the cloud-config parameters. `$(< data.yaml)` prints the contents of this file into our heat command:
From 678055c93867b2db84a4355961752c14f7beabd4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Filip=20Dupanovi=C4=87?=
Date: Sat, 26 Jul 2014 22:27:51 +0200
Subject: [PATCH 0173/1291] fix(getting-started-with-systemd): address small
typo
---
.../launching/getting-started-with-systemd/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index bb34f5f77..5cd3adb42 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -77,7 +77,7 @@ systemd provides a high degree of functionality in your unit files. Here's a cur
| ExecReload | Commands that will run when this unit is reloaded via `systemctl reload foo.service` |
| ExecStop | Commands that will run when this unit is considered failed or if it is stopped via `systemctl stop foo.service` |
| ExecStopPost | Commands that will run after `ExecStop` has completed. |
-| RestartSec | The amount of time to sleep before restarting a serivce. Useful to prevent your failed service from attempting to restart itself every 100ms. |
+| RestartSec | The amount of time to sleep before restarting a service. Useful to prevent your failed service from attempting to restart itself every 100ms. |
The full list is located on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.service.html).
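As a minimal illustration of `RestartSec` in practice (the unit below is a sketch, not from this guide; the container name is hypothetical):

```ini
[Service]
TimeoutStartSec=0
# Restart on failure, but back off for 10 seconds instead of the default 100ms
Restart=on-failure
RestartSec=10s
ExecStartPre=-/usr/bin/docker rm myapp1
ExecStart=/usr/bin/docker run --name myapp1 busybox /bin/sh -c "sleep 5; exit 1"
```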
From 353b48b529d4adcffd41cabd422cc850e48f650d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Filip=20Dupanovi=C4=87?=
Date: Sat, 26 Jul 2014 22:34:18 +0200
Subject: [PATCH 0174/1291] feat(getting-started-with-systemd): reference
systemd unit specifiers
---
.../launching/getting-started-with-systemd/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 5cd3adb42..981f8587a 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -105,7 +105,7 @@ ExecStopPost=/usr/bin/etcdctl rm /domains/example.com/10.10.10.123:8081
WantedBy=multi-user.target
```
-## Unit Variables
+## Unit Specifiers
In our last example we had to hardcode our IP address when we announced our container in etcd. That's not scalable and systemd has a few variables built in to help us out. Here's a few of the most useful:
@@ -116,7 +116,7 @@ In our last example we had to hardcode our IP address when we announced our cont
| `%b` | BootID | Similar to the machine ID, but this value is random and changes on each boot |
| `%H` | Hostname | Allows you to run the same unit file across many machines. Useful for service discovery. Example: `/domains/example.com/%H:8081` |
-A full list is on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.unit.html).
+A full list of specifiers can be found on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers).
## Instantiated Units
From e9c08cb8a630634f04f05039c72b9bd2a9d6bdb5 Mon Sep 17 00:00:00 2001
From: none
Date: Sun, 27 Jul 2014 16:53:19 +0100
Subject: [PATCH 0175/1291] Adding stable channel to the vagrant docs
---
running-coreos/platforms/vagrant/index.md | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index 675858506..9c863a3b2 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -76,7 +76,8 @@ CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updat
The stable channel consists of promoted beta releases. Current version is CoreOS {{site.stable-channel}}.
+
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.stable-channel}}.
Rename the file to config.rb then uncomment and modify:
config.rb
# Size of the CoreOS cluster created by Vagrant
From 83b3b7b7f629f45197ff41990046247974a1a34f Mon Sep 17 00:00:00 2001
From: c4t3l
Date: Mon, 28 Jul 2014 11:42:22 -0500
Subject: [PATCH 0177/1291] Update channel definitions
Updated the channel definitions to reflect stable. Created table to ensure consistency with other install pages.
---
running-coreos/platforms/libvirt/index.md | 43 ++++++++++++++++++-----
1 file changed, 34 insertions(+), 9 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 2e88ea9e2..3688c519a 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -26,16 +26,41 @@ to substitute that path if you use another one.
CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel.
-The channel is selected based on the URL below. Simply replace `alpha` with `beta`. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
-
-We start by downloading the most recent disk image:
-
-```sh
+Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+
+
From 621eb813484ac8a0953fe07a542a960556012098 Mon Sep 17 00:00:00 2001
From: c4t3l
Date: Tue, 29 Jul 2014 14:20:13 -0500
Subject: [PATCH 0179/1291] Update channel definitions
Added stable channel definitions for consistency with other pages.
---
running-coreos/platforms/qemu/index.md | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index bc798f731..b61c51ad7 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -81,17 +81,29 @@ image.
### Choosing a Channel
-CoreOS is released into alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
+CoreOS is released into stable, alpha and beta channels. Releases to each channel serve as a release-candidate for the next channel. For example, a bug-free alpha release is promoted bit-for-bit to the beta channel. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes in each channel.
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.alpha-channel}}.
+
The alpha channel closely tracks master and is released frequently. Current version is CoreOS {{site.alpha-channel}}.
There are two files you need: the disk image (provided in qcow2
format) and the wrapper shell script to start QEMU.
From 1a8a5f277125b324a695bba0e20969b3d1c236fb Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 29 Jul 2014 15:55:01 -0700
Subject: [PATCH 0180/1291] fix(qemu): fix tab content and update channel
description
---
running-coreos/platforms/qemu/index.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/running-coreos/platforms/qemu/index.md b/running-coreos/platforms/qemu/index.md
index b61c51ad7..f3f3f5437 100644
--- a/running-coreos/platforms/qemu/index.md
+++ b/running-coreos/platforms/qemu/index.md
@@ -90,9 +90,9 @@ CoreOS is released into stable, alpha and beta channels. Releases to each channe
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.beta-channel}}.
From bca2db6bf8e742ad547eefd340aaeb83764171f7 Mon Sep 17 00:00:00 2001
From: Chris Cowley
Date: Sat, 2 Aug 2014 21:53:07 +0200
Subject: [PATCH 0181/1291] Added section to Quickstart better explaining the
discovery token. Makes the Quickstart a little more self-contained.
---
quickstart/index.md | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 89f6a4c84..f43b16833 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -39,7 +39,30 @@ CoreOS (beta)
The first building block of CoreOS is service discovery with **etcd** ([docs][etcd-docs]). Data stored in etcd is distributed across all of your machines running CoreOS. For example, each of your app containers can announce itself to a proxy container, which would automatically know which machines should receive traffic. Building service discovery into your application allows you to add more machines and scale your services seamlessly.
-If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) from a guide linked in the first paragraph, etcd is automatically started on boot. The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
+If you used an example [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config) from a guide linked in the first paragraph, etcd is automatically started on boot.
+
+A good starting point would be something like:
+
+```
+#cloud-config
+
+hostname: coreos0
+ssh_authorized_keys:
+ - ssh-rsa AAAA...
+coreos:
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+ etcd:
+ name: coreos0
+ discovery: https://discovery.etcd.io/
+```
+
+To get the discovery token, visit https://discovery.etcd.io/new and you will receive a URL that includes your token. Paste the whole URL into your cloud-config file.
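The token can be spliced into a bare cloud-config with `sed`, for example (the token value and file name below are placeholders, not real ones):

```shell
TOKEN="6a28e078895c5ec737174db2419bb2f3"   # placeholder; use the token from /new

# Write a cloud-config with the bare discovery URL (placeholder file name)
cat > cloud-config.yml <<'EOF'
#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/
EOF

# Append the token to the bare discovery URL
sed -i "s|discovery.etcd.io/$|discovery.etcd.io/${TOKEN}|" cloud-config.yml
grep 'discovery:' cloud-config.yml
```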
+
+The API is easy to use. From a CoreOS machine, you can simply use curl to set and retrieve a key from etcd:
Set a key `message` with value `Hello world`:
From e6043eccf17ea09463420105570021e118240d80 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Mon, 4 Aug 2014 10:34:37 -0700
Subject: [PATCH 0182/1291] fix(*): update service files
---
.../getting-started-with-systemd/index.md | 16 ++++++++++++----
.../launching-containers-fleet/index.md | 14 +++++++++++---
quickstart/index.md | 8 ++++++--
3 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 981f8587a..31a64e79e 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -25,12 +25,16 @@ On CoreOS, unit files are located within the R/W filesystem at `/etc/systemd/sys
```ini
[Unit]
-Description=My Service
+Description=MyApp
After=docker.service
Requires=docker.service
[Service]
-ExecStart=/usr/bin/docker run busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill busybox1
+ExecStartPre=-/usr/bin/docker rm busybox1
+ExecStartPre=/usr/bin/docker pull busybox
+ExecStart=/usr/bin/docker run busybox --name busybox1 /bin/sh -c "while true; do echo Hello World; sleep 1; done"
[Install]
WantedBy=multi-user.target
@@ -96,9 +100,13 @@ After=etcd.service
After=docker.service
[Service]
-ExecStart=/bin/bash -c '/usr/bin/docker start -a apache || /usr/bin/docker run --name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND'
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill apache1
+ExecStartPre=-/usr/bin/docker rm apache1
+ExecStartPre=/usr/bin/docker pull coreos/apache
+ExecStart=/usr/bin/docker run --name apache1 -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
ExecStartPost=/usr/bin/etcdctl set /domains/example.com/10.10.10.123:8081 running
-ExecStop=/usr/bin/docker stop apache
+ExecStop=/usr/bin/docker stop apache1
ExecStopPost=/usr/bin/etcdctl rm /domains/example.com/10.10.10.123:8081
[Install]
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index d41e55b70..68540c0d7 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -26,7 +26,11 @@ After=docker.service
Requires=docker.service
[Service]
-ExecStart=/usr/bin/docker run busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill busybox1
+ExecStartPre=-/usr/bin/docker rm busybox1
+ExecStartPre=/usr/bin/docker pull busybox
+ExecStart=/usr/bin/docker run busybox --name busybox1 /bin/sh -c "while true; do echo Hello World; sleep 1; done"
```
Run the start command to start up the container on the cluster:
@@ -66,8 +70,12 @@ After=docker.service
Requires=docker.service
[Service]
-ExecStart=/usr/bin/docker run -rm --name apache -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
-ExecStop=/usr/bin/docker rm -f apache
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill apache1
+ExecStartPre=-/usr/bin/docker rm apache1
+ExecStartPre=/usr/bin/docker pull coreos/apache
+ExecStart=/usr/bin/docker run --rm --name apache1 -p 80:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
+ExecStop=/usr/bin/docker stop apache1
[X-Fleet]
X-Conflicts=apache.*.service
diff --git a/quickstart/index.md b/quickstart/index.md
index 89f6a4c84..8effecd21 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -95,8 +95,12 @@ Description=My Service
After=docker.service
[Service]
-ExecStart=/bin/bash -c '/usr/bin/docker start -a hello || /usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"'
-ExecStop=/usr/bin/docker stop -t 1 hello
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill hello
+ExecStartPre=-/usr/bin/docker rm hello
+ExecStartPre=/usr/bin/docker pull busybox
+ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+ExecStop=/usr/bin/docker stop hello
```
The [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide explains the format of this file in more detail.
From 69617597e851d04bac78ca62954818e88ee608eb Mon Sep 17 00:00:00 2001
From: coderfi
Date: Wed, 6 Aug 2014 11:59:54 -0700
Subject: [PATCH 0183/1291] added Cloud-Config section
added warnings about unsupported private_ipv4 and public_ipv4 variables
---
running-coreos/cloud-providers/vultr/index.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 41ff1f5b5..2186710b3 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -74,3 +74,11 @@ Now that you have a cluster bootstrapped it is time to play around.
CoreOS is currently running from RAM, based on the loaded image. You may want to [install it on the disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk). Note that when following these instructions on Vultr, the device name should be `/dev/vda` rather than `/dev/sda`.
Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
+
+## Using Cloud-Config
+
+Please be sure to check out [Using Cloud-Config](http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/).
+
+In particular, please note that the `$private_ipv4` and `$public_ipv4` variables are NOT supported on `vultr`.
+
+In other words, you will need to hard-code these values into your `cloud-config` file.
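Since the substitution variables aren't expanded for you on Vultr, one approach is to substitute them yourself before uploading. A hedged sketch of that substitution (the addresses and file name below are placeholders):

```shell
PRIVATE_IP="10.99.0.10"   # placeholder: your VPS's private address
PUBLIC_IP="203.0.113.7"   # placeholder: your VPS's public address

# Write a cloud-config still containing the unsupported variables
cat > vultr-cloud-config.yml <<'EOF'
#cloud-config

coreos:
  etcd:
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
EOF

# Hard-code the real addresses in place of the unsupported variables
sed -i -e "s/[$]private_ipv4/${PRIVATE_IP}/g" \
       -e "s/[$]public_ipv4/${PUBLIC_IP}/g" vultr-cloud-config.yml
grep 'addr:' vultr-cloud-config.yml
```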
From 50b006f6890de781cf1652f60d5e5193ebaf6695 Mon Sep 17 00:00:00 2001
From: coderfi
Date: Wed, 6 Aug 2014 12:02:48 -0700
Subject: [PATCH 0184/1291] uses site.url macro
---
running-coreos/cloud-providers/vultr/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index 2186710b3..d8cfe012f 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -77,7 +77,7 @@ Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig int
## Using Cloud-Config
-Please be sure to check out [Using Cloud-Config](http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/).
+Please be sure to check out [Using Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
In particular, please note that the `$private_ipv4` and `$public_ipv4` variables are NOT supported on `vultr`.
From bb6c6dbb34d3a62d673c745cc70b609eff08ee39 Mon Sep 17 00:00:00 2001
From: coderfi
Date: Wed, 6 Aug 2014 12:05:30 -0700
Subject: [PATCH 0185/1291] updated cloud-config url
---
running-coreos/cloud-providers/vultr/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index d8cfe012f..ede702095 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -77,7 +77,7 @@ Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig int
## Using Cloud-Config
-Please be sure to check out [Using Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/).
+Please be sure to check out [Using Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config).
In particular, please note that the `$private_ipv4` and `$public_ipv4` variables are NOT supported on `vultr`.
From aeedf9645852866e57fbea06266843043b7bf7d7 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 6 Aug 2014 13:27:33 -0700
Subject: [PATCH 0186/1291] vultr: add channel tabs and reorg
---
running-coreos/cloud-providers/vultr/index.md | 77 ++++++++++++++-----
1 file changed, 59 insertions(+), 18 deletions(-)
diff --git a/running-coreos/cloud-providers/vultr/index.md b/running-coreos/cloud-providers/vultr/index.md
index ede702095..4a95c8430 100644
--- a/running-coreos/cloud-providers/vultr/index.md
+++ b/running-coreos/cloud-providers/vultr/index.md
@@ -8,30 +8,79 @@ weight: 10
# Running CoreOS on a Vultr VPS
-CoreOS is currently in heavy development and actively being tested. These instructions will walk you through running a single CoreOS node. This guide assumes:
+These instructions will walk you through running a single CoreOS node. This guide assumes:
* You have an account at [Vultr.com](http://vultr.com).
* Your iPXE script (referenced later in the guide) is hosted at ```http://example.com/script.txt```
* You have a public/private SSH key pair generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://www.digitalocean.com/community/articles/how-to-set-up-ssh-keys--2).
-## Create the script
-
The simplest option to boot up CoreOS is to load a script that contains the series of commands you'd otherwise need to manually type at the command line. This script needs to be publicly accessible (host this file on your own server). Save this script as a text file (.txt extension).
-A sample script will look like this :
+## Choosing a Channel
+
+CoreOS is designed to be [updated automatically]({{site.url}}/using-coreos/updates) with different schedules per channel. You can [disable this feature]({{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update), although we don't recommend it. Read the [release notes]({{site.url}}/releases) for specific features and bug fixes.
-```ini
-#!ipxe
+
The alpha channel closely tracks master and is released frequently. The newest versions of docker, etcd and fleet will be available for testing. Current version is CoreOS {{site.data.alpha-channel.rackspace-version}}.
+
+
A sample script will look like this:
+
+
#!ipxe
set base-url http://alpha.release.core-os.net/amd64-usr/current
kernel ${base-url}/coreos_production_pxe.vmlinuz sshkey="YOUR_PUBLIC_KEY_HERE"
initrd ${base-url}/coreos_production_pxe_image.cpio.gz
-boot
-```
-Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key, it will begin with "ssh-rsa...".
+boot
+
+
+
+
The beta channel consists of promoted alpha releases. Current version is CoreOS {{site.data.beta-channel.rackspace-version}}.
The Stable channel should be used by production clusters. Versions of CoreOS are battle-tested within the Beta and Alpha channels before being promoted. Current version is CoreOS {{site.data.stable-channel.rackspace-version}}.
+
+Make sure to replace `YOUR_PUBLIC_KEY_HERE` with your actual public key; it will begin with `ssh-rsa...`.
Additional reading can be found at [Booting CoreOS with iPXE](http://coreos.com/docs/running-coreos/bare-metal/booting-with-ipxe/) and [Embedded scripts for iPXE](http://ipxe.org/embed).
+## Using Cloud-Config
+
+Please be sure to check out [Using Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config).
+
+In particular, please note that the `$private_ipv4` and `$public_ipv4` variables are NOT supported on `vultr`.
+
+In other words, you will need to hard-code these values into your `cloud-config` file.
+
## Create the VPS
Create a new VPS (any server type and location of your choice), and then:
@@ -73,12 +122,4 @@ Now that you have a cluster bootstrapped it is time to play around.
CoreOS is currently running from RAM, based on the loaded image. You may want to [install it on the disk]({{site.url}}/docs/running-coreos/bare-metal/installing-to-disk). Note that when following these instructions on Vultr, the device name should be `/dev/vda` rather than `/dev/sda`.
-Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
-
-## Using Cloud-Config
-
-Please be sure to check out [Using Cloud-Config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config).
-
-In particular, please note that the `$private_ipv4` and `$public_ipv4` variables are NOT supported on `vultr`.
-
-In other words, you will need to hard code these values into your `cloud-config` file.
+Check out the [CoreOS Quickstart]({{site.url}}/docs/quickstart) guide or dig into [more specific topics]({{site.url}}/docs).
\ No newline at end of file
From eb873357bdc878cb7bd1f49564b3d8fde12d0fbc Mon Sep 17 00:00:00 2001
From: gdusbabek
Date: Thu, 7 Aug 2014 17:52:15 -0500
Subject: [PATCH 0187/1291] Change order of command line for starting busybox.
---
.../launching/getting-started-with-systemd/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 31a64e79e..a5d0fd7d8 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -34,7 +34,7 @@ TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
-ExecStart=/usr/bin/docker run busybox --name busybox1 /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
[Install]
WantedBy=multi-user.target
From 9e723745e4ed4d8b0bfb58e0b5326725465fbd8e Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Sun, 10 Aug 2014 18:01:58 -0700
Subject: [PATCH 0188/1291] cluster-discovery: rewrite guide
---
.../setup/cluster-discovery/index.md | 145 ++++++++++++++++++
1 file changed, 145 insertions(+)
create mode 100644 cluster-management/setup/cluster-discovery/index.md
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
new file mode 100644
index 000000000..d7b621175
--- /dev/null
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -0,0 +1,145 @@
+---
+layout: docs
+title: Cluster Discovery
+category: cluster_management
+sub_category: setting_up
+forkurl: https://github.com/coreos/docs/blob/master/cluster-management/setup/cluster-discovery/index.md
+weight: 5
+---
+
+# CoreOS Cluster Discovery
+
+## Overview
+
+CoreOS uses etcd, a service running on each machine, to handle coordination between software running on the cluster. For a group of CoreOS machines to form a cluster, their etcd instances need be connected.
+
+A discovery service, [https://discovery.etcd.io](https://discovery.etcd.io), is provided as a free service to help connect etcd instances together by storing a list of peer addresses and metadata under a unique address, known as the discovery URL.
+
+The discovery URL can be provided to each CoreOS machine via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config), a minimal config tool that's designed to get a machine connected to the network and join the cluster. The rest of this guide will explain what's happening behind the scenes, but if you're trying to get clustered as quickly as possible, all you need to do is provide a _fresh, unique_ discovery token in your cloud-config.
+
+Boot each one of the machines with identical cloud-config and they should be auotmatically clustered. You can grab a new token from [https://discovery.etcd.io/new](https://discovery.etcd.io/new) at any time.
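A fresh token can be fetched and sanity-checked from the shell. The validation regex below is an assumption based on the 32-character hex tokens the service issues, not a documented guarantee:

```sh
# Fetch a fresh discovery URL; each new cluster needs its own.
new_discovery_url() {
  curl -fsS "https://discovery.etcd.io/new"
}

# Sanity-check the shape of a discovery URL before pasting it into
# cloud-config (assumes tokens are 32 hex characters).
is_discovery_url() {
  echo "$1" | grep -Eq '^https://discovery\.etcd\.io/[0-9a-f]{32}$'
}
```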
+
+A common cloud-config is provided below, but see each platform's guide for specifics. Not all providers support the `$private_ipv4` variable substitution.
+
+```
+#cloud-config
+
+coreos:
+ etcd:
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+```
+
+## New Clusters
+
+Starting a CoreOS cluster requires one of the new machines to become the first leader of the cluster. The initial leader is stored as metadata with the discovery URL in order to inform the other members of the new cluster. Let's walk through an example with a new 3 machine CoreOS cluster:
+
+1. 3 machines are booted via a cloud-provider
+2. Machine 1 that boots connects to the discovery token and submits its `peer-addr` address `10.10.10.1`.
+3. No leader is recorded into the discovery URL metadata, so machine 1 becomes the leader.
+4. Machine 2 boots and submits its `peer-addr` address `10.10.10.2`. It also reads back the list of existing peers (only `10.10.10.1`) and attempts to connect to the address listed.
+5. Machine 2 is now part of the cluster as a follower.
+6. Machine 3 boots and submits its `peer-addr` address `10.10.10.3`. It reads back the list of peers ( `10.10.10.1` and `10.10.10.2`) and selects one of the addresses to try first. If it can connect, the machine joins the cluster and is given a full list of the existing other members of the cluster.
+7. The cluster is now bootstrapped with an initial leader and two followers.
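The ordering above can be illustrated with a toy shell model. This is not the real protocol, only the "first peer to register becomes the leader" rule, with the registry held in a local variable:

```sh
# Toy model of the bootstrap sequence: the first peer to register under the
# token becomes the leader; later peers read back the list and follow.
REGISTRY=""
register_peer() {
  if [ -z "$REGISTRY" ]; then
    REGISTRY="$1"
    echo "$1 is the leader"
  else
    REGISTRY="$REGISTRY $1"
    echo "$1 follows (known peers: $REGISTRY)"
  fi
}

register_peer 10.10.10.1
register_peer 10.10.10.2
register_peer 10.10.10.3
```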
+
+There are two interesting things happening during this process.
+
+First, each machine is configured with the same discovery URL and etcd figures out what to do. This allows you to load the same cloud-config into an auto-scaling group and it will work whether it is the first or 30th machine in the group.
+
+Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster, because the rest of the active peers can be obtained after cluster membership through the Raft protocol.
+
+## Existing Clusters
+
+If you're already bootstrapped a cluster with a discovery URL, all you need to do is to boot new machines with a cloud-config containing the same URL. After boot, new machines will see that a cluster already exists and attempt to join through one of the addresses stored with the discovery URL.
+
+Over time, as machines come and go, the discovery URL will eventually contain addresses of peers that are no longer alive. Each entry in the discovery URL has a TTL of 7 days, which should be long enough to make sure no extended outages cause an address to be removed erroneously. There is no harm in having stale peers in the list until they are cleaned up, since an etcd instance only needs to connect to one valid peer in the cluster to join.
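The remaining lifetime of an entry can be read off the `ttl` field (in seconds) that each address carries when you open the discovery URL in a browser:

```sh
# Convert a discovery entry's "ttl" field (seconds) into whole days
# remaining before the stale address is cleaned up (entries start at 7 days).
ttl_days() {
  echo $(( $1 / 86400 ))
}
```

For example, an entry with `ttl: 576008` has six whole days left.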
+
+## Common Problems with Cluster Discovery
+
+### Invalid Cloud-Config
+
+The most common problem with cluster discovery is using invalid cloud-config, which will prevent the cloud-config from being applied to the machine. Formatting errors are easy to do with YAML. You should always run newly written cloud-config through a [YAML validator](yamllint.com).
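Before handing cloud-config to a provider, a couple of cheap local checks catch the most common mistakes. This is a sketch, not a full validator; it checks only that the file starts with the required `#cloud-config` header and contains no tab characters, which YAML forbids in indentation:

```sh
# Minimal pre-flight checks for a cloud-config file.
check_cloud_config() {
  f="$1"
  # coreos-cloudinit only treats the file as cloud-config if the first
  # line is exactly "#cloud-config".
  if [ "$(head -n 1 "$f")" != "#cloud-config" ]; then
    echo "missing #cloud-config header"; return 1
  fi
  # YAML does not allow tabs for indentation.
  if grep -q "$(printf '\t')" "$f"; then
    echo "tab characters found"; return 1
  fi
  echo "basic checks passed"
}
```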
+
+Unfortunately, if you are providing an SSH-key via cloud-config, it can be hard to read the `coreos-cloudinit` log to find out what's wrong. If you're using a cloud provider, you can normally provide an SSH-key via another method which will allow you to log in. If you're running on bare metal, the [coreos.autologin]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#setting-up-pxelinux.cfg) kernel option will bypass authentication, letting you read the journal.
+
+Reading the `coreos-cloudinit` log will indicate which line is invalid:
+
+```
+journalctl -u coreos-cloudinit
+```
+
+### Stale Tokens
+
+Another common problem with cluster discovery is attempting to boot a new cluster with a stable discovery URL. As explained above, the intial leader election is recorded into the URL, which inticates that the new etcd instance should be joining an existing cluster. On a stale token, each of the old peer addresses will be used to try to join a cluster but will fail. A new cluster can't be formed by discarding these old addresses, because if an etcd peer was in a network partition, it would look exactly like the described situtation. Because etcd can't ever accurately determined whether a token has been reused or not, it must assume the worst and fail the cluster discovery.
+
+If you're running into problems with your discovery URL, there are a few sources of information that can help you see what's going on. First, you can open the URL in a browser to see what information etcd is using to bootstrap itself:
+
+```json
+{
+ action: "get",
+ node: {
+ key: "/_etcd/registry/506f6c1bc729377252232a0121247119",
+ dir: true,
+ nodes: [
+ {
+ key: "/_etcd/registry/506f6c1bc729377252232a0121247119/0d79b4791be9688332cc05367366551e",
+ value: "http://10.183.202.105:7001",
+ expiration: "2014-08-17T16:21:37.426001686Z",
+ ttl: 576008,
+ modifiedIndex: 72783864,
+ createdIndex: 72783864
+ },
+ {
+ key: "/_etcd/registry/506f6c1bc729377252232a0121247119/c72c63ffce6680737ea2b670456aaacd",
+ value: "http://10.65.177.56:7001",
+ expiration: "2014-08-17T12:05:57.717243529Z",
+ ttl: 560669,
+ modifiedIndex: 72626400,
+ createdIndex: 72626400
+ },
+ {
+ key: "/_etcd/registry/506f6c1bc729377252232a0121247119/f7a93d1f0cd4d318c9ad0b624afb9cf9",
+ value: "http://10.29.193.50:7001",
+ expiration: "2014-08-17T17:18:25.045563473Z",
+ ttl: 579416,
+ modifiedIndex: 72821950,
+ createdIndex: 72821950
+ }
+ ],
+ modifiedIndex: 69367741,
+ createdIndex: 69367741
+ }
+}
+```
+
+To rule out firewall settings as a source of your issue, ensure that you can curl each of the IPs from machines in your cluster.
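The per-peer check can be scripted by scraping the peer URLs out of a saved copy of the discovery response (`discovery.json` here is a hypothetical filename). A plain HTTP connect attempt is enough to rule the firewall in or out:

```sh
# Extract the peer URLs from a saved discovery response, then probe each one.
list_peers() {
  grep -o 'http://[0-9.]*:7001' "$1" | sort -u
}

probe_peers() {
  for peer in $(list_peers "$1"); do
    if curl -m 5 -s -o /dev/null "$peer"; then
      echo "$peer reachable"
    else
      echo "$peer unreachable"
    fi
  done
}

# e.g. probe_peers discovery.json
```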
+
+### Communicating with discovery.etcd.io
+
+If your CoreOS cluster can't communicate out the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described later in this document.
+
+### Setting Peer Addresses Correctly
+
+Each etcd instance submits its `-peer-addr` to the configured discovery service. It's important to select an address that *all* peers in the cluster can communicate with. For example, if you're located in two regions of a cloud provider, configuring a private `10.x` address will not work between the two regions, and communication will not be possible between all peers.
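A quick way to catch the multi-region pitfall is to check whether the address you are about to advertise is private. This sketch covers the standard RFC 1918 ranges, which generally won't route between cloud regions:

```sh
# Return success if the given IPv4 address falls in an RFC 1918 private
# range (10/8, 172.16/12, 192.168/16).
is_private_ip() {
  case "$1" in
    10.*)                                   return 0 ;;
    192.168.*)                              return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*)  return 0 ;;
    *)                                      return 1 ;;
  esac
}
```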
+
+## Running Your Own Discovery Service
+
+The public discovery service is just an etcd cluster made available to the public internet. Since the discovery service conducts and stores the result of the first leader election, it needs to be consistent. You wouldn't want 2 machines in the same cluster to think they were both the leader.
+
+Since etcd is designed for this type of leader election, it was an obvious choice to use it for everyone's initial leader election. This means that it's easy to run your own etcd cluster for this purpose.
+
+If you're interested in how the discovery API works behind the scenes in etcd, read about the [Discovery Protocol](https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md).
+
+## Lifetime of a Discovery URL
+
+A discovery URL identifies a single etcd cluster. Do not re-use discovery URLs for new clusters.
+
+When a machine starts with a new discovery URL, the discovery URL will be activated and record the machine's metadata. If you destroy the whole cluster and attempt to bring the cluster back up with the same discovery URL, it will fail. This is intentional: all of the registered machines are gone, including their logs, so there is nothing from which to recover the killed cluster.
From 92e6f029631396076fe2cf1b3cb275568de06f90 Mon Sep 17 00:00:00 2001
From: Aris Pikeas
Date: Tue, 12 Aug 2014 11:08:21 -0700
Subject: [PATCH 0189/1291] Use reload instead of up
---
running-coreos/platforms/vagrant/index.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/running-coreos/platforms/vagrant/index.md b/running-coreos/platforms/vagrant/index.md
index bad04474c..112e85826 100644
--- a/running-coreos/platforms/vagrant/index.md
+++ b/running-coreos/platforms/vagrant/index.md
@@ -64,7 +64,9 @@ coreos:
The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Vagrant. They will map to the first statically defined private and public networks defined in the Vagrantfile.
-Your Vagrantfile should copy your cloud-config file to `/var/lib/coreos-vagrant/vagrantfile-user-data`. The provided Vagrantfile is already configured to do this. `cloudinit` reads `vagrantfile-user-data` on every boot and uses it to create the machine's user-data file. If you wish to update your cloud-config later on, `vagrant up --provision` must be run to apply the new file.
+Your Vagrantfile should copy your cloud-config file to `/var/lib/coreos-vagrant/vagrantfile-user-data`. The provided Vagrantfile is already configured to do this. `cloudinit` reads `vagrantfile-user-data` on every boot and uses it to create the machine's user-data file.
+
+If you need to update your cloud-config later on, run `vagrant reload --provision` to reboot your VM and apply the new file.
[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
From 26e97c630cee645d7d748a8b6de8622dca67f879 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 12 Aug 2014 16:13:25 -0700
Subject: [PATCH 0190/1291] cluster-discovery: update content
---
.../setup/cluster-discovery/index.md | 20 +++++++------------
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
index d7b621175..a64a6ac7d 100644
--- a/cluster-management/setup/cluster-discovery/index.md
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -11,13 +11,13 @@ weight: 5
## Overview
-CoreOS uses etcd, a service running on each machine, to handle coordination between software running on the cluster. For a group of CoreOS machines to form a cluster, their etcd instances need be connected.
+CoreOS uses etcd, a service running on each machine, to handle coordination between software running on the cluster. For a group of CoreOS machines to form a cluster, their etcd instances need to be connected.
A discovery service, [https://discovery.etcd.io](https://discovery.etcd.io), is provided as a free service to help connect etcd instances together by storing a list of peer addresses and metadata under a unique address, known as the discovery URL.
The discovery URL can be provided to each CoreOS machine via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config), a minimal config tool that's designed to get a machine connected to the network and join the cluster. The rest of this guide will explain what's happening behind the scenes, but if you're trying to get clustered as quickly as possible, all you need to do is provide a _fresh, unique_ discovery token in your cloud-config.
-Boot each one of the machines with identical cloud-config and they should be auotmatically clustered. You can grab a new token from [https://discovery.etcd.io/new](https://discovery.etcd.io/new) at any time.
+Boot each one of the machines with identical cloud-config and they should be automatically clustered. You can grab a new token from [https://discovery.etcd.io/new](https://discovery.etcd.io/new) at any time.
A common cloud-config is provided below, but see each platform's guide for specifics. Not all providers support the `$private_ipv4` variable substitution.
@@ -73,12 +73,12 @@ Unfortunately, if you are providing an SSH-key via cloud-config, it can be hard
Reading the `coreos-cloudinit` log will indicate which line is invalid:
```
-journalctl -u coreos-cloudinit
+journalctl _EXE=/usr/bin/coreos-cloudinit
```
### Stale Tokens
-Another common problem with cluster discovery is attempting to boot a new cluster with a stable discovery URL. As explained above, the intial leader election is recorded into the URL, which inticates that the new etcd instance should be joining an existing cluster. On a stale token, each of the old peer addresses will be used to try to join a cluster but will fail. A new cluster can't be formed by discarding these old addresses, because if an etcd peer was in a network partition, it would look exactly like the described situtation. Because etcd can't ever accurately determined whether a token has been reused or not, it must assume the worst and fail the cluster discovery.
+Another common problem with cluster discovery is attempting to boot a new cluster with a stale discovery URL. As explained above, the intial leader election is recorded into the URL, which inticates that the new etcd instance should be joining an existing cluster. On a stale token, each of the old peer addresses will be used to try to join a cluster but will fail. A new cluster can't be formed by discarding these old addresses, because if an etcd peer was in a network partition, it would look exactly like the described situation. Because etcd can't ever accurately determined whether a token has been reused or not, it must assume the worst and fail the cluster discovery.
If you're running into problems with your discovery URL, there are a few sources of information that can help you see what's going on. First, you can open the URL in a browser to see what information etcd is using to bootstrap itself:
@@ -124,7 +124,7 @@ To rule out firewall settings as a source of your issue, ensure that you can cur
### Communicating with discovery.etcd.io
-If your CoreOS cluster can't communicate out the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described later in this document.
+If your CoreOS cluster can't communicate out to the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described later in this document.
### Setting Peer Addresses Correctly
@@ -132,14 +132,8 @@ Each etcd instance submits the `-peer-addr` of each etcd instance to the configu
## Running Your Own Discovery Service
-The public discovery service is just an etcd cluster made available to the public internet. Since the discovery service conducts and stores the result of the first leader election, it needs to be consistent. You wouldn't want 2 machines in the same cluster to think they were both the leader.
+The public discovery service is just an etcd cluster made available to the public internet. Since the discovery service conducts and stores the result of the first leader election, it needs to be consistent. You wouldn't want two machines in the same cluster to think they were both the leader.
Since etcd is designed for this type of leader election, it was an obvious choice to use it for everyone's initial leader election. This means that it's easy to run your own etcd cluster for this purpose.
-If you're interested in how to discovery API works behind the scenes in etcd, read about the [Discovery Protocol](https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md).
-
-## Lifetime of a Discovery URL
-
-A discovery URL identifies a single etcd cluster. Do not re-use discovery URLs for new clusters.
-
-When a machine starts with a new discovery URL the discovery URL will be activated and record the machine's metadata. If you destroy the whole cluster and attempt to bring the cluster back up with the same discovery URL it will fail. This is intentional because all of the registered machines are gone including their logs so there is nothing to recover the killed cluster.
+If you're interested in how the discovery API works behind the scenes in etcd, read about the [Discovery Protocol](https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md).
\ No newline at end of file
From abf38495df8b2585a0a59bd577daad5b2ce32e26 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 12 Aug 2014 16:37:18 -0700
Subject: [PATCH 0191/1291] cluster-discovery: more clarity
---
.../setup/cluster-discovery/index.md | 30 ++++++++++++-------
1 file changed, 20 insertions(+), 10 deletions(-)
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
index a64a6ac7d..307750167 100644
--- a/cluster-management/setup/cluster-discovery/index.md
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -40,25 +40,25 @@ coreos:
## New Clusters
-Starting a CoreOS cluster requires one of the new machines to become the first leader of the cluster. The initial leader is stored as metadata with the discovery URL in order to inform the other members of the new cluster. Let's walk through an example with a new 3 machine CoreOS cluster:
+Starting a CoreOS cluster requires one of the new machines to become the first leader of the cluster. The initial leader is stored as metadata with the discovery URL in order to inform the other members of the new cluster. Let's walk through a timeline of a new three-machine CoreOS cluster discovering each other:
-1. 3 machines are booted via a cloud-provider
-2. Machine 1 that boots connects to the discovery token and submits its `peer-addr` address `10.10.10.1`.
+1. All three machines are booted via a cloud-provider with the same cloud-config in the user-data.
+2. Machine 1 starts up first. It requests information about the cluster from the discovery token and submits its `peer-addr` address `10.10.10.1`.
3. No leader is recorded into the discovery URL metadata, so machine 1 becomes the leader.
4. Machine 2 boots and submits its `peer-addr` address `10.10.10.2`. It also reads back the list of existing peers (only `10.10.10.1`) and attempts to connect to the address listed.
-5. Machine 2 is now part of the cluster as a follower.
-6. Machine 3 boots and submits its `peer-addr` address `10.10.10.3`. It reads back the list of peers ( `10.10.10.1` and `10.10.10.2`) and selects one of the addresses to try first. If it can connect, the machine joins the cluster and is given a full list of the existing other members of the cluster.
+5. Machine 2 connects to Machine 1 and is now part of the cluster as a follower.
+6. Machine 3 boots and submits its `peer-addr` address `10.10.10.3`. It reads back the list of peers (`10.10.10.1` and `10.10.10.2`) and selects one of the addresses to try first. When it connects to a machine in the cluster, it is given a full list of the other existing members of the cluster.
7. The cluster is now bootstrapped with an initial leader and two followers.
There are two interesting things happening during this process.
First, each machine is configured with the same discovery URL and etcd figures out what to do. This allows you to load the same cloud-config into an auto-scaling group and it will work whether it is the first or 30th machine in the group.
-Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster, because the rest of the active peers can be obtained after cluster membership through the Raft protocol.
+Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster. Since etcd uses the Raft consensus algorithm, existing machines in the cluster already maintain a list of healthy members in order for the algorithm to function properly. This list is given to the new machine, and it starts normal operations with each of the other cluster members.
## Existing Clusters
-If you're already bootstrapped a cluster with a discovery URL, all you need to do is to boot new machines with a cloud-config containing the same URL. After boot, new machines will see that a cluster already exists and attempt to join through one of the addresses stored with the discovery URL.
+If you're already operating a bootstrapped cluster with a discovery URL, adding new machines to the cluster is very easy. All you need to do is boot the new machines with a cloud-config containing the same discovery URL. After boot, the new machines will see that a cluster already exists and attempt to join through one of the addresses stored with the discovery URL.
Over time, as machines come and go, the discovery URL will eventually contain addresses of peers that are no longer alive. Each entry in the discovery URL has a TTL of 7 days, which should be long enough to make sure no extended outages cause an address to be removed erroneously. There is no harm in having stale peers in the list until they are cleaned up, since an etcd instance only needs to connect to one valid peer in the cluster to join.
@@ -66,7 +66,7 @@ Over time, as machines come and go, the discovery URL will eventually contain ad
### Invalid Cloud-Config
-The most common problem with cluster discovery is using invalid cloud-config, which will prevent the cloud-config from being applied to the machine. Formatting errors are easy to do with YAML. You should always run newly written cloud-config through a [YAML validator](yamllint.com).
+The most common problem with cluster discovery is using invalid cloud-config, which will prevent the cloud-config from being applied to the machine. The YAML format uses indentation to represent data hierarchy, which makes it easy to create an invalid cloud-config. You should always run newly written cloud-config through a [YAML validator](http://yamllint.com).
Unfortunately, if you are providing an SSH-key via cloud-config, it can be hard to read the `coreos-cloudinit` log to find out what's wrong. If you're using a cloud provider, you can normally provide an SSH-key via another method which will allow you to log in. If you're running on bare metal, the [coreos.autologin]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#setting-up-pxelinux.cfg) kernel option will bypass authentication, letting you read the journal.
@@ -78,7 +78,11 @@ journalctl _EXE=/usr/bin/coreos-cloudinit
### Stale Tokens
-Another common problem with cluster discovery is attempting to boot a new cluster with a stale discovery URL. As explained above, the intial leader election is recorded into the URL, which inticates that the new etcd instance should be joining an existing cluster. On a stale token, each of the old peer addresses will be used to try to join a cluster but will fail. A new cluster can't be formed by discarding these old addresses, because if an etcd peer was in a network partition, it would look exactly like the described situation. Because etcd can't ever accurately determined whether a token has been reused or not, it must assume the worst and fail the cluster discovery.
+Another common problem with cluster discovery is attempting to boot a new cluster with a stale discovery URL. As explained above, the initial leader election is recorded into the URL, which indicates that the new etcd instance should be joining an existing cluster.
+
+If you provide a stale discovery URL, the new machines will attempt to connect to each of the old peer addresses, which will fail since those machines no longer exist, and the bootstrapping process will abort.
+
+You may be wondering why the new machines can't simply form a new cluster if all of the old peers are down. There's a good reason: if an etcd peer was in a network partition, it would look exactly like the "full-down" situation, and starting a new cluster would form a split-brain. Since etcd can never accurately determine whether a token has been reused, it must assume the worst and abort the cluster discovery.
If you're running into problems with your discovery URL, there are a few sources of information that can help you see what's going on. First, you can open the URL in a browser to see what information etcd is using to bootstrap itself:
@@ -122,9 +126,15 @@ If you're running into problems with your discovery URL, there are a few sources
To rule out firewall settings as a source of your issue, ensure that you can curl each of the IPs from machines in your cluster.
+If all of the IPs can be reached, the etcd log can provide more clues:
+
+```
+journalctl -u etcd
+```
+
### Communicating with discovery.etcd.io
-If your CoreOS cluster can't communicate out to the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described later in this document.
+If your CoreOS cluster can't communicate out to the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described below.
### Setting Peer Addresses Correctly
From 01b4502d479fa7b37959475314434c3e9820d373 Mon Sep 17 00:00:00 2001
From: Joseph Anthony Pasquale Holsten
Date: Wed, 13 Aug 2014 01:46:04 +0000
Subject: [PATCH 0192/1291] fleetctl load hello.service before fleetctl start
`fleetctl` can't start a service until it has been loaded. Normally, this would go without saying. But since this is the quickstart guide, let's be explicit.
---
quickstart/index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 228999a20..1672d21e6 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -128,9 +128,10 @@ ExecStop=/usr/bin/docker stop hello
The [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide explains the format of this file in more detail.
-Then start the unit:
+Then load and start the unit:
```sh
+$ fleetctl load hello.service
$ fleetctl start hello.service
Job hello.service launched on 8145ebb7.../172.17.8.105
```
From 1313a605aa7b9db0cdc25e4a308177fcbf5a3ffd Mon Sep 17 00:00:00 2001
From: Sebastian Fastner
Date: Wed, 13 Aug 2014 15:23:52 +0200
Subject: [PATCH 0193/1291] Fix syntax error in docker run command
---
.../launching/launching-containers-fleet/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 68540c0d7..ad880405e 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -30,7 +30,7 @@ TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
-ExecStart=/usr/bin/docker run busybox --name busybox1 /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
```
Run the start command to start up the container on the cluster:
From 67d179d0d1bb2842af7f93139a6ea80df760a706 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 13 Aug 2014 11:54:03 -0700
Subject: [PATCH 0194/1291] systemd: update to reflect unit file changes
---
.../launching/getting-started-with-systemd/index.md | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index a5d0fd7d8..74cd91f51 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -87,7 +87,11 @@ The full list is located on the [systemd man page](http://www.freedesktop.org/so
Let's put a few of these concepts together to register new units within etcd. Imagine we had another container running that would read these values from etcd and act upon them.
-We can use `ExecStart` to either create a container with the `docker run` command or start a pre-existing container with the `docker start -a` command. We need to account for both because you can't issue multiple docker run commands when specifying a `--name`. In either case we must leave the container in the foreground (i.e. don't run with `-d`) so systemd knows the service is running.
+We can use `ExecStartPre` to scrub existing container state. The `docker kill` will force any previous copy of this container to stop, which is useful if we restarted the unit but docker didn't stop the container for some reason. The `=-` is systemd syntax to ignore errors for this command. We need it because docker will return a non-zero exit code if we try to stop a container that doesn't exist. Since we want the container stopped, we don't consider this an error, so we tell systemd to ignore the possible failure.
+
+`docker rm` will remove the container and `docker pull` will pull down the latest version. You can optionally pull down a specific version as a docker tag: `coreos/apache:1.2.3`
+
+`ExecStart` is where the container is started from the container image that we pulled above.
Since our container will be started in `ExecStart`, it makes sense for our etcd command to run as `ExecStartPost` to ensure that our container is started and functioning.
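The pattern above can be sketched as a single unit file (the container name `apache1`, the image, and the etcd key are hypothetical placeholders, not part of the original guide):

```ini
[Unit]
Description=My Apache Frontend

[Service]
TimeoutStartSec=0
# Scrub any old container state; the "=-" prefix tells systemd to ignore failures here
ExecStartPre=-/usr/bin/docker kill apache1
ExecStartPre=-/usr/bin/docker rm apache1
# Pull the latest image (or pin a specific tag like coreos/apache:1.2.3)
ExecStartPre=/usr/bin/docker pull coreos/apache
# Start in the foreground (no -d) so systemd can track the service
ExecStart=/usr/bin/docker run --name apache1 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
# Register the unit in etcd once the container is up
ExecStartPost=/usr/bin/etcdctl set /services/apache1 running
ExecStop=/usr/bin/docker stop apache1
```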
From 6da4a32f263a71f5fede67d989bfab43a1f64931 Mon Sep 17 00:00:00 2001
From: Joseph Anthony Pasquale Holsten
Date: Fri, 15 Aug 2014 01:28:22 +0000
Subject: [PATCH 0195/1291] Include output of fleetctl load
---
quickstart/index.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/quickstart/index.md b/quickstart/index.md
index 1672d21e6..0b0844d14 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -132,6 +132,7 @@ Then load and start the unit:
```sh
$ fleetctl load hello.service
+Job hello.service loaded on c72c6ea2.../10.65.174.36
$ fleetctl start hello.service
Job hello.service launched on 8145ebb7.../172.17.8.105
```
From 54baab56eb810cf7e20f2327475084712d894716 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 15 Aug 2014 09:23:27 -0700
Subject: [PATCH 0196/1291] quickstart: fleetctl output now matches
---
quickstart/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quickstart/index.md b/quickstart/index.md
index 0b0844d14..e69a2280f 100644
--- a/quickstart/index.md
+++ b/quickstart/index.md
@@ -132,7 +132,7 @@ Then load and start the unit:
```sh
$ fleetctl load hello.service
-Job hello.service loaded on c72c6ea2.../10.65.174.36
+Job hello.service loaded on 8145ebb7.../172.17.8.105
$ fleetctl start hello.service
Job hello.service launched on 8145ebb7.../172.17.8.105
```
From c51b6bb5b744755523158ef80670002a9ff4c844 Mon Sep 17 00:00:00 2001
From: Jon Chen
Date: Wed, 20 Aug 2014 12:27:13 -0400
Subject: [PATCH 0197/1291] fix minor typo
---
running-coreos/bare-metal/booting-with-ipxe/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/running-coreos/bare-metal/booting-with-ipxe/index.md b/running-coreos/bare-metal/booting-with-ipxe/index.md
index ea1118af1..ace54668b 100644
--- a/running-coreos/bare-metal/booting-with-ipxe/index.md
+++ b/running-coreos/bare-metal/booting-with-ipxe/index.md
@@ -87,7 +87,7 @@ iPXE> dhcp
iPXE> chain http://${YOUR_BOOT_URL}
```
-Immediatly iPXE should download your boot script URL and start grabbing the images from the CoreOS storage site:
+Immediately iPXE should download your boot script URL and start grabbing the images from the CoreOS storage site:
```sh
${YOUR_BOOT_URL}... ok
From f1b8a6250426e1d107076df2d5620fe0540ac48b Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Thu, 21 Aug 2014 15:25:33 -0400
Subject: [PATCH 0198/1291] cluster-discovery: add curl block for new URLs
---
.../setup/cluster-discovery/index.md | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
index 307750167..70af97c74 100644
--- a/cluster-management/setup/cluster-discovery/index.md
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -13,13 +13,16 @@ weight: 5
CoreOS uses etcd, a service running on each machine, to handle coordination between software running on the cluster. For a group of CoreOS machines to form a cluster, their etcd instances need to be connected.
-A discovery service, [https://discovery.etcd.io](https://discovery.etcd.io), is provided as a free service to help connect etcd instances together by storing a list of peer addresses and metadata under a unique address, known as the discovery URL.
+A discovery service, [https://discovery.etcd.io](https://discovery.etcd.io), is provided as a free service to help connect etcd instances together by storing a list of peer addresses and metadata under a unique address, known as the discovery URL. You can generate a new one easily:
-The discovery URL can be provided to each CoreOS machine via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config), a minimal config tool that's designed to get a machine connected to the network and join the cluster. The rest of this guide will explain what's happening behind the scenes, but if you're trying to get clustered as quickly as possible, all you need to do is provide a _fresh, unique_ discovery token in your cloud-config.
+```
+$ curl -w "\n" https://discovery.etcd.io/new
+https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
+```
-Boot each one of the machines with identical cloud-config and they should be automatically clustered. You can grab a new token from [https://discovery.etcd.io/new](https://discovery.etcd.io/new) at any time.
+The discovery URL can be provided to each CoreOS machine via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config), a minimal config tool that's designed to get a machine connected to the network and join the cluster. The rest of this guide will explain what's happening behind the scenes, but if you're trying to get clustered as quickly as possible, all you need to do is provide a _fresh, unique_ discovery token in your cloud-config.
-A common cloud-config is provided below, but specific guides are provided for each platform's guide. Not all providers support the `$private_ipv4` variable substitution.
+Boot each one of the machines with identical cloud-config and they should be automatically clustered:
```
#cloud-config
@@ -38,13 +41,15 @@ coreos:
command: start
```
+Specific instructions are provided in each platform's guide. Not all providers support the `$private_ipv4` variable substitution.
+
## New Clusters
Starting a CoreOS cluster requires one of the new machines to become the first leader of the cluster. The initial leader is stored as metadata with the discovery URL in order to inform the other members of the new cluster. Let's walk through the timeline of a new 3-machine CoreOS cluster discovering each other:
1. All three machines are booted via a cloud-provider with the same cloud-config in the user-data.
2. Machine 1 starts up first. It requests information about the cluster from the discovery token and submits its `peer-addr` address `10.10.10.1`.
-3. No leader is recorded into the discovery URL metadata, so machine 1 becomes the leader.
+3. No state is recorded into the discovery URL metadata, so machine 1 becomes the leader and records the state as `started`.
4. Machine 2 boots and submits its `peer-addr` address `10.10.10.2`. It also reads back the list of existing peers (only `10.10.10.1`) and attempts to connect to the address listed.
5. Machine 2 connects to Machine 1 and is now part of the cluster as a follower.
6. Machine 3 boots and submits its `peer-addr` address `10.10.10.3`. It reads back the list of peers (`10.10.10.1` and `10.10.10.2`) and selects one of the addresses to try first. When it connects to a machine in the cluster, it is given a full list of the existing members of the cluster.
@@ -62,6 +67,8 @@ If you're already operating a bootstrapped cluster with a discovery URL, addin
Over time, as machines come and go, the discovery URL will eventually contain addresses of peers that are no longer alive. Each entry in the discovery URL has a TTL of 7 days, which should be long enough to make sure no extended outages cause an address to be removed erroneously. There is no harm in having stale peers in the list until they are cleaned up, since an etcd instance only needs to connect to one valid peer in the cluster to join.
+It's also possible for a discovery URL to contain no addresses at all, because they were all removed after 7 days. This represents a dead cluster; its discovery URL won't work any more and should be discarded.
+
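Because the discovery endpoint is itself backed by etcd, you can inspect what's currently recorded under a token with a plain HTTP GET (the token below is a hypothetical example; the response is a JSON document listing the submitted peer addresses):

```
$ curl -s https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
```
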
## Common Problems with Cluster Discovery
### Invalid Cloud-Config
@@ -138,7 +145,7 @@ If your CoreOS cluster can't communicate out to the public internet, [https://di
### Setting Peer Addresses Correctly
-Each etcd instance submits the `-peer-addr` of each etcd instance to the configured discovery service. It's important to select an address that *all* peers in the cluster can communicate with. For example, if you're located in two regions of a cloud provider, configuring a private `10.x` address will not work between the two regions, and communication will not be possible between all peers.
+Each etcd instance submits its `-peer-addr` to the configured discovery service. It's important to select an address that *all* peers in the cluster can communicate with. For example, if you're located in two regions of a cloud provider, configuring a private `10.x` address will not work between the two regions, and communication will not be possible between all peers. The `--bindaddr` flag allows you to bind to a specific interface (or all interfaces) to ensure your etcd traffic is routed properly.
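
As a rough sketch (the addresses are hypothetical and the flag values illustrative), such an instance would publish a routable peer address while binding locally to all interfaces:

```
$ etcd -peer-addr=203.0.113.10:7001 --bindaddr=0.0.0.0
```
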
## Running Your Own Discovery Service
From 9b97a2b1db942b24e873c1614a81ad769ac79388 Mon Sep 17 00:00:00 2001
From: Brian Waldon
Date: Fri, 22 Aug 2014 11:29:54 -0700
Subject: [PATCH 0199/1291] sdk: s/modificaions/modifications/
---
sdk-distributors/sdk/tips-and-tricks/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sdk-distributors/sdk/tips-and-tricks/index.md b/sdk-distributors/sdk/tips-and-tricks/index.md
index 7fc57dc8e..6f539289b 100644
--- a/sdk-distributors/sdk/tips-and-tricks/index.md
+++ b/sdk-distributors/sdk/tips-and-tricks/index.md
@@ -26,7 +26,7 @@ repo forall -c git grep 'CONFIG_EXTRA_FIRMWARE_DIR'
## Add new upstream package
-Before making modificaions use `repo start` to create a new branch for the changes.
+Before making modifications use `repo start` to create a new branch for the changes.
To add a new package fetch the Gentoo package from upstream and add the package as a dependency of coreos-base/coreos
From 357c344d0a7f9a639be0ba31b97c1af5bf32fd8f Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 26 Aug 2014 15:17:36 -0400
Subject: [PATCH 0200/1291] launching-containers: add missing ExecStop to unit
---
.../launching/launching-containers-fleet/index.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index ad880405e..1aa846c32 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -31,6 +31,7 @@ ExecStartPre=-/usr/bin/docker kill busybox1
ExecStartPre=-/usr/bin/docker rm busybox1
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox1 busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
+ExecStop=/usr/bin/docker stop busybox1
```
Run the start command to start up the container on the cluster:
From 06563e8f875c3dc0958c5a3547f5b9a6784aa7d2 Mon Sep 17 00:00:00 2001
From: Jonathan Boulle
Date: Tue, 26 Aug 2014 13:37:43 -0700
Subject: [PATCH 0201/1291] launching-containers: update service-discovery link
---
.../launching/launching-containers-fleet/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 1aa846c32..5ad83d924 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -101,7 +101,7 @@ How do we route requests to these containers? The best strategy is to run a "sid
## Run a Simple Sidekick
-The simplest sidekick example is for [service discovery](https://github.com/coreos/fleet/blob/master/Documentation/service-discovery.md). This unit blindly announces that our container has been started. We'll run one of these for each Apache unit that's already running. Make two copies of the unit called `apache-discovery.1.service` and `apache-discovery.2.service`. Be sure to change all instances of `apache.1.service` to `apache.2.service` and `apache1` to `apache2` when you create the second unit.
+The simplest sidekick example is for [service discovery](https://github.com/coreos/fleet/blob/master/Documentation/examples/service-discovery.md). This unit blindly announces that our container has been started. We'll run one of these for each Apache unit that's already running. Make two copies of the unit called `apache-discovery.1.service` and `apache-discovery.2.service`. Be sure to change all instances of `apache.1.service` to `apache.2.service` and `apache1` to `apache2` when you create the second unit.
```ini
[Unit]
From a90d31d218840874d072c1ed528fb3f1567b3d1d Mon Sep 17 00:00:00 2001
From: Jonathan Boulle
Date: Tue, 26 Aug 2014 14:24:25 -0700
Subject: [PATCH 0202/1291] launching-containers: update more fleet links
---
.../launching/launching-containers-fleet/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 5ad83d924..8dd11a0af 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -156,7 +156,7 @@ If you're running in the cloud, many services have APIs that can be automated ba
Applications with complex and specific requirements can target a subset of the cluster for scheduling via machine metadata. Powerful deployment topologies can be achieved — schedule units based on the machine's region, rack location, disk speed or anything else you can think of.
-Metadata can be provided via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) or a [config file](https://github.com/coreos/fleet/blob/master/Documentation/configuration.md). Here's an example config file:
+Metadata can be provided via [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config/#coreos) or a [config file](https://github.com/coreos/fleet/blob/master/Documentation/deployment-and-configuration.md). Here's an example config file:
```ini
# Comma-delimited key/value pairs that are published to the fleet registry.
@@ -211,4 +211,4 @@ X-ConditionMachineMetadata=region=east
#### More Information
Example Deployment with fleet
fleet Unit Specifications
-fleet Configuration
+fleet Configuration
From 197342d958b561cb2fb8b6f2cf3e8084883eab75 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 27 Aug 2014 11:10:45 -0700
Subject: [PATCH 0203/1291] registry: initial commit
---
.../configure-machines/index.md | 68 +++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 enterprise-registry/configure-machines/index.md
diff --git a/enterprise-registry/configure-machines/index.md b/enterprise-registry/configure-machines/index.md
new file mode 100644
index 000000000..1863c7a90
--- /dev/null
+++ b/enterprise-registry/configure-machines/index.md
@@ -0,0 +1,68 @@
+---
+layout: docs
+title: Configure Machines for Enterprise Registry
+category: registry
+sub_category: usage
+forkurl: https://github.com/coreos/docs/blob/master/enterprise-registry/configure-machines/index.md
+weight: 5
+---
+
+# Configure Machines for Enterprise Registry
+
+The Enterprise Registry allows you to create teams and user accounts that match your existing business unit organization. A special type of user, a robot account, is designed to be used programmatically by deployment systems and other pieces of software. Robot accounts are commonly configured with read-only access to an organization's repositories.
+
+This guide assumes you have the DNS record `registry.example.com` configured to point to your Enterprise Registry.
+
+## Credentials
+
+Each CoreOS machine needs to be configured with the username and password for a robot account in order to deploy your containers. Docker looks for configured credentials in a `.dockercfg` file located within the user's home directory. You can download this file directly from the Enterprise Registry interface. Let's assume you've created a robot account called `myapp+deployment`.
+
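The credential file's `auth` value is just the base64 encoding of `username:password`. A minimal sketch of producing it yourself, using made-up `robot:secret` credentials:

```shell
# base64-encode "username:password" to get the .dockercfg "auth" value
printf '%s' 'robot:secret' | base64
```

The file downloaded from the Enterprise Registry interface already contains the correct value for your robot account; this only shows what the opaque string encodes.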
+The `.dockercfg` file can be written with the `write_files` parameter in [cloud-config]({{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config), or created manually on each machine.
+
+### Cloud-Config
+
+A snippet to configure the credentials via `write_files` looks like:
+
+```yaml
+#cloud-config
+
+write_files:
+ - path: /home/root/.dockercfg
+ permissions: 0644
+ content: |
+ {
+ "https://registry.example.com/v1/": {
+ "auth": "cm9ic3p1bXNrajYzUFFXSU9HSkhMUEdNMEISt0ZXN0OkdOWEVHWDRaSFhNUVVSMkI1WE9MM1k1S1R1VET0I1RUZWSVg3TFRJV1I3TFhPMUI=",
+ "email": ""
+ }
+ }
+```
+
+Each machine booted with this cloud-config should automatically be authenticated with your Enterprise Registry.
+
+### Manual Login
+
+To temporarily log in to an Enterprise Registry account on a machine, run `docker login`:
+
+```sh
+$ docker login registry.example.com
+Login against server at https://registry.example.com/v1/
+Username: myapp+deployment
+Password: GNXEGX4Y5J63PQWIOGJHLPGM0B5GUDOBZHXMQUR2B5XOL35EFVIX7LTIWR7LXO1B
+Email: myemail@example.com
+```
+
+## Test Push or Pull
+
+Now that your machine is authenticated, try pulling one of your repositories. If you haven't pushed a repository into your Enterprise Registry, you will need to tag it with the full name:
+
+```sh
+$ docker tag bf60637a656c registry.example.com/myapp
+$ docker push registry.example.com/myapp
+```
+
+If you already have images in your registry, test out a pull:
+
+```sh
$ docker pull registry.example.com/myapp
+```
From 5a34b2149f10647be42003ba07e286ec76fc3dba Mon Sep 17 00:00:00 2001
From: Kelsey Hightower
Date: Wed, 27 Aug 2014 11:23:57 -0700
Subject: [PATCH 0204/1291] add docs for date and timezone settings
---
.../configuring-date-and-timezone/index.md | 140 ++++++++++++++++++
1 file changed, 140 insertions(+)
create mode 100644 cluster-management/setup/configuring-date-and-timezone/index.md
diff --git a/cluster-management/setup/configuring-date-and-timezone/index.md b/cluster-management/setup/configuring-date-and-timezone/index.md
new file mode 100644
index 000000000..26a17993f
--- /dev/null
+++ b/cluster-management/setup/configuring-date-and-timezone/index.md
@@ -0,0 +1,140 @@
+---
+layout: docs
+title: Configuring Date and Timezone
+category: cluster_management
+sub_category: setting_up
+weight: 7
+---
+
+# Configuring Date and Timezone
+
+NTP is used to keep clocks in sync across machines in a CoreOS cluster. The ntpd service keeps each machine's local clock in sync with a configured set of time servers, and starts automatically by default. To check whether the ntpd service is running, run the following command:
+
+```
+systemctl status ntpd
+ntpd.service - Network Time Service
+ Loaded: loaded (/usr/lib64/systemd/system/ntpd.service; enabled)
+ Active: active (running) since Tue 2014-08-26 15:10:23 UTC; 4h 23min ago
+ Main PID: 483 (ntpd)
+ CGroup: /system.slice/ntpd.service
+ └─483 /usr/sbin/ntpd -g -n -u ntp:ntp -f /var/lib/ntp/ntp.drift
+```
+
+## Changing NTP time servers
+
+The ntpd service is configured via the `/etc/ntp.conf` configuration file. By default, systems sync time with NTP servers from ntp.org. If you would like to use a different set of NTP servers, edit `/etc/ntp.conf`:
+
+```
+# Common pool
+server 0.pool.example.com
+server 1.pool.example.com
+...
+```
+
+## Viewing the date and timezone settings with timedatectl
+
+The `timedatectl` command can be used to view and change timezone settings, as well as report the current time.
+
+```
+timedatectl status
+ Local time: Tue 2014-08-26 19:29:12 UTC
+ Universal time: Tue 2014-08-26 19:29:12 UTC
+ RTC time: Tue 2014-08-26 19:29:12
+ Time zone: UTC (UTC, +0000)
+ NTP enabled: no
+NTP synchronized: yes
+ RTC in local TZ: no
+ DST active: n/a
+```
+
+## Changing the system timezone
+
+Start by listing the available time zones:
+
+```
+timedatectl list-timezones
+Africa/Abidjan
+Africa/Accra
+Africa/Addis_Ababa
+…
+```
+
+Pick a timezone from the list and set it:
+
+```
+sudo timedatectl set-timezone America/New_York
+```
+
+Check the timezone status to view the changes:
+
+```
+timedatectl
+ Local time: Tue 2014-08-26 15:44:07 EDT
+ Universal time: Tue 2014-08-26 19:44:07 UTC
+ RTC time: Tue 2014-08-26 19:44:07
+ Time zone: America/New_York (EDT, -0400)
+ NTP enabled: yes
+NTP synchronized: yes
+ RTC in local TZ: no
+ DST active: yes
+ Last DST change: DST began at
+ Sun 2014-03-09 01:59:59 EST
+ Sun 2014-03-09 03:00:00 EDT
+ Next DST change: DST ends (the clock jumps one hour backwards) at
+ Sun 2014-11-02 01:59:59 EDT
+ Sun 2014-11-02 01:00:00 EST
+```
+
+## CoreOS Recommendations
+
+### What time should I use?
+
+To avoid time zone confusion and the complexities of adjusting clocks for daylight saving time, it's recommended that all machines in a CoreOS cluster use Coordinated Universal Time (UTC).
+
+```
+sudo timedatectl set-timezone UTC
+```
+
+### Which NTP servers should I sync against?
+
+Unless you have a highly reliable and precise time server pool you should stick to the default NTP servers from the ntp.org server pool.
+
+```
+server 0.pool.ntp.org
+server 1.pool.ntp.org
+server 2.pool.ntp.org
+server 3.pool.ntp.org
+```
+
+## Automating with cloud-config
+
+The following cloud-config snippet can be used to set up and configure NTP and timezone settings:
+
+```
+#cloud-config
+
+coreos:
+ units:
+ - name: settimezone.service
+ command: start
+ content: |
+ [Unit]
+ Description=Set the timezone
+
+ [Service]
+ ExecStart=/usr/bin/timedatectl set-timezone UTC
+ RemainAfterExit=yes
+ Type=oneshot
+write_files:
+ - path: /etc/ntp.conf
+ content: |
+ # Common pool
+ server 0.pool.ntp.org
+ server 1.pool.ntp.org
+
+ # - Allow only time queries, at a limited rate.
+ # - Allow all local queries (IPv4, IPv6)
+ restrict default nomodify nopeer noquery limited kod
+ restrict 127.0.0.1
+ restrict [::1]
+```
From 73512366a4699651bb5988b5b259e24c9293d5c0 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 27 Aug 2014 13:40:25 -0700
Subject: [PATCH 0205/1291] config NTP: tweak title to prevent truncation
---
cluster-management/setup/configuring-date-and-timezone/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/configuring-date-and-timezone/index.md b/cluster-management/setup/configuring-date-and-timezone/index.md
index 26a17993f..525cd2213 100644
--- a/cluster-management/setup/configuring-date-and-timezone/index.md
+++ b/cluster-management/setup/configuring-date-and-timezone/index.md
@@ -1,6 +1,6 @@
---
layout: docs
-title: Configuring Date and Timezone
+title: Configuring Date & Timezone (NTP)
category: cluster_management
sub_category: setting_up
weight: 7
From 0eb13b7cb3abe4bbd13d39e62b725a9fc1b86764 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Fri, 29 Aug 2014 19:57:47 -0700
Subject: [PATCH 0206/1291] cluster-discovery: fix broken link to yamllint.com
---
cluster-management/setup/cluster-discovery/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
index 70af97c74..7e9092bed 100644
--- a/cluster-management/setup/cluster-discovery/index.md
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -73,7 +73,7 @@ It's also possible that a discovery URL can contain no existing addresses, becau
### Invalid Cloud-Config
-The most common problem with cluster discovery is using invalid cloud-config, which will prevent the cloud-config from being applied to the machine. The YAML format uses indention to represent data hierarchy, which makes it easy to create an invalid cloud-config. You should always run newly written cloud-config through a [YAML validator](yamllint.com).
+The most common problem with cluster discovery is using invalid cloud-config, which will prevent the cloud-config from being applied to the machine. The YAML format uses indentation to represent data hierarchy, which makes it easy to create an invalid cloud-config. You should always run newly written cloud-config through a [YAML validator](http://www.yamllint.com).
Unfortunately, if you are providing an SSH-key via cloud-config, it can be hard to read the `coreos-cloudinit` log to find out what's wrong. If you're using a cloud provider, you can normally provide an SSH-key via another method which will allow you to log in. If you're running on bare metal, the [coreos.autologin]({{site.url}}/docs/running-coreos/bare-metal/booting-with-pxe/#setting-up-pxelinux.cfg) kernel option will bypass authentication, letting you read the journal.
@@ -153,4 +153,4 @@ The public discovery service is just an etcd cluster made available to the publi
Since etcd is designed to this type of leader election, it was an obvious choice to use it for everyone's initial leader election. This means that it's easy to run your own etcd cluster for this purpose.
-If you're interested in how to discovery API works behind the scenes in etcd, read about the [Discovery Protocol](https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md).
\ No newline at end of file
+If you're interested in how the discovery API works behind the scenes in etcd, read about the [Discovery Protocol](https://github.com/coreos/etcd/blob/master/Documentation/discovery-protocol.md).
From 98d3d89aa098a8aca92dd809a329ff5a55409b7d Mon Sep 17 00:00:00 2001
From: Dennis Benkert
Date: Sat, 30 Aug 2014 10:55:08 +0200
Subject: [PATCH 0207/1291] Fixed typo in systemd's "Getting started" guide
---
.../launching/getting-started-with-systemd/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/launching-containers/launching/getting-started-with-systemd/index.md b/launching-containers/launching/getting-started-with-systemd/index.md
index 74cd91f51..416ab392b 100644
--- a/launching-containers/launching/getting-started-with-systemd/index.md
+++ b/launching-containers/launching/getting-started-with-systemd/index.md
@@ -85,7 +85,7 @@ systemd provides a high degree of functionality in your unit files. Here's a cur
The full list is located on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.service.html).
-Let's put a few of these concepts togther to register new units within etcd. Imagine we had another container running that would read these values from etcd and act upon them.
+Let's put a few of these concepts together to register new units within etcd. Imagine we had another container running that would read these values from etcd and act upon them.
We can use `ExecStartPre` to scrub existing container state. The `docker kill` will force any previous copy of this container to stop, which is useful if we restarted the unit but docker didn't stop the container for some reason. The `=-` is systemd syntax to ignore errors for this command. We need it because docker will return a non-zero exit code if we try to stop a container that doesn't exist. Since we want the container stopped, we don't consider this an error, so we tell systemd to ignore the possible failure.
From 6bc5bd07aa8ede6a11cb273857e9ded4d59d7358 Mon Sep 17 00:00:00 2001
From: Alexandr Morozov
Date: Sun, 31 Aug 2014 18:35:25 +0400
Subject: [PATCH 0208/1291] Fix directory to change
---
running-coreos/platforms/libvirt/index.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/running-coreos/platforms/libvirt/index.md b/running-coreos/platforms/libvirt/index.md
index 77752b7a8..caa2fa9cc 100644
--- a/running-coreos/platforms/libvirt/index.md
+++ b/running-coreos/platforms/libvirt/index.md
@@ -39,21 +39,21 @@ Read the [release notes]({{site.url}}/releases) for specific features and bug fi
We start by downloading the most recent disk image:
From 89c3e714ce9c5178813b65b135b92045a26394f2 Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 13:41:24 -0400
Subject: [PATCH 0209/1291] Add enterprise registry setup instructions
---
enterprise-registry/initial-setup/index.md | 175 +++++++++++++++++++++
1 file changed, 175 insertions(+)
create mode 100644 enterprise-registry/initial-setup/index.md
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
new file mode 100644
index 000000000..e770b606f
--- /dev/null
+++ b/enterprise-registry/initial-setup/index.md
@@ -0,0 +1,175 @@
+---
+layout: docs
+title: Initial Setup of CoreOS Enterprise Registry
+category: registry
+sub_category: setup
+forkurl: https://github.com/coreos/docs/blob/master/enterprise-registry/initial-setup/index.md
+weight: 5
+---
+
+# Initial Setup of CoreOS Enterprise Registry
+
+## Introduction
+
+CoreOS Enterprise Registry requires four components to operate successfully:
+- A supported database (MySQL, Postgres)
+- A Redis instance (for real-time events)
+- A config.yaml file
+- The Enterprise Registry image
+
+
+## Preparing the Database
+
+A MySQL or Postgres installation with an empty database is required, along with a login that has full access to that database. The schema will be created the first time the registry image is run.
+
+Please have the URL for the login and database available in the SQLAlchemy format:
+
+### For MySQL:
+`mysql+pymysql://:@/`
+
+### For Postgres:
+`postgresql://:@/`
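
For example, with a hypothetical user, password, host, and database name, the URIs take this shape:

```
mysql+pymysql://quay:secret@db.example.com/registrydb
postgresql://quay:secret@db.example.com/registrydb
```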
+
+
+## Setting up Redis
+
+Redis stores data which must be accessed quickly but doesn’t necessarily require durability guarantees. If you have an existing Redis instance, make sure to accept incoming connections on port 6379 and then feel free to skip this step.
+
+To run Redis, pull and run the Quay.io Redis image:
+
+```
+sudo docker pull quay.io/quay/redis
+sudo docker run -d -p 6379:6379 quay.io/quay/redis
+```
+
+**NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
+
+
+## Writing a config.yaml
+
+CoreOS Enterprise Registry requires a `config.yaml` file.
+
+Sample configuration can be found below. Any fields marked as `(FILL IN HERE)` are required to be edited.
+
+ # A unique secret key. This should be a UUID or some other secret
+ # string.
+ SECRET_KEY: '(FILL IN HERE: secret key)'
+
+ # Should be 'https' if SSL is used and 'http' otherwise.
+ PREFERRED_URL_SCHEME: '(FILL IN HERE: "https" or "http")'
+
+ # The HTTP host (and optionally the port number) of the location
+ # where the registry will be accessible on the network.
+ SERVER_HOSTNAME: '(FILL IN HERE: registry.mycorp.com)'
+
+ # A logo to use for your enterprise
+ ENTERPRISE_LOGO_URL: '(FILL IN HERE: http://someurl/...)'
+
+ # Settings for SMTP and mailing. This is *required*.
+ MAIL_PORT: 587
+ MAIL_PASSWORD: '(FILL IN HERE: password)'
+ MAIL_SERVER: '(FILL IN HERE: hostname)'
+ MAIL_USERNAME: '(FILL IN HERE: username)'
+ MAIL_USE_TLS: true
+
+ # The database URI for your MySQL or Postgres DB.
+ DB_URI: '(FILL IN HERE: database uri)'
+
+ # References to the REDIS host setup above. Note that this does
+ # not include the port, but merely the hostname/ip.
+ BUILDLOGS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
+ USER_EVENTS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
+
+ # The usernames of your super-users, if any. Super users will
+ # have the ability to view and delete other users.
+ SUPER_USERS: []
+
+ # Either 'Database' or 'LDAP'.
+ # If LDAP, additional configuration is required below.
+ AUTHENTICATION_TYPE: 'Database'
+
+ # Should always be 'local'.
+ DISTRIBUTED_STORAGE_PREFERENCE: ['local']
+
+ # Defines the kind of storage used by the registry:
+ # LocalStorage: Registry data is stored on a local mounted volume
+ #
+ # Required fields:
+ # storage_path: The path under the mounted volume
+ #
+ # S3Storage: Registry data is stored in Amazon S3
+ #
+ # Required fields:
+ # storage_path: The path under the S3 bucket
+ # s3_access_key: The S3 access key
+ # s3_secret_key: The S3 secret key
+ # s3_bucket: The S3 bucket
+ #
+ # GoogleCloudStorage: Registry data is stored in GCS
+ #
+ # Required fields:
+ # storage_path: The path under the GCS bucket
+ # access_key: The GCS access key
+ # secret_key: The GCS secret key
+ # bucket_name: The GCS bucket
+ #
+ DISTRIBUTED_STORAGE_CONFIG:
+ local:
+ # The name of the storage provider
+ - LocalStorage
+
+ # Fields, in dictionary form
+ - {'storage_path': '/datastorage/registry'}
+
+ # LDAP information (only needed if `LDAP` is chosen above).
+ # LDAP_URI: 'ldap://localhost'
+ # LDAP_ADMIN_DN: 'cn=admin,dc=devtable,dc=com'
+ # LDAP_ADMIN_PASSWD: 'secret'
+ # LDAP_BASE_DN: ['dc=devtable', 'dc=com']
+ # LDAP_EMAIL_ATTR: 'mail'
+ # LDAP_UID_ATTR: 'uid'
+ # LDAP_USER_RDN: ['ou=People']
+
+ # Where user files (uploaded build packs, other binary data)
+ # are stored.
+ USERFILES_PATH: 'datastorage/userfiles'
+ USERFILES_TYPE: 'LocalUserfiles'
+
+ # Required constants.
+ TESTING: false
+ USE_CDN: false
+ FEATURE_USER_LOG_ACCESS: true
+ FEATURE_BUILD_SUPPORT: false
+
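Before moving on, it can help to confirm that every placeholder in the sample above was actually replaced. This small helper is a suggestion, not part of the official setup; it fails if any `(FILL IN HERE)` marker remains in the file you pass it:

```shell
# Fails (nonzero exit) if any "FILL IN HERE" placeholder remains.
check_config() {
  ! grep -q 'FILL IN HERE' "$1"
}

# Example: check_config config/config.yaml && echo "no placeholders left"
```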
+
+## Setting up the directories
+
+CoreOS Enterprise Registry requires a storage directory and a configuration directory containing the `config.yaml` and, if SSL is used, two files named `ssl.cert` and `ssl.key`:
+
+ mkdir storage
+ mkdir config
+ mv config.yaml config/config.yaml
+ cp my-ssl-cert config/ssl.cert
+ cp my-ssl-key config/ssl.key
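If SSL is used, a mismatched certificate/key pair is a common source of startup trouble. The helper below (an optional check, assuming `openssl` is installed and an RSA key) verifies that the two files belong together by comparing their modulus digests:

```shell
# Optional: confirm ssl.cert and ssl.key form a matching RSA pair.
certs_match() {
  cert_md5=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_md5=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
  [ "$cert_md5" = "$key_md5" ]
}

# Example: certs_match config/ssl.cert config/ssl.key && echo "pair OK"
```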
+
+
+## Pulling the CoreOS Enterprise Registry image
+
+As part of the setup package, a set of pull credentials has been included. To pull the CoreOS Enterprise Registry image, run `docker login` and then `docker pull`:
+
+ docker login quay.io
+ Username: (the username given)
+ Password: (the password given)
+ E-mail: (put anything here)
+
+ docker pull quay.io/coreos/registry:latest
+
+
+## Running the CoreOS Enterprise Registry image
+
+The CoreOS Enterprise Registry is run via a `docker run` call, with `config` and `storage` being the directories created above.
+
+ docker run -p 443:443 -p 80:80 --privileged=true -v config:/conf/stack -v storage:/datastorage -d quay.io/coreos/registry
+
+
+## Verifying that CoreOS Enterprise Registry is running
+
+Visit the `/status` endpoint on the registry hostname and verify it returns true for both variables.
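The exact shape of the `/status` response is not specified here, so the sketch below only assumes it is a small body containing boolean values, and fails if any of them is `false`:

```shell
# Succeeds only when the given status body contains no "false" values.
status_body_ok() {
  ! printf '%s' "$1" | grep -q 'false'
}

# With curl available, combine it like so (hostname is a placeholder):
#   status_body_ok "$(curl -sk https://registry.mycorp.com/status)"
```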
From fb7007157f452de5e571c021067cbc24c604971e Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 13:43:44 -0400
Subject: [PATCH 0210/1291] See if Github will syntax highlight the yaml block
---
enterprise-registry/initial-setup/index.md | 181 +++++++++++----------
1 file changed, 91 insertions(+), 90 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index e770b606f..a548776b3 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -49,96 +49,97 @@ CoreOS Enterprise Registry requires a `config.yaml` file.
Sample configuration can be found below. Any fields marked as `(FILL IN HERE)` are required to be edited.
- # A unique secret key. This should be a UUID or some other secret
- # string.
- SECRET_KEY: '(FILL IN HERE: secret key)'
-
- # Should be 'https' if SSL is used and 'http' otherwise.
- PREFERRED_URL_SCHEME: '(FILL IN HERE: "https" or "http")'
-
- # The HTTP host (and optionally the port number) of the location
- # where the registry will be accessible on the network.
- SERVER_HOSTNAME: '(FILL IN HERE: registry.mycorp.com)'
-
- # A logo to use for your enterprise
- ENTERPRISE_LOGO_URL: '(FILL IN HERE: http://someurl/...)'
-
- # Settings for SMTP and mailing. This is *required*.
- MAIL_PORT: 587
- MAIL_PASSWORD: '(FILL IN HERE: password)'
- MAIL_SERVER: '(FILL IN HERE: hostname)'
- MAIL_USERNAME: '(FILL IN HERE: username)'
- MAIL_USE_TLS: true
-
- # The database URI for your MySQL or Postgres DB.
- DB_URI: '(FILL IN HERE: database uri)'
-
- # References to the REDIS host setup above. Note that this does
- # not include the port, but merely the hostname/ip.
- BUILDLOGS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
- USER_EVENTS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
-
- # The usernames of your super-users, if any. Super users will
- # have the ability to view and delete other users.
- SUPER_USERS: []
-
- # Either 'Database' or 'LDAP'.
- # If LDAP, additional configuration is required below.
- AUTHENTICATION_TYPE: 'Database'
-
- # Should always be 'local'.
- DISTRIBUTED_STORAGE_PREFERENCE: ['local']
-
- # Defines the kind of storage used by the registry:
- # LocalStorage: Registry data is stored on a local mounted volume
- #
- # Required fields:
- # storage_path: The path under the mounted volume
- #
- # S3Storage: Registry data is stored in Amazon S3
- #
- # Required fields:
- # storage_path: The path under the S3 bucket
- # s3_access_key: The S3 access key
- # s3_secret_key: The S3 secret key
- # s3_bucket: The S3 bucket
- #
- # GoogleCloudStorage: Registry data is stored in GCS
- #
- # Required fields:
- # storage_path: The path under the GCS bucket
- # access_key: The GCS access key
- # secret_key: The GCS secret key
- # bucket_name: The GCS bucket
- #
- DISTRIBUTED_STORAGE_CONFIG:
- local:
- # The name of the storage provider
- - LocalStorage
-
- # Fields, in dictionary form
- - {'storage_path': '/datastorage/registry'}
-
- # LDAP information (only needed if `LDAP` is chosen above).
- # LDAP_URI: 'ldap://localhost'
- # LDAP_ADMIN_DN: 'cn=admin,dc=devtable,dc=com'
- # LDAP_ADMIN_PASSWD: 'secret'
- # LDAP_BASE_DN: ['dc=devtable', 'dc=com']
- # LDAP_EMAIL_ATTR: 'mail'
- # LDAP_UID_ATTR: 'uid'
- # LDAP_USER_RDN: ['ou=People']
-
- # Where user files (uploaded build packs, other binary data)
- # are stored.
- USERFILES_PATH: 'datastorage/userfiles'
- USERFILES_TYPE: 'LocalUserfiles'
-
- # Required constants.
- TESTING: false
- USE_CDN: false
- FEATURE_USER_LOG_ACCESS: true
- FEATURE_BUILD_SUPPORT: false
-
+```yaml
+# A unique secret key. This should be a UUID or some other secret
+# string.
+SECRET_KEY: '(FILL IN HERE: secret key)'
+
+# Should be 'https' if SSL is used and 'http' otherwise.
+PREFERRED_URL_SCHEME: '(FILL IN HERE: "https" or "http")'
+
+# The HTTP host (and optionally the port number) of the location
+# where the registry will be accessible on the network.
+SERVER_HOSTNAME: '(FILL IN HERE: registry.mycorp.com)'
+
+# A logo to use for your enterprise
+ENTERPRISE_LOGO_URL: '(FILL IN HERE: http://someurl/...)'
+
+# Settings for SMTP and mailing. This is *required*.
+MAIL_PORT: 587
+MAIL_PASSWORD: '(FILL IN HERE: password)'
+MAIL_SERVER: '(FILL IN HERE: hostname)'
+MAIL_USERNAME: '(FILL IN HERE: username)'
+MAIL_USE_TLS: true
+
+# The database URI for your MySQL or Postgres DB.
+DB_URI: '(FILL IN HERE: database uri)'
+
+# References to the REDIS host setup above. Note that this does
+# not include the port, but merely the hostname/ip.
+BUILDLOGS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
+USER_EVENTS_REDIS_HOSTNAME: '(FILL IN HERE: redis host)'
+
+# The usernames of your super-users, if any. Super users will
+# have the ability to view and delete other users.
+SUPER_USERS: []
+
+# Either 'Database' or 'LDAP'.
+# If LDAP, additional configuration is required below.
+AUTHENTICATION_TYPE: 'Database'
+
+# Should always be 'local'.
+DISTRIBUTED_STORAGE_PREFERENCE: ['local']
+
+# Defines the kind of storage used by the registry:
+# LocalStorage: Registry data is stored on a local mounted volume
+#
+# Required fields:
+# storage_path: The path under the mounted volume
+#
+# S3Storage: Registry data is stored in Amazon S3
+#
+# Required fields:
+# storage_path: The path under the S3 bucket
+# s3_access_key: The S3 access key
+# s3_secret_key: The S3 secret key
+# s3_bucket: The S3 bucket
+#
+# GoogleCloudStorage: Registry data is stored in GCS
+#
+# Required fields:
+# storage_path: The path under the GCS bucket
+# access_key: The GCS access key
+# secret_key: The GCS secret key
+# bucket_name: The GCS bucket
+#
+DISTRIBUTED_STORAGE_CONFIG:
+ local:
+ # The name of the storage provider
+ - LocalStorage
+
+ # Fields, in dictionary form
+ - {'storage_path': '/datastorage/registry'}
+
+# LDAP information (only needed if `LDAP` is chosen above).
+# LDAP_URI: 'ldap://localhost'
+# LDAP_ADMIN_DN: 'cn=admin,dc=devtable,dc=com'
+# LDAP_ADMIN_PASSWD: 'secret'
+# LDAP_BASE_DN: ['dc=devtable', 'dc=com']
+# LDAP_EMAIL_ATTR: 'mail'
+# LDAP_UID_ATTR: 'uid'
+# LDAP_USER_RDN: ['ou=People']
+
+# Where user files (uploaded build packs, other binary data)
+# are stored.
+USERFILES_PATH: 'datastorage/userfiles'
+USERFILES_TYPE: 'LocalUserfiles'
+
+# Required constants.
+TESTING: false
+USE_CDN: false
+FEATURE_USER_LOG_ACCESS: true
+FEATURE_BUILD_SUPPORT: false
+```
## Setting up the directories
From 137a3171275a6fafc5582c21b8152300b3a0be48 Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 13:47:18 -0400
Subject: [PATCH 0211/1291] Fix indentation on the Redis pull block
---
enterprise-registry/initial-setup/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index a548776b3..483302745 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -37,8 +37,8 @@ Redis stores data which must be accessed quickly but doesn’t necessarily requi
To run redis, simply pull and run the Quay.io Redis image:
-```sudo docker pull quay.io/quay/redis
-sudo docker run -d -p 6379:6379 quay.io/quay/redis```
+ sudo docker pull quay.io/quay/redis
+ sudo docker run -d -p 6379:6379 quay.io/quay/redis
**NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
From 6181e676ee9f700c144253c6fd6acdacf2274102 Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 13:48:29 -0400
Subject: [PATCH 0212/1291] Apparently Github doesn't like my tabs
---
enterprise-registry/initial-setup/index.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index 483302745..a3e62d4c1 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -37,8 +37,10 @@ Redis stores data which must be accessed quickly but doesn’t necessarily requi
To run redis, simply pull and run the Quay.io Redis image:
- sudo docker pull quay.io/quay/redis
- sudo docker run -d -p 6379:6379 quay.io/quay/redis
+```
+sudo docker pull quay.io/quay/redis
+sudo docker run -d -p 6379:6379 quay.io/quay/redis
+```
**NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
From 7abef3bf4080c8391d7646331ea3650ce919862e Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 14:11:06 -0400
Subject: [PATCH 0213/1291] Addressing comments
---
enterprise-registry/initial-setup/index.md | 24 ++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index a3e62d4c1..8e71c1771 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -20,7 +20,7 @@ CoreOS Enterprise Registry requires four components to operate successfully:
## Preparing the Database
-A MySQL RDBMS or Postgres installation with an empty database is required, and a login with full access to said database. The schema will be created the first time the registry image is run.
+A MySQL RDBMS or Postgres installation with an empty database is required, and a login with full access to said database. The schema will be created the first time the registry image is run. The database install can either be pre-existing or run on CoreOS via a Docker container.
Please have the url for the login and database available in the SQLAlchemy format:
@@ -45,9 +45,9 @@ sudo docker run -d -p 6379:6379 quay.io/quay/redis
**NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
-## Writing a config.yaml
+## Enterprise Registry Config File
-CoreOS Enterprise Registry requires a `config.yaml` file.
+CoreOS Enterprise Registry requires a `config.yaml` file that stores database connection information, the storage location of your containers, and other important settings.
Sample configuration can be found below. Any fields marked as `(FILL IN HERE)` are required to be edited.
@@ -143,7 +143,7 @@ FEATURE_USER_LOG_ACCESS: true
FEATURE_BUILD_SUPPORT: false
```
-## Setting up the directories
+## Setting up the Directories
CoreOS Enterprise registry requires a storage directory and a configuration directory containing the `config.yaml`, and, if SSL is used, two files named `ssl.cert` and `ssl.key`:
@@ -168,11 +168,23 @@ As part of the setup package, a set of pull credentials have been included. To p
## Running the CoreOS Enterprise Registry image
-The CoreOS Enterprise Registry is run via a `docker run` call, with the `` and `` being the directories created above.
+The CoreOS Enterprise Registry is run via a `docker run` call, with the `config` and `storage` being the directories created above.
- docker run -p 443:443 -p 80:80 --privileged=true -v :/conf/stack -v :/datastorage -d quay.io/coreos/registry
+ docker run -p 443:443 -p 80:80 --privileged=true -v config:/conf/stack -v storage:/datastorage -d quay.io/coreos/registry
## Verifying that CoreOS Enterprise Registry is running
Visit the `/status` endpoint on the registry hostname and verify it returns true for both variables.
+
+
+## Logging in
+
+### If using database authentication:
+
+Once the Enterprise Registry is running, new users can be created by clicking the `Sign Up` button. The sign up process will require an e-mail confirmation step, after which repositories, organizations and teams can be set up by the user.
+
+
+### If using LDAP authentication:
+
+Users should be able to login to the Enterprise Registry directly with their LDAP username and password.
\ No newline at end of file
From fd4b45c93633926105852fbb70fa2bb076574714 Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 14:40:55 -0400
Subject: [PATCH 0214/1291] Add newline so all doc handlers make the next few
lines into a list
---
enterprise-registry/initial-setup/index.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index 8e71c1771..34a61aa85 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -12,6 +12,7 @@ weight: 5
## Introduction
CoreOS Enterprise Registry requires four components to operate successfully:
+
- A supported database (MySQL, Postgres)
- A Redis instance (for real-time events)
- A config.yaml file
From d607b60bcfba6055b3f549342b1b6105f14935d7 Mon Sep 17 00:00:00 2001
From: Joseph Schorr
Date: Tue, 2 Sep 2014 14:42:52 -0400
Subject: [PATCH 0215/1291] Make some of the titles shorter since they are
displayed in the sidebar
---
enterprise-registry/initial-setup/index.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index 34a61aa85..be8851433 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -155,7 +155,7 @@ CoreOS Enterprise registry requires a storage directory and a configuration dire
cp my-ssl-key config/ssl.key
-## Pulling the CoreOS Enterprise Registry image
+## Pulling the Registry image
As part of the setup package, a set of pull credentials have been included. To pull the CoreOS Enterprise Registry image, run a `docker login` and then a `docker pull`:
@@ -167,14 +167,14 @@ As part of the setup package, a set of pull credentials have been included. To p
docker pull quay.io/coreos/registry:latest
-## Running the CoreOS Enterprise Registry image
+## Running the Registry
The CoreOS Enterprise Registry is run via a `docker run` call, with the `config` and `storage` being the directories created above.
docker run -p 443:443 -p 80:80 --privileged=true -v config:/conf/stack -v storage:/datastorage -d quay.io/coreos/registry
-## Verifying that CoreOS Enterprise Registry is running
+## Verifying the Registry status
Visit the `/status` endpoint on the registry hostname and verify it returns true for both variables.
From 29a34a6ce6b1bb9afd3ceda5f606151b9bf5d076 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Tue, 2 Sep 2014 13:18:01 -0700
Subject: [PATCH 0216/1291] enterprise-registry: minor title changes
---
enterprise-registry/initial-setup/index.md | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/enterprise-registry/initial-setup/index.md b/enterprise-registry/initial-setup/index.md
index be8851433..2e196ac25 100644
--- a/enterprise-registry/initial-setup/index.md
+++ b/enterprise-registry/initial-setup/index.md
@@ -1,15 +1,13 @@
---
layout: docs
-title: Initial Setup of CoreOS Enterprise Registry
+title: On-Premise Installation
category: registry
sub_category: setup
forkurl: https://github.com/coreos/docs/blob/master/enterprise-registry/initial-setup/index.md
weight: 5
---
-# Initial Setup of CoreOS Enterprise Registry
-
-## Introduction
+# On-Premise Installation
CoreOS Enterprise Registry requires four components to operate successfully:
@@ -188,4 +186,4 @@ Once the Enterprise Registry is running, new users can be created by clicking th
### If using LDAP authentication:
-Users should be able to login to the Enterprise Registry directly with their LDAP username and password.
\ No newline at end of file
+Users should be able to login to the Enterprise Registry directly with their LDAP username and password.
From b0c34d1c1374b99fe8126363df37be08f2d7d62c Mon Sep 17 00:00:00 2001
From: Nestor G Pestelos Jr
Date: Wed, 3 Sep 2014 19:41:06 -0700
Subject: [PATCH 0217/1291] cluster discovery: fix typo
---
cluster-management/setup/cluster-discovery/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cluster-management/setup/cluster-discovery/index.md b/cluster-management/setup/cluster-discovery/index.md
index 7e9092bed..e66e13aad 100644
--- a/cluster-management/setup/cluster-discovery/index.md
+++ b/cluster-management/setup/cluster-discovery/index.md
@@ -59,7 +59,7 @@ There are two interesting things happening during this process.
First, each machine is configured with the same discovery URL and etcd figured out what to do. This allows you to load the same cloud-config into an auto-scaling group and it will work whether it is the first or 30th machine in the group.
-Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster. Since etcd uses the Raft consensus algorithm, existing machines in the cluster already maintain a list of healty members in order for the algorithm to function properly. This list is given to the new machine and it starts normal operations with each of the other cluster members.
+Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster. Since etcd uses the Raft consensus algorithm, existing machines in the cluster already maintain a list of healthy members in order for the algorithm to function properly. This list is given to the new machine and it starts normal operations with each of the other cluster members.
## Existing Clusters
From c43a1b4754c076721c0f4a4932a24bb8e65999ee Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 3 Sep 2014 18:18:02 -0700
Subject: [PATCH 0218/1291] launching-containers-fleet: update list-units
output
---
.../launching-containers-fleet/index.md | 29 +++++++++----------
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 8dd11a0af..ffac118d8 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -39,13 +39,12 @@ Run the start command to start up the container on the cluster:
```sh
$ fleetctl start myapp.service
```
-
-Now list all of the units in the cluster to see the current status. The unit should have been scheduled to a machine in your cluster:
+The unit should have been scheduled to a machine in your cluster:
```sh
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
```
You can view all of the machines in the cluster by running `list-machines`:
@@ -89,15 +88,15 @@ Let's start both units and verify that they're on two different machines:
```sh
$ fleetctl start apache.*
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
-apache.1.service loaded active running My Apache Frontend 491586a6.../10.10.1.2
-apache.2.service loaded active running My Apache Frontend 148a18ff.../10.10.1.1
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
+apache.1.service 491586a6.../10.10.1.2 active running
+apache.2.service 148a18ff.../10.10.1.1 active running
```
As you can see, the Apache units are now running on two different machines in our cluster.
-How do we route requests to these containers? The best strategy is to run a "sidekick" container that performs other duties that are related to our main container but shouldn't be directly built into that application. Examples of common sidekick containers are for service discovery and controlling external services such as cloud load balancers.
+How do we route requests to these containers? The best strategy is to run a "sidekick" container that performs other duties that are related to our main container but shouldn't be directly built into that application. Examples of common sidekick containers are for service discovery and controlling external services such as cloud load balancers or DNS.
## Run a Simple Sidekick
@@ -127,12 +126,12 @@ Let's verify that each unit was placed on to the same machine as the Apache serv
```sh
$ fleetctl start apache-discovery.1.service
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
-apache.1.service loaded active running My Apache Frontend 491586a6.../10.10.1.2
-apache.2.service loaded active running My Apache Frontend 148a18ff.../10.10.1.1
-apache-discovery.1.service loaded active running Announce Apache1 491586a6.../10.10.1.2
-apache-discovery.2.service loaded active running Announce Apache2 148a18ff.../10.10.1.1
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
+apache.1.service 491586a6.../10.10.1.2 active running
+apache.2.service 148a18ff.../10.10.1.1 active running
+apache-discovery.1.service 491586a6.../10.10.1.2 active running
+apache-discovery.2.service 148a18ff.../10.10.1.1 active running
```
Now let's verify that the service discovery is working correctly:
From f3f189ad3cf716cc33e23e2c640154e96cfa7da7 Mon Sep 17 00:00:00 2001
From: Rob Szumski
Date: Wed, 3 Sep 2014 18:14:43 -0700
Subject: [PATCH 0219/1291] launching-containers-fleet: add global units
---
.../launching-containers-fleet/index.md | 113 ++++++++++++++----
1 file changed, 88 insertions(+), 25 deletions(-)
diff --git a/launching-containers/launching/launching-containers-fleet/index.md b/launching-containers/launching/launching-containers-fleet/index.md
index 8dd11a0af..e59f93246 100644
--- a/launching-containers/launching/launching-containers-fleet/index.md
+++ b/launching-containers/launching/launching-containers-fleet/index.md
@@ -15,6 +15,40 @@ If you're not familiar with systemd units, check out our [Getting Started with s
This guide assumes you're running `fleetctl` locally from a CoreOS machine that's part of a CoreOS cluster. You can also [control your cluster remotely]({{site.url}}/docs/launching-containers/launching/fleet-using-the-client/#get-up-and-running). All of the units referenced in this blog post are contained in the [unit-examples](https://github.com/coreos/unit-examples/tree/master/simple-fleet) repository. You can clone this onto your CoreOS box to make unit submission easier.
+## Types of Fleet Units
+
+Two types of units can be run in your cluster — standard and global units. Standard units are long-running processes that are scheduled onto a single machine. If that machine goes offline, the unit will be migrated onto a new machine and started.
+
+Global units will be run on all machines in the cluster. These are ideal for common services like monitoring agents or components of higher-level orchestration systems like Kubernetes, Mesos or OpenStack. There are two fleetctl commands to view units in the cluster: `list-unit-files`, which shows the units that fleet knows about and whether or not they are global, and `list-units`, which shows the current state of units actively loaded into machines in the cluster. Here's an example cluster with 3 machines, running both types of units:
+
+```sh
+$ fleetctl list-unit-files
+UNIT HASH DSTATE STATE TMACHINE
+global-unit.service 8ff68b9 launched launched 3 of 3
+standard-unit.service 7710e8a launched launched 148a18ff.../10.10.1.1
+```
+
+You can view all of the machines in the cluster by running `list-machines`:
+
+```sh
+$ fleetctl list-machines
+MACHINE IP METADATA
+148a18ff-6e95-4cd8-92da-c9de9bb90d5a 10.10.1.1 -
+491586a6-508f-4583-a71d-bfc4d146e996 10.10.1.2 -
+c9de9451-6a6f-1d80-b7e6-46e996bfc4d1 10.10.1.3 -
+```
+
+Now when looking at the status of units, we should expect to see 3 copies of global-unit.service - one running on each machine:
+
+```sh
+$ fleetctl list-units
+UNIT MACHINE ACTIVE SUB
+global-unit.service 148a18ff.../10.10.1.1 active running
+global-unit.service 491586a6.../10.10.1.2 active running
+global-unit.service c9de9451.../10.10.1.3 active running
+standard-unit.service 148a18ff.../10.10.1.1 active running
+```
+
## Run a Container in the Cluster
Running a single container is very easy. All you need to do is provide a regular unit file without an `[Install]` section. Let's run the same unit from the [Getting Started with systemd]({{site.url}}/docs/launching-containers/launching/getting-started-with-systemd) guide. First save these contents as `myapp.service` on the CoreOS machine:
@@ -39,23 +73,12 @@ Run the start command to start up the container on the cluster:
```sh
$ fleetctl start myapp.service
```
-
-Now list all of the units in the cluster to see the current status. The unit should have been scheduled to a machine in your cluster:
+The unit should have been scheduled to a machine in your cluster:
```sh
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
-```
-
-You can view all of the machines in the cluster by running `list-machines`:
-
-```sh
-$ fleetctl list-machines
-MACHINE IP METADATA
-148a18ff-6e95-4cd8-92da-c9de9bb90d5a 10.10.1.1 -
-491586a6-508f-4583-a71d-bfc4d146e996 10.10.1.2 -
-c9de9451-6a6f-1d80-b7e6-46e996bfc4d1 10.10.1.3 -
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
```
## Run a High Availability Service
@@ -89,15 +112,15 @@ Let's start both units and verify that they're on two different machines:
```sh
$ fleetctl start apache.*
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
-apache.1.service loaded active running My Apache Frontend 491586a6.../10.10.1.2
-apache.2.service loaded active running My Apache Frontend 148a18ff.../10.10.1.1
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
+apache.1.service 491586a6.../10.10.1.2 active running
+apache.2.service 148a18ff.../10.10.1.1 active running
```
As you can see, the Apache units are now running on two different machines in our cluster.
-How do we route requests to these containers? The best strategy is to run a "sidekick" container that performs other duties that are related to our main container but shouldn't be directly built into that application. Examples of common sidekick containers are for service discovery and controlling external services such as cloud load balancers.
+How do we route requests to these containers? The best strategy is to run a "sidekick" container that performs other duties that are related to our main container but shouldn't be directly built into that application. Examples of common sidekick containers are for service discovery and controlling external services such as cloud load balancers or DNS.
## Run a Simple Sidekick
@@ -127,12 +150,12 @@ Let's verify that each unit was placed on to the same machine as the Apache serv
```sh
$ fleetctl start apache-discovery.1.service
$ fleetctl list-units
-UNIT LOAD ACTIVE SUB DESC MACHINE
-myapp.service loaded active running MyApp c9de9451.../10.10.1.3
-apache.1.service loaded active running My Apache Frontend 491586a6.../10.10.1.2
-apache.2.service loaded active running My Apache Frontend 148a18ff.../10.10.1.1
-apache-discovery.1.service loaded active running Announce Apache1 491586a6.../10.10.1.2
-apache-discovery.2.service loaded active running Announce Apache2 148a18ff.../10.10.1.1
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
+apache.1.service 491586a6.../10.10.1.2 active running
+apache.2.service 148a18ff.../10.10.1.1 active running
+apache-discovery.1.service 491586a6.../10.10.1.2 active running
+apache-discovery.2.service 148a18ff.../10.10.1.1 active running
```
Now let's verify that the service discovery is working correctly:
@@ -152,6 +175,46 @@ If you're running in the cloud, many services have APIs that can be automated ba
+## Run a Global Unit
+
+As mentioned earlier, global units are useful for running a unit across all of the machines in your cluster. It doesn't differ very much from a regular unit other than a new `X-Fleet` parameter called `Global=true`. Here's an example unit from a [blog post about using Datadog with CoreOS](https://www.datadoghq.com/2014/08/monitor-coreos-scale-datadog/). You'll need to set an etcd key `ddapikey` before this example will work — more details are in the post.
+
+```ini
+[Unit]
+Description=Monitoring Service
+
+[Service]
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill dd-agent
+ExecStartPre=-/usr/bin/docker rm dd-agent
+ExecStartPre=/usr/bin/docker pull dd-agent
+ExecStart=/bin/sh -c '/usr/bin/docker run --privileged --name dd-agent -h "$(hostname)" \
+-v /var/run/docker.sock:/var/run/docker.sock \
+-v /proc/mounts:/host/proc/mounts:ro \
+-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
+-e API_KEY="$(etcdctl get /ddapikey)" \
+datadog/docker-dd-agent'
+
+[X-Fleet]
+Global=true
+```
+
+If we start this unit, it should be running on all 3 of our machines:
+
+```sh
+$ fleetctl start datadog.service
+$ fleetctl list-units
+UNIT MACHINE ACTIVE SUB
+myapp.service c9de9451.../10.10.1.3 active running
+apache.1.service 491586a6.../10.10.1.2 active running
+apache.2.service 148a18ff.../10.10.1.1 active running
+apache-discovery.1.service 491586a6.../10.10.1.2 active running
+apache-discovery.2.service 148a18ff.../10.10.1.1 active running
+datadog.service 148a18ff.../10.10.1.1 active running
+datadog.service 491586a6.../10.10.1.2 active running
+datadog.service c9de9451.../10.10.1.3 active running
+```
+
## Schedule Based on Machine Metadata
Applications with complex and specific requirements can target a subset of the cluster for scheduling via machine metadata. Powerful deployment topologies can be achieved by scheduling units based on the machine's region, rack location, disk speed, or anything else you can think of.
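As a sketch, a unit pinned by metadata might look like this; the `region` and `disk` keys are illustrative, and must match values that were passed to fleet via its `--metadata` flag when each machine started:

```ini
[Unit]
Description=App that needs fast disks in us-east

[Service]
ExecStart=/usr/bin/docker run --rm myapp

[X-Fleet]
# All MachineMetadata lines must match for a machine to be eligible
MachineMetadata=region=us-east
MachineMetadata=disk=ssd
```

Fleet will only schedule this unit onto machines whose metadata advertises both values.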
From 18fcdf7ab10ae8870e7a002cda3789bc44fa53f6 Mon Sep 17 00:00:00 2001
From: Alex Malinovich
Date: Thu, 4 Sep 2014 18:51:51 -0700
Subject: [PATCH 0220/1291] Fix references to GitHub
---
README.md | 2 +-
running-coreos/bare-metal/installing-to-disk/index.md | 2 +-
sdk-distributors/distributors/notes-for-distributors/index.md | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/README.md b/README.md
index 5eec20c62..777893b93 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,6 @@ documents as a pull request and follow two guidelines:
2. Add an explanation about the translated document to the top of the
file: "These documents were localized into Esperanto by Community
Member and last updated on 2014-04-04. If you
- find inaccuracies or problems please file an issue on Github."
+ find inaccuracies or problems please file an issue on GitHub."
Thank you for your contributions.
diff --git a/running-coreos/bare-metal/installing-to-disk/index.md b/running-coreos/bare-metal/installing-to-disk/index.md
index 9238cd1ed..0a990afa5 100644
--- a/running-coreos/bare-metal/installing-to-disk/index.md
+++ b/running-coreos/bare-metal/installing-to-disk/index.md
@@ -15,7 +15,7 @@ weight: 7
There is a simple installer that will destroy everything on the given target disk and install CoreOS.
Essentially it downloads an image, verifies it with gpg and then copies it bit for bit to disk.
-The script is self-contained and located [on Github here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install") and can be run from any Linux distribution.
+The script is self-contained and located [on GitHub here](https://raw.github.com/coreos/init/master/bin/coreos-install "coreos-install") and can be run from any Linux distribution.
If you have already booted CoreOS via PXE, the install script is already installed. By default the install script will attempt to install the same version and channel that was PXE-booted:
diff --git a/sdk-distributors/distributors/notes-for-distributors/index.md b/sdk-distributors/distributors/notes-for-distributors/index.md
index 51e8a74ab..a077ac1c1 100644
--- a/sdk-distributors/distributors/notes-for-distributors/index.md
+++ b/sdk-distributors/distributors/notes-for-distributors/index.md
@@ -40,7 +40,7 @@ End-users should be able to provide a cloud-config file to your platform while s
CoreOS machines running on Amazon EC2 utilize a two-step cloud-config process. First, a cloud-config file baked into the image runs systemd units that execute scripts to fetch the user-provided SSH key and fetch the [user-provided cloud-config][amazon-cloud-config] from the instance [user-data service][amazon-user-data-doc] on Amazon's internal network. Afterwards, the user-provided cloud-config, specified from either the web console or API, is parsed.
-You can find the [code for this process on Github][amazon-github]. End-user instructions for this process can be found on our [Amazon EC2 docs][amazon-cloud-config].
+You can find the [code for this process on GitHub][amazon-github]. End-user instructions for this process can be found on our [Amazon EC2 docs][amazon-cloud-config].
[amazon-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-ec2-compat
[amazon-user-data-doc]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html#instancedata-user-data-retrieval
@@ -50,7 +50,7 @@ You can find the [code for this process on Github][amazon-github]. End-user inst
Rackspace passes configuration data to a VM by mounting [config-drive][config-drive-docs], a special configuration drive containing machine-specific data, to the machine. Like Amazon EC2, CoreOS images for Rackspace contain a cloud-config file baked into the image that runs units to read from the config-drive. If a user-provided cloud-config file is found, it is parsed.
-You can find the [code for this process on Github][rackspace-github]. End-user instructions for this process can be found on our [Rackspace docs][rackspace-cloud-config].
+You can find the [code for this process on GitHub][rackspace-github]. End-user instructions for this process can be found on our [Rackspace docs][rackspace-cloud-config].
[rackspace-github]: https://github.com/coreos/coreos-overlay/tree/master/coreos-base/oem-rackspace
[rackspace-cloud-config]: {{site.url}}/docs/running-coreos/cloud-providers/rackspace#cloud-config
From 77e54b1fa9458eb05ae7034d1c9932a87eec1d27 Mon Sep 17 00:00:00 2001
From: Alex Crawford
Date: Wed, 3 Sep 2014 18:15:38 -0700
Subject: [PATCH 0221/1291] digitalocean: Add documentation
---
.../cloud-providers/digitalocean/index.md | 198 ++++++++++++++++++
.../cloud-providers/digitalocean/settings.png | Bin 0 -> 23875 bytes
2 files changed, 198 insertions(+)
create mode 100644 running-coreos/cloud-providers/digitalocean/index.md
create mode 100644 running-coreos/cloud-providers/digitalocean/settings.png
diff --git a/running-coreos/cloud-providers/digitalocean/index.md b/running-coreos/cloud-providers/digitalocean/index.md
new file mode 100644
index 000000000..0649630e9
--- /dev/null
+++ b/running-coreos/cloud-providers/digitalocean/index.md
@@ -0,0 +1,198 @@
+---
+layout: docs
+title: DigitalOcean
+category: running_coreos
+sub_category: cloud_provider
+supported: true
+weight: 1
+---
+
+# Running CoreOS on DigitalOcean
+
+## Choosing a Channel
+
+CoreOS is designed to be [updated automatically][update-docs] with different
+schedules per channel. You can [disable this feature][reboot-docs], although we
+don't recommend it. Read the [release notes][release-notes] for specific
+features and bug fixes.
+
+The following command will create a single droplet. For more details, check out
+Launching via the API.
+
+The Alpha channel closely tracks master and frequently has new releases. The
+newest versions of docker, etcd, and fleet will be available for testing. The
+current version is CoreOS {{site.data.alpha-channel.do-version}}.
+
+CoreOS on DigitalOcean is new, so there are no Stable images yet. Alpha images
+can be switched to the Stable channel.
+
+[update-docs]: {{site.url}}/using-coreos/updates
+[reboot-docs]: {{site.url}}/docs/cluster-management/debugging/prevent-reboot-after-update
+[release-notes]: {{site.url}}/releases
+
+## Cloud-Config
+
+CoreOS allows you to configure machine parameters, launch systemd units on
+startup, and more via cloud-config. Jump over to the [docs to learn about the
+supported features][cloud-config-docs]. Cloud-config is intended to bring up a
+cluster of machines into a minimal useful state and ideally shouldn't be used
+to configure anything that isn't standard across many hosts. Once a droplet is
+created on DigitalOcean, the cloud-config cannot be modified.
+
+You can provide raw cloud-config data to CoreOS via the DigitalOcean web
+console or via the DigitalOcean API.
+
+The most common cloud-config for DigitalOcean looks like:
+
+```yaml
+#cloud-config
+
+coreos:
+ etcd:
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new
+ discovery: https://discovery.etcd.io/
+ # multi-region deployments, multi-cloud deployments, and droplets without
+ # private networking need to use $public_ipv4
+ addr: $private_ipv4:4001
+ peer-addr: $private_ipv4:7001
+ units:
+ - name: etcd.service
+ command: start
+ - name: fleet.service
+ command: start
+```
+
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully
+supported in cloud-config on DigitalOcean. In order for `$private_ipv4` to be
+populated, the droplet must have private networking enabled.
+
+[do-cloud-config]: https://developers.digitalocean.com/#droplets
+[cloud-config-docs]: {{site.url}}/docs/cluster-management/setup/cloudinit-cloud-config
+
+### Adding More Machines
+To add more instances to the cluster, just launch more with the same
+cloud-config. New instances will join the cluster regardless of region.
+
+## Launching Droplets
+
+### Via the API
+
+For starters, generate a [Personal Access Token][do-token-settings] and save it
+in an environment variable:
+
+```sh
+read TOKEN
+# Enter your Personal Access Token
+```
+
+Upload your SSH key via [DigitalOcean's API][do-keys-docs] or the web console.
+Retrieve the SSH key ID via the ["list all keys"][do-list-keys-docs] method:
+
+```sh
+curl --request GET "https://api.digitalocean.com/v2/account/keys" \
+ --header "Authorization: Bearer $TOKEN"
+```
+
+Save the key ID from the previous command in an environment variable:
+
+```sh
+read SSH_KEY_ID
+# Enter your SSH key ID
+```
+
+Create a 512MB droplet with private networking in NYC3 from the CoreOS Alpha
+image:
+
+```sh
+curl --request POST "https://api.digitalocean.com/v2/droplets" \
+ --header "Content-Type: application/json" \
+ --header "Authorization: Bearer $TOKEN" \
+ --data '{
+ "region":"nyc3",
+ "image":"{{site.data.alpha-channel.do-image-path}}",
+ "size":"512mb",
+ "name":"core-1",
+ "private_networking":true,
+ "ssh_keys":['$SSH_KEY_ID'],
+ "user_data": "'"$(cat cloud-config.yaml)"'"
+}'
+```
+
+For more details, check out [DigitalOcean's API documentation][do-api-docs].
+
+[do-api-docs]: https://developers.digitalocean.com/#droplets
+[do-keys-docs]: https://developers.digitalocean.com/#keys
+[do-list-keys-docs]: https://developers.digitalocean.com/#list-all-keys
+[do-token-settings]: https://cloud.digitalocean.com/settings/applications
+
+### Via the Web Console
+
+1. Open the "new droplet" page in the web console.
+2. Give the machine a hostname, select the size, and choose a region.
+
+
+![Choosing a CoreOS channel](settings.png)
+
+3. Enable User Data and add your cloud-config in the text box.
+4. Select your SSH keys.
+
+Note that DigitalOcean is not able to inject a root password into CoreOS images
+as it does with other images. You'll need to add your SSH keys via the web
+console, or add keys or passwords via your cloud-config, in order to log in.
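+A minimal cloud-config fragment that adds an SSH key looks like this (the key
+material below is a placeholder; substitute your own public key):
+
+```yaml
+#cloud-config
+
+ssh_authorized_keys:
+  - ssh-rsa AAAAB3NzaC1yc2E... user@example.com
+```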
+
+## Using CoreOS
+
+Now that you have a machine booted, it's time to play around.
+Check out the [CoreOS Quickstart][quick-start] guide or dig into
+[more specific topics][docs].
+
+[quick-start]: {{site.url}}/docs/quickstart
+[docs]: {{site.url}}/docs
diff --git a/running-coreos/cloud-providers/digitalocean/settings.png b/running-coreos/cloud-providers/digitalocean/settings.png
new file mode 100644
index 0000000000000000000000000000000000000000..e82264d5bd27fc246288aaf9e9da29e9cd24f34c