The easiest way to deploy Cloud Foundry version 2 (a.k.a. "ng" or "next generation") seems to be via Vagrant. The official way is via BOSH, but Gastón Ramos from Altoros Systems has created a method which makes it much easier to spin up a single instance of Cloud Foundry v2 on Amazon EC2. With BOSH, we found we needed 14 instances to get up and running, and it took much longer.
Install The Installer
You start by git-cloning the cf-vagrant-installer repository from GitHub.
$ git clone https://github.com/Altoros/cf-vagrant-installer
$ cd cf-vagrant-installer
$ cat README.md
As you will see in the README.md, there are a few Vagrant-related dependencies, the first of which is Vagrant itself.
If you do not have Vagrant installed, you can install it from http://downloads.vagrantup.com/. I installed the .dmg for my Mac, which was pretty straightforward.
Install Vagrant Plugins
Three Vagrant plugins are required (assuming they have not changed): vagrant-berkshelf, which adds Berkshelf integration to the Chef provisioners; vagrant-omnibus, which ensures the desired version of Chef is installed via the platform-specific Omnibus packages; and vagrant-aws, which adds an AWS provider to Vagrant, allowing Vagrant to control and provision machines in EC2.
Installing these plugins could not be simpler...
$ vagrant plugin install vagrant-berkshelf
$ vagrant plugin install vagrant-omnibus
$ vagrant plugin install vagrant-aws
Run The Bootstrap
Next, make sure you are in the cf-vagrant-installer directory (which we cloned above) and run the rake command to download all the Cloud Foundry components.
$ rake host:bootstrap
The output of this rake command will look something like this...
(in /Users/phil/src/cfv2/cf-vagrant-installer)
==> Init Git submodules
Submodule 'cloud_controller_ng' (https://github.com/cloudfoundry/cloud_controller_ng.git) registered for path 'cloud_controller_ng'
Submodule 'dea_ng' (https://github.com/cloudfoundry/dea_ng.git) registered for path 'dea_ng'
Submodule 'gorouter' (https://github.com/cloudfoundry/gorouter.git) registered for path 'gorouter'
Submodule 'health_manager' (https://github.com/cloudfoundry/health_manager.git) registered for path 'health_manager'
Submodule 'uaa' (https://github.com/cloudfoundry/uaa.git) registered for path 'uaa'
Submodule 'warden' (git://github.com/cloudfoundry/warden.git) registered for path 'warden'
Cloning into 'cloud_controller_ng'...
remote: Counting objects: 13057, done.
remote: Compressing objects: 100% (7357/7357), done.
remote: Total 13057 (delta 7851), reused 10513 (delta 5512)
Receiving objects: 100% (13057/13057), 4.07 MiB | 1.34 MiB/s, done.
Resolving deltas: 100% (7851/7851), done.
Submodule path 'cloud_controller_ng': checked out '4b9208900c54181d539c9cc93519277d7c2702b5'
Submodule 'vendor/errors' (https://github.com/cloudfoundry/errors.git) registered for path 'vendor/errors'
Cloning into 'vendor/errors'...
remote: Counting objects: 58, done.
remote: Compressing objects: 100% (45/45), done.
... (truncated) ...
Set Up AWS Credentials
Next, you will need to edit the Vagrantfile
$ vim Vagrantfile
Add the following section directly above the config.vm.provider :vmware_fusion line:
config.vm.provider :aws do |aws, override|
  override.vm.box_url = "http://files.vagrantup.com/precise64.box"
  aws.access_key_id = "YOUR AWS ACCESS KEY"
  aws.secret_access_key = "YOUR AWS SECRET KEY"
  aws.keypair_name = "YOUR AWS KEYPAIR NAME"
  aws.ami = "ami-23d9a94a"
  aws.instance_type = "m1.large"
  aws.region = "us-east-1"
  aws.security_groups = ["open"]
  aws.user_data = File.read('ec2-setup.sh')
  override.ssh.username = "vagrant"
  override.ssh.private_key_path = "THE LOCAL PATH TO YOUR AWS PRIVATE KEY"
end
Then replace "YOUR AWS ACCESS KEY", "YOUR AWS SECRET KEY", "YOUR AWS KEYPAIR NAME" and "THE LOCAL PATH TO YOUR AWS PRIVATE KEY" with your own AWS credentials and key path.
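If you would rather not hard-code secrets into a file you might commit to source control, the credential lines can instead read from environment variables. This is a sketch only; the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_KEYPAIR_NAME variable names are my assumption, not something the installer requires.

```ruby
# Sketch: pull credentials from the environment instead of
# embedding them in the Vagrantfile (variable names assumed).
aws.access_key_id     = ENV['AWS_ACCESS_KEY_ID']
aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
aws.keypair_name      = ENV['AWS_KEYPAIR_NAME']
```

Export the variables in your shell before running vagrant up.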
An Open Security Group
The AWS security group used in the above example is one called "open". This is simply a security group with all ports open. You will need to create it if you do not have one already. You can do this through the AWS console.
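If you prefer the command line, something along these lines should create it. This is a sketch using the aws CLI (adjust for your tooling), and note that opening every port to the world is insecure and only acceptable for a throwaway test instance.

```shell
# Sketch: create a security group named "open" allowing all
# inbound traffic (insecure; for throwaway test instances only).
aws ec2 create-security-group \
  --group-name open \
  --description "all ports open (testing only)"
aws ec2 authorize-security-group-ingress \
  --group-name open \
  --protocol all \
  --cidr 0.0.0.0/0
```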
Create An EC2 Setup Script
Next, you'll need to create an ec2-setup.sh file directly in the cf-vagrant-installer directory (it renames the AMI's default ubuntu user to the vagrant user that Vagrant expects to ssh in as). It should look exactly like the following...
#!/bin/bash -ex
usermod -l vagrant ubuntu
groupmod -n vagrant ubuntu
usermod -d /home/vagrant -m vagrant
mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloudimg-vagrant
perl -pi -e "s/ubuntu/vagrant/g;" /etc/sudoers.d/90-cloudimg-vagrant
Build The EC2 Instance Running CFv2
Finally, run "vagrant up --provider=aws" and your instance will be built.
$ vagrant up --provider=aws
My output looked something like this... (truncated)
Bringing machine 'cf-install' up with 'aws' provider...
[Berkshelf] Updating Vagrant's berkshelf: '/Users/phil/.berkshelf/cf-install/vagrant/berkshelf-20130717-81754-5vjx63-cf-install'
[Berkshelf] Using apt (1.10.0)
[Berkshelf] Using git (2.5.2)
[Berkshelf] Using sqlite (1.0.0)
[Berkshelf] Using mysql (3.0.2)
[Berkshelf] Using postgresql (3.0.2)
[Berkshelf] Using chef-golang (1.0.1)
[Berkshelf] Using java (1.12.0)
[Berkshelf] Using ruby_build (0.8.0)
[Berkshelf] Installing rbenv (0.7.3) from git: 'git://github.com/fnichol/chef-rbenv.git' with branch: 'master' at ref: 'e10f98d5fd07bdb8d212ebf42160b65c39036b90'
[Berkshelf] Using rbenv-alias (0.0.0) at './chef/rbenv-alias'
[Berkshelf] Using rbenv-sudo (0.0.1) at './chef/rbenv-sudo'
[Berkshelf] Using cloudfoundry (0.0.0) at './chef/cloudfoundry'
[Berkshelf] Using dmg (1.1.0)
[Berkshelf] Using build-essential (1.4.0)
[Berkshelf] Using yum (2.3.0)
[Berkshelf] Using windows (1.10.0)
[Berkshelf] Using chef_handler (1.1.4)
[Berkshelf] Using runit (1.1.6)
[Berkshelf] Using openssl (1.0.2)
[cf-install] Warning! The AWS provider doesn't support any of the Vagrant high-level network configurations (`config.vm.network`). They will be silently ignored.
[cf-install] Launching an instance with the following settings...
[cf-install] -- Type: m1.large
[cf-install] -- AMI: ami-23d9a94a
[cf-install] -- Region: us-east-1
[cf-install] -- Security Groups: ["open"]
[cf-install] Waiting for instance to become "ready"...
[cf-install] Waiting for SSH to become available...
[cf-install] Machine is booted and ready for use!
[cf-install] Rsyncing folder: /Users/phil/src/cfv2/cf-vagrant-installer/ => /vagrant
[cf-install] Rsyncing folder: /Users/phil/.berkshelf/cf-install/vagrant/berkshelf-20130717-81754-5vjx63-cf-install/ => /tmp/vagrant-chef-1/chef-solo-1/cookbooks
[cf-install] Installing Chef 11.4.0 Omnibus package...
[cf-install] Running provisioner: chef_solo...
Generating chef JSON and uploading...
Running chef-solo...
stdin: is not a tty
[2013-07-17T19:43:22+00:00] INFO: *** Chef 11.4.0 ***
[2013-07-17T19:43:23+00:00] INFO: Setting the run_list to ["recipe[cloudfoundry::vagrant-provision-start]", "recipe[apt::default]", "recipe[git]", "recipe[chef-golang]", "recipe[ruby_build]", "recipe[rbenv::user]", "recipe[java::openjdk]", "recipe[sqlite]", "recipe[mysql::server]", "recipe[postgresql::server]", "recipe[rbenv-alias]", "recipe[rbenv-sudo]", "recipe[cloudfoundry::warden]", "recipe[cloudfoundry::dea]", "recipe[cloudfoundry::uaa]", "recipe[cloudfoundry::cf_bootstrap]", "recipe[cloudfoundry::vagrant-provision-end]"] from JSON
[2013-07-17T19:43:23+00:00] INFO: Run List is [recipe[cloudfoundry::vagrant-provision-start], recipe[apt::default], recipe[git], recipe[chef-golang], recipe[ruby_build], recipe[rbenv::user], recipe[java::openjdk], recipe[sqlite], recipe[mysql::server], recipe[postgresql::server], recipe[rbenv-alias], recipe[rbenv-sudo], recipe[cloudfoundry::warden], recipe[cloudfoundry::dea], recipe[cloudfoundry::uaa], recipe[cloudfoundry::cf_bootstrap], recipe[cloudfoundry::vagrant-provision-end]]
[2013-07-17T19:43:23+00:00] INFO: Run List expands to [cloudfoundry::vagrant-provision-start, apt::default, git, chef-golang, ruby_build, rbenv::user, java::openjdk, sqlite, mysql::server, postgresql::server, rbenv-alias, rbenv-sudo, cloudfoundry::warden, cloudfoundry::dea, cloudfoundry::uaa, cloudfoundry::cf_bootstrap, cloudfoundry::vagrant-provision-end]
[2013-07-17T19:43:23+00:00] INFO: Starting Chef Run for ip-10-77-71-207.ec2.internal
[2013-07-17T19:43:23+00:00] INFO: Running start handlers
[2013-07-17T19:43:23+00:00] INFO: Start handlers complete.
[2013-07-17T19:43:24+00:00] INFO: AptPreference light-weight provider already initialized -- overriding!
... (truncated) ...
[2013-07-17T19:58:50+00:00] INFO: Processing package[zip] action install (cloudfoundry::dea line 9)
[2013-07-17T19:58:55+00:00] INFO: Processing package[unzip] action install (cloudfoundry::dea line 13)
[2013-07-17T19:58:55+00:00] INFO: Processing package[maven] action install (cloudfoundry::uaa line 1)
[2013-07-17T19:59:38+00:00] INFO: Processing execute[run rake cf:bootstrap] action run (cloudfoundry::cf_bootstrap line 3)
[2013-07-17T20:05:35+00:00] INFO: execute[run rake cf:bootstrap] ran successfully
[2013-07-17T20:05:35+00:00] INFO: Processing bash[emit provision complete] action run (cloudfoundry::vagrant-provision-end line 2)
[2013-07-17T20:05:35+00:00] INFO: bash[emit provision complete] ran successfully
[2013-07-17T20:05:35+00:00] INFO: Chef Run complete in 1332.027903781 seconds
[2013-07-17T20:05:35+00:00] INFO: Running report handlers
[2013-07-17T20:05:35+00:00] INFO: Report handlers complete
We can now log into our new EC2 instance, which is running Cloud Foundry v2...
$ vagrant ssh
Note: All commands that follow are intended to be run on the EC2 instance.
Push An App
First, we must initialize the Cloud Foundry v2 command-line interface with the following command...
$ cd /vagrant
$ rake cf:init_cf_cli
Here is the output of that command...
==> Initializing cf CLI
Setting target to http://127.0.0.1:8181... OK
target: http://127.0.0.1:8181
Authenticating... OK
There are no spaces. You may want to create one with create-space.
Creating organization myorg... OK
Switching to organization myorg... OK
There are no spaces. You may want to create one with create-space.
Creating space myspace... OK
Adding you as a manager... OK
Adding you as a developer... OK
Space created! Use `cf switch-space myspace` to target it.
Switching to space myspace... OK
Target Information (where will apps be pushed):
  CF instance: http://127.0.0.1:8181 (API version: 2)
  user: admin
  target app space: myspace (org: myorg)
Now you can deploy one of the test apps. We will use a Node.js "Hello World" app...
$ cd test-apps/hello-node
$ cf push
We see the following output...
Warning: url is not a valid manifest attribute. Please remove this attribute from your manifest to get rid of this warning
Using manifest file manifest.yml
Creating hello-node... OK
1: hello-node
2: none
Subdomain> hello-node
1: vcap.me
2: none
Domain> 1
Creating route hello-node.vcap.me... OK
Binding hello-node.vcap.me to hello-node... OK
Uploading hello-node... OK
Preparing to start hello-node... OK
Checking status of app 'hello-node'...........................
0 of 1 instances running (1 starting)
0 of 1 instances running (1 starting)
1 of 1 instances running (1 running)
Push successful! App 'hello-node' available at http://hello-node.vcap.me
Cloud Foundry v2 is running on localhost on our EC2 instance, so our app is not accessible from our web browser, but we can check that the app exists using curl from the EC2 instance (vcap.me hostnames resolve to 127.0.0.1).
$ curl http://hello-node.vcap.me/
Here is what is output by curl...
Hello from Cloud Foundry
Delete The App
To delete the app, you can use cf delete
$ cf delete
The following output is seen
Warning: url is not a valid manifest attribute. Please remove this attribute from your manifest to get rid of this warning
Using manifest file manifest.yml
Really delete hello-node?> y
Deleting hello-node... OK
From the notes I was given...
Now, to expose apps externally, it gets trickier. First, you'll need to provision an elastic IP in the AWS console and attach it to the EC2 instance that's running the cf v2 install. Then, you'll need to set up a wildcard DNS record to point to that IP (*.domain and domain should point to that IP). xip.io might work here, but I'm not familiar enough with it to know for sure.
xip.io is actually perfect for this. All I need is my external IP, which was 18.104.22.168, and I append ".xip.io", which gives me "22.214.171.124.xip.io" as well as wildcard "*.126.96.36.199.xip.io" for the Cloud Foundry API and any apps I deploy. This is a zero-configuration service. The IP that you want to resolve to is included in the hostname you create, and the DNS service simply returns that IP. This means you can have a valid, globally resolvable DNS hostname instantly.
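The scheme is easy to see if you pull the embedded IP back out of a hostname. This little helper is purely illustrative (it mimics locally what xip.io's DNS does remotely):

```shell
# Illustrative only: xip.io resolves <anything>.<ip>.xip.io to the
# embedded IP. This extracts that IP from such a hostname locally,
# just to show the zero-configuration naming scheme.
xip_embedded_ip() {
  echo "$1" | grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}\.xip\.io$' | sed 's/\.xip\.io$//'
}

xip_embedded_ip "hello-node.10.0.0.1.xip.io"   # prints 10.0.0.1
```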
I can also get a simpler hostname by checking the DNS record of this hostname, which is actually just a CNAME.
$ host 188.8.131.52.xip.io
184.108.40.206.xip.io is an alias for hj8raq.xip.io.
hj8raq.xip.io has address 220.127.116.11
Host hj8raq.xip.io not found: 3(NXDOMAIN)
Host hj8raq.xip.io not found: 3(NXDOMAIN)
...so I can use hj8raq.xip.io instead, since it is shorter and I just want to use it temporarily.

Updating More Config
Since we now have an external domain name, not just localhost, we need to update some configuration files within the custom_config_files directory.
$ cd /vagrant/custom_config_files
Assuming you are running under the domain "yourdomain" (or "hj8raq.xip.io" in my case), you should edit the cloud_controller.yml as follows...
$ (cd cloud_controller_ng; vim cloud_controller.yml)
- change external_domain to api.yourdomain
- change system_domain to yourdomain
- change app_domains to yourdomain
- change uaa:url to http://yourdomain:8080/uaa
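Taken together, the edits above should leave cloud_controller.yml with entries along these lines. This is a sketch only; the exact key nesting may differ in your checkout, and app_domains is assumed to be a list:

```yaml
# Illustrative sketch; exact nesting may differ in your checkout.
external_domain: api.yourdomain
system_domain: yourdomain
app_domains:
  - yourdomain
uaa:
  url: http://yourdomain:8080/uaa
```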
Next, edit the DEA configuration.
$ (cd dea_ng; vim dea.yml)
- change domain to yourdomain
And finally, the configuration of the Health Manager...
$ (cd health_manager; vim health_manager.yml)
- change bulk_api:host to http://api.yourdomain:8181
There was a small bug in my AWS deployment that may since have been fixed. It was related to an incompatibility in the JSON exchanged between the Cloud Controller and the Router when registering the API endpoint with the router. Here's the fix...
$ cd /vagrant/cloud_controller_ng/lib/cloud_controller
$ vim message_bus.rb
and change the line

:uris => config[:external_domain],

to

:uris => [config[:external_domain]],
This will make :uris an array, not a string. It would probably be better to fix this in the gorouter, but this is quicker for now.

Restart CC DB
Now we need to reset the Cloud Controller database.
$ cd /vagrant/
$ rake cf:bootstrap
Finally, reboot the machine.
$ sudo reboot
When the machine comes back up, we can ssh back into it
$ vagrant ssh
and run the ./start.sh script to start the Cloud Foundry components.
$ cd /vagrant
$ ./start.sh
Now, Cloud Foundry v2 should be running with your externally accessible endpoint.
What Comes Next?
The great thing about Cloud Foundry v2 is that the basic architecture hasn't changed from v1. That gives ActiveState the luxury of integrating one Cloud Foundry v2 component into Stackato at a time and ensuring our enterprise customers are not negatively impacted by a sudden wholesale change. (We're already using the new Health Manager, for example.) Over the next few months, you can expect to see some exciting new features and capabilities available in Stackato that build on top of Cloud Foundry v2's framework. I'll post more details as I have them.