Deployment
This document will help you deploy your own instances of http://daisy.ubuntu.com and http://errors.ubuntu.com in the cloud.
Setting up Juju
First you'll need to create an environment for Juju to bootstrap to. Follow the directions here to get a basic environment going. I'd suggest doing something akin to the following to bootstrap the initial node:
source ~/.canonistack/novarc
juju bootstrap -e canonistackone --constraints "instance-type=m1.medium"
This will ensure that the juju bootstrap node doesn't take ages to perform basic tasks because it's constantly going into swap.
You should end up with something similar to the following in your ~/.juju/environments.yaml:
environments:
  canonistack:
    type: ec2
    control-bucket: juju-replace-me-with-your-bucket
    admin-secret: <secret>
    ec2-uri: https://ec2-lcy02.canonistack.canonical.com:443/services/Cloud
    s3-uri: http://s3-lcy02.canonistack.canonical.com:3333
    default-image-id: ami-00000097
    access-key: <access key>
    secret-key: <secret key>
    default-series: precise
    ssl-hostname-verification: false
    juju-origin: ppa
    authorized-keys-path: ~/.ssh/authorized_keys
Deploying the error tracker
Now you're ready to check out and deploy the individual charms that make up daisy.ubuntu.com and errors.ubuntu.com, which is done by a single script:
bzr branch lp:error-tracker-deployment
source ~/.canonistack/novarc
error-tracker-deployment/deploy
Follow along with juju status.
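Deployment takes a while. If you'd rather script the wait than watch, here is a minimal sketch, assuming unready units show up as "pending" somewhere in the juju status output:

```shell
# Succeed only when the given `juju status` output mentions no pending units.
# The function takes the captured status text as its single argument.
all_units_ready() {
    ! printf '%s\n' "$1" | grep -q 'pending'
}

# Against a live environment (assumes juju is configured):
#   until all_units_ready "$(juju status)"; do sleep 10; done
```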
Once all the nodes and relations are out of the pending state, you should be able to start throwing crashes at it.
Using the Juju error tracker
The following command sets up various SSH tunnels to the Juju instances of daisy and errors, redirects the local whoopsie daemon to report crashes against the Juju daisy instance instead of errors.ubuntu.com, and shows the local whoopsie and remote daisy-retracer logs until you press Control-C:
error-tracker-deployment/run-juju-daisy
The script contains a commented-out alternative to the ssh command for daisy that shows the Apache logs instead. Enable it, and disable the default command below it, if you want to debug problems with uploading the .crash files.
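For reference, the tunnels the script sets up can be approximated by hand. A sketch, assuming juju status prints a public-address: line for each unit and that local ports 8080 and 8081 are free:

```shell
# Pull the public-address field out of captured `juju status` output.
unit_address() {
    printf '%s\n' "$1" | grep public-address | sed 's,.*public-address: \(.*\)$,\1,'
}

# Against a live environment:
#   DAISY_ADDRESS="$(unit_address "$(juju status daisy/0)")"
#   ssh -N -L 8080:"$DAISY_ADDRESS":80 "$DAISY_ADDRESS" &
#   ERRORS_ADDRESS="$(unit_address "$(juju status errors/0)")"
#   ssh -N -L 8081:"$ERRORS_ADDRESS":80 "$ERRORS_ADDRESS" &
```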
Generating and uploading crashes
You can generate a simple crash report with, e.g.,
bash -c 'kill -SEGV $$'
and elect to report the crash in the Apport window that pops up.
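The one-liner above segfaults the shell itself. If you'd rather segfault a separate, longer-lived process (a running GUI application such as gedit also works), here is a minimal sketch; whether Apport picks it up depends on your core-dump settings:

```shell
# Start a disposable victim process and segfault it from outside.
sleep 30 &
PID=$!
kill -SEGV "$PID"
wait "$PID" || echo "victim died with status $?"   # 139 = 128 + SIGSEGV (11)
```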
Now open a browser to http://localhost:8081. You should have one problem in the most common problems table.
For a more systematic and regular integration test you can use an automatically generated set of .crash files for various application classes (GTK, Qt, CLI, D-BUS, Python crash) from the test crashes recipe, which currently builds the crashes for i386, amd64, and armhf for precise, quantal, and raring. You can download the current ones with
error-tracker-deployment/fetch-test-crashes
which will download them into ./test-crashes/<release>/<architecture>/*.crash. Then you can use the submit-crash script to feed them individually or as a whole into whoopsie:
error-tracker-deployment/submit-crash test-crashes  # uploads all of them
error-tracker-deployment/submit-crash test-crashes/raring/amd64
error-tracker-deployment/submit-crash test-crashes/precise/armhf/_usr_bin_apport-*.crash
Debugging tricks
You can purge the whole Cassandra database with
~/bzr/error-tracker-deployment/purge-db
Call it with --force to do this without confirmation.
You might want to watch out for exceptions thrown by daisy or errors themselves:
juju ssh daisy/0
watch ls /srv/local-oopses-whoopsie
If you want to use the Launchpad functionality in errors, you'll need to set up Launchpad OAuth tokens and put them in /var/www/daisy/local_config.py on your errors server. Information regarding setting up OAuth tokens can be found here.
Prodstack
We're moving the production deployment of the error tracker to prodstack. Below are instructions on setting this up on Amazon EC2 and S3.
bzr branch lp:errors
(cd errors; tar --exclude=.bzr -czvf ../errors.tgz .)
bzr branch lp:daisy
(cd daisy; tar --exclude=.bzr -czvf ../daisy.tgz .)
bzr branch lp:apport
cd apport
LATEST_TAG=`bzr tags | sort -nrk 2 | head -n1 | awk '{print $1}'`
bzr uncommit --force -r tag:$LATEST_TAG
bzr revert --no-backup
bzr clean-tree --ignored --unknown --force
rm -f apport/packaging_impl.py
ln -s ../backends/packaging-apt-dpkg.py apport/packaging_impl.py
tar --exclude=.bzr -czvf ../apport.tgz .
cd -
bzr branch lp:error-tracker-deployment
cd error-tracker-deployment
mkdir -p charms/daisy/precise
bzr branch lp:~daisy-pluckers/charms/precise/daisy/trunk/ charms/daisy/precise/daisy
cp ../daisy.tgz charms/daisy/precise/daisy/files/
bzr branch lp:~daisy-pluckers/charms/precise/daisy-retracer/trunk/ charms/daisy/precise/daisy-retracer
cp ../daisy.tgz prodstack-prep/precise/daisy-retracer/files/
cp ../apport.tgz prodstack-prep/precise/daisy-retracer/files/
bzr branch lp:~daisy-pluckers/charms/precise/errors/trunk/ charms/daisy/precise/errors
cp ../daisy.tgz charms/daisy/precise/errors/files/
cp ../errors.tgz charms/daisy/precise/errors/files/
bzr branch lp:~canonical-losas/canonical-marshal/apache2/ charms/daisy/precise/apache2
bzr branch lp:~canonical-losas/canonical-marshal/cassandra/ charms/daisy/precise/cassandra
bzr branch lp:~canonical-losas/canonical-marshal/gunicorn/ charms/daisy/precise/gunicorn
bzr branch lp:~canonical-losas/canonical-marshal/haproxy/ charms/daisy/precise/haproxy
bzr branch lp:~canonical-losas/canonical-marshal/postgresql/ charms/daisy/precise/postgresql
bzr branch lp:~canonical-losas/canonical-marshal/rabbitmq-server/ charms/daisy/precise/rabbitmq-server
source ~/.canonistack/lcy_02
USER=stagingstack SRV_ROOT=. SWIFT_BUCKET=core_files CONFIG_DIR=. ./scripts/deploy-error-tracker deploy
# You can run the instance reaper in another window:
source ~/.canonistack/lcy_02
./scripts/instance-reaper
Running Local Errors Connected to Production
It can be helpful to test the Errors web site against the production database; however, doing this requires read-only access to the database. To set this up, first create a system running errors by following the steps in the error-tracker-deployment charm for errors.
After doing that, a few changes need to be made. In /var/www/errors/views.py, comment out the following line:
@can_see_stacktraces
That decorator appears twice in the file. Additionally, you'll need to set up OAuth access so you can query the Launchpad API. Finally, you'll need to modify /var/www/errors/local_config.py: set cassandra_hosts to an SSH tunnel to the Cassandra database, cassandra_username to your username on the database, and cassandra_password to your password.
It can also be helpful to use a Python debugger to determine the cause of an issue in your code. One way to set up a debugger is to install winpdb and use rpdb2. To set up debugging, add a line like:
import rpdb2; rpdb2.start_embedded_debugger('password')
After restarting Apache and making a request of the web server, you'll be able to use rpdb2 to attach to your code where the import appears.
rpdb2 password "password" attach
After the attach command you'll see a list of running rpdb2 processes. Attach to yours with attach and the process number. Now you can use the Python debugger to determine where you went awry.
ErrorTracker/Deployment (last edited 2014-05-26 11:54:50 by brian-murray)