Deployment

This document will help you deploy your own instances of http://daisy.ubuntu.com and http://errors.ubuntu.com in the cloud.

Setting up Juju

First you'll need to create an environment for Juju to bootstrap to. Follow the directions here to get a basic environment going. I'd suggest doing something akin to the following to bootstrap the initial node:

source ~/.canonistack/novarc
juju bootstrap -e canonistackone --constraints "instance-type=m1.medium"

This ensures that the Juju bootstrap node doesn't take ages to perform basic tasks because it's constantly going into swap.

You should end up with something similar to the following in your ~/.juju/environments.yaml (the name you pass to juju bootstrap -e must match one of the keys under environments):

environments:
  canonistack:
    type: ec2
    control-bucket: juju-replace-me-with-your-bucket
    admin-secret: <secret>
    ec2-uri: https://ec2-lcy02.canonistack.canonical.com:443/services/Cloud
    s3-uri: http://s3-lcy02.canonistack.canonical.com:3333
    default-image-id: ami-00000097
    access-key: <access key>
    secret-key: <secret key>
    default-series: precise
    ssl-hostname-verification: false
    juju-origin: ppa
    authorized-keys-path: ~/.ssh/authorized_keys

Deploying the error tracker

Now you're ready to checkout and deploy the individual charms that make up daisy.ubuntu.com and errors.ubuntu.com, which is done by a single script:

bzr branch lp:error-tracker-deployment
source ~/.canonistack/novarc
error-tracker-deployment/deploy

Follow along with juju status.
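
For example, you can poll the environment until everything settles:

watch -n 30 juju status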

Once all the nodes and relations are out of the pending state, you should be able to start throwing crashes at it.

Using the Juju error tracker

The following command sets up various SSH tunnels to the Juju instances of daisy and errors, redirects the local whoopsie daemon to report crashes against the Juju daisy instance instead of errors.ubuntu.com, and shows the local whoopsie and remote daisy-retracer logs until you press Control-C:

error-tracker-deployment/run-juju-daisy

This script contains a commented-out alternative to the ssh command for daisy which shows the Apache logs instead. Enable it, and disable the default one, if you want to debug problems with uploading the .crash files.

Generating and uploading crashes

You can generate a simple crash report with, for example:

bash -c 'kill -SEGV $$'

and elect to report the crash in the Apport window that pops up.

Now open a browser to http://localhost:8081. You should have one problem in the most common problems table.

For a more systematic and regular integration test, you can use an automatically generated set of .crash files for various application classes (GTK, Qt, CLI, D-Bus, Python crash) from the test crashes recipe, which currently builds the crashes for i386, amd64, and armhf on precise, quantal, and raring. You can download the current ones with

error-tracker-deployment/fetch-test-crashes

which will download them into ./test-crashes/<release>/<architecture>/*.crash. Then you can use the submit-crash script to feed them individually or as a whole into whoopsie:

error-tracker-deployment/submit-crash test-crashes  # uploads all of them
error-tracker-deployment/submit-crash test-crashes/raring/amd64
error-tracker-deployment/submit-crash test-crashes/precise/armhf/_usr_bin_apport-*.crash

Debugging tricks

You can purge the whole Cassandra database with

~/bzr/error-tracker-deployment/purge-db

Call it with --force to do this without confirmation.
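
For example, to wipe the database non-interactively:

~/bzr/error-tracker-deployment/purge-db --force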

You might want to watch out for exceptions thrown by daisy or errors themselves:

juju ssh daisy/0
watch ls /srv/local-oopses-whoopsie
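
It can also help to tail the web server logs on the errors unit; the path below assumes the charm uses Apache's stock log location:

juju ssh errors/0
tail -f /var/log/apache2/error.log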

If you want to use the Launchpad functionality in errors, you'll need to set up Launchpad OAuth tokens and put them in /var/www/daisy/local_config.py on your errors server. Information regarding setting up OAuth tokens can be found here.
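
As a rough sketch, you can generate tokens with launchpadlib's standard request-token flow; the consumer name below is arbitrary, and the exact variable names daisy expects in local_config.py are not shown here:

python - <<'EOF'
# Sketch: generate Launchpad OAuth tokens with launchpadlib.
from launchpadlib.credentials import Credentials

credentials = Credentials('errors-local')  # arbitrary consumer name
# get_request_token() returns an authorization URL to open in a browser.
url = credentials.get_request_token(web_root='production')
print 'Authorize this token: %s' % url
raw_input('Press Enter once you have approved the request... ')
credentials.exchange_request_token_for_access_token(web_root='production')
print 'consumer key:  %s' % credentials.consumer.key
print 'access token:  %s' % credentials.access_token.key
print 'access secret: %s' % credentials.access_token.secret
EOF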

Prodstack

We're moving the production deployment of the error tracker to prodstack. Below are instructions on setting this up on Amazon EC2 and S3.

bzr branch lp:errors
(cd errors; tar -czvf ../errors.tgz .)

bzr branch lp:~ev/daisy/prodstack-prep daisy
(cd daisy; tar -czvf ../daisy.tgz .)

bzr branch lp:apport
cd apport
LATEST_TAG=`bzr tags | sort -nrk 2 | head -n1 | awk '{print $1}'`
bzr uncommit --force -r tag:$LATEST_TAG
bzr revert --no-backup
bzr clean-tree --ignored --unknown --force
rm -f apport/packaging_impl.py
ln -s ../backends/packaging-apt-dpkg.py apport/packaging_impl.py
tar -czvf ../apport.tgz .
cd -

bzr branch lp:~ev/+junk/prodstack-prep prodstack-prep
bzr branch lp:~ev/charms/precise/daisy/prodstack-prep prodstack-prep/precise/daisy
cp daisy.tgz prodstack-prep/precise/daisy/files/

bzr branch lp:~ev/charms/precise/daisy-retracer/prodstack-prep prodstack-prep/precise/daisy-retracer
cp daisy.tgz prodstack-prep/precise/daisy-retracer/files/
cp apport.tgz prodstack-prep/precise/daisy-retracer/files/

bzr branch lp:~ev/charms/precise/errors/prodstack-prep prodstack-prep/precise/errors
cp daisy.tgz prodstack-prep/precise/errors/files/
cp errors.tgz prodstack-prep/precise/errors/files/

bzr branch lp:~canonical-losas/canonical-marshal/apache2/ prodstack-prep/precise/apache2
bzr branch lp:~canonical-losas/canonical-marshal/cassandra/ prodstack-prep/precise/cassandra
bzr branch lp:~canonical-losas/canonical-marshal/gunicorn/ prodstack-prep/precise/gunicorn
bzr branch lp:~canonical-losas/canonical-marshal/haproxy/ prodstack-prep/precise/haproxy
bzr branch lp:~ev/charms/precise/postgresql/prodstack-prep/ prodstack-prep/precise/postgresql
bzr branch lp:~canonical-losas/canonical-marshal/rabbitmq-server/ prodstack-prep/precise/rabbitmq-server

cd prodstack-prep
EC2_ACCESS_KEY="accesskey" EC2_SECRET_KEY="secretkey" \
  EC2_HOST="s3.amazonaws.com" JUJU_ENV=aws CONFIG_DIR=. JUJU_REPOSITORY=. \
  ./deploy-error-tracker deploy

Running Local Errors Connected to Production

It can be helpful to test the Errors web site against the production database; doing so, however, requires read-only access to the database. To set this up, you first need to create a system running errors. This can be done by following the steps in the error-tracker-deployment charm for errors.

After doing this, some changes need to be made. In /var/www/errors/views.py, comment out the following lines:

@login_required
@user_passes_test

These appear twice in that file. Additionally, you'll need to set up OAuth access so you can query the Launchpad API. Finally, you'll need to modify /var/www/errors/local_config.py: set cassandra_hosts to an SSH tunnel to the Cassandra database, cassandra_username to your username on the database, and cassandra_password to your password.
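
As a sketch, assuming the production Cassandra node is reachable through a bastion host (both hostnames below are placeholders), you could open a tunnel to Cassandra's Thrift port and point the config at its local end:

ssh -N -L 9160:cassandra.internal:9160 your-user@bastion.example.com

Then set cassandra_hosts in /var/www/errors/local_config.py to something like ['localhost:9160'].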

Now it can be helpful to use a Python debugger to determine the cause of an issue in your code. One way to set up a debugger is to install winpdb and use rpdb2. To set up debugging, you would add a line like:

import rpdb2; rpdb2.start_embedded_debugger('password')

After restarting Apache and making a request of the web server, you'll be able to use rpdb2 to attach to your code where the import appears.
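
For example, assuming a stock Apache 2 service and that the site answers on localhost (adjust the URL to your instance):

sudo service apache2 restart
curl http://localhost/

Then, in another terminal, start the rpdb2 console: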

rpdb2
password "password"
attach

After the attach command you'll see a list of running rpdb2 debuggees. You can attach to yours with attach followed by the process number. Now you can use the Python debugger to determine where you went awry.
