by Ken Manheimer last modified Aug 27, 2011 01:00 PM

getting plone + blobs in a working standalone instance

comment 8 in andi zeidler's tracker issue gives some commands for obtaining two different buildout recipes for the current plone/blob integration. the versions differ in how they obtain some of the ingredients, but plone.app.blob requires the plone-3.x buildout. [difference between plone-3.x and ploneout buildouts]

  • do the plone.app.blob plone-3.x checkout into a directory named Plone3Cluster, inside a containing directory named "Servers". we create Servers in /usr/local, but you can situate it anywhere - just adjust the paths i mention below:

    sudo mkdir /usr/local/Servers
    sudo chown $USER /usr/local/Servers
    cd /usr/local/Servers
    svn co http://svn.plone.org/svn/plone/plone.app.blob/buildouts/plone-3.x Plone3Cluster
    

    (in the putting the "cluster" in "Plone3Cluster" section, below, we'll cover actually incorporating zeo.)

  • change to the Plone3Cluster directory:

    cd Plone3Cluster
    
  • initialize and run the build:

    python2.4 bootstrap.py
    ./bin/buildout -v
    

    some incidental details:

    • the bootstrap.py command will be quiet and over quickly; the buildout command will spew a bunch of stuff and take a while.

    • you need to use python 2.4.3 or better

    • you may need to add PIL to your python2.4. when it's there, the command python2.4 -c "import Image" should execute without complaint. for the recent PIL version 1.1.6, i did:

      pushd some-PIL-build-directory
      wget http://effbot.org/downloads/Imaging-1.1.6.tar.gz
      tar -xzf Imaging-1.1.6.tar.gz
      cd Imaging-1.1.6
      sudo python2.4 setup.py install
      popd # to return to where we were
      

      then, python2.4 -c "import Image" should execute without complaint.

  • optionally, run the plain (non-zeo) instance, so we can confirm in the next step that blobs work.

    (this is not necessary, but can uncover problems before introducing the additional layers of the cluster.)

    the built instance is located in parts/instance. the default configuration arranges for the storage directories to be within the var subdirectory of the top-level build directory. regular filestorage is in var/filestorage/Data.fs and blobs in var/blobstorage.

    here are the specific steps:

    • to set the admin user's account password to something different than the buildout default (admin:admin):

      ./bin/instance adduser admin <password>
      
    • to configure the instance to use different ports than the default (8080):

      in parts/instance/etc/zope.conf set the config variable port-base to a value that will be added to the default 8080. for instance, i use 11000 to situate zope's http port at 19080 and ftp port at 19021:

      port-base 11000
      

      (note that changes directly to parts/instance/etc/zope.conf will not be retained when you redo the ./bin/buildout command. we describe more lasting configuration in the putting the "cluster" in "Plone3Cluster" section.)
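      one way to make the setting survive buildout reruns - a sketch, and an assumption on my part rather than part of the original recipes - is to pass the directive through the instance part using the zope2instance recipe's zope-conf-additional option:

      ```
      [instance]
      recipe = plone.recipe.zope2instance
      # appended verbatim to the generated zope.conf on each buildout run
      zope-conf-additional =
          port-base 11000
      ```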

    • if you will be starting this as a system application, using the root account, you need to set the effective-user variable. this is standard zope configuration, well documented elsewhere. one gotcha, though - be sure that the var directories are writable by the effective user. see the effective-user note, below, for details.

    • start the instance:

      ./bin/instance fg
      

      it'll chug for a bit, tell you that it's starting (and on what port), spew a bunch of stuff, and eventually tell you INFO Ready to handle requests.

  • here are the steps to confirm blobs in a running instance:
    • once the site is started, visit the zope management interface with your web browser at http://localhost:19080/manage (substituting your host and port) and log in as your admin user.
    • add a Plone Site, selecting the plone.app.blob: ATFile replacement entry among the Extension Profiles in your settings. (you can instead use the plain plone.app.blob profile. rather than replacing the portal File object with blobs, it leaves File objects as regular ZODB residents and instead adds an additional blob object that is saved on the filesystem, outside of but coupled with the ZODB. the ATFile replacement profile makes the File object's content reside in the filesystem, and adds no extra blob object.)
    • before adding a blob, check the contents of the site's blobstorage dir, ./var/blobstorage. it should contain only a tmp directory, before there are any blobs. if not empty, note what's there so you can tell when something is added.
    • visit and view your new plone site, and use the Add new... content menu to add a File, choosing some arbitrary file to upload from your computer. (any file will do - when it comes to blobs, bits is bits.)
    • check the blobstorage dir again, to see that there is a hex-numbered directory there, for your new blob object.
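    the before/after check is easy to do from the shell at the top of the buildout - a sketch (the mkdir line just stands in for what the build normally creates for you):

    ```shell
    # before any blobs: blobstorage holds only a 'tmp' directory
    mkdir -p var/blobstorage/tmp   # normally created for you by the build
    ls var/blobstorage             # shows only: tmp
    # after adding a File through the plone UI, re-run the listing -
    # a new hex-numbered directory should have appeared alongside tmp
    ```
    
    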

one important feature of the blob storage - the allocated directory and file for each blob will remain around after the blob content object has been deleted, until it's removed in the scope of a database pack. this is consistent with transactional database behavior, and enables transactional atomicity and consistency, user-level undo, and so on.

(as of February 2009, and probably earlier) the control panel Database Size page no longer includes the space used for blobstorage in the tally.

putting the "cluster" in "Plone3Cluster"

with some adjustments to the buildout recipes, we can arrange to have buildout create the elements of a zeo cluster server and a pair of clients along with the plone.app.blob provisions.

i have custom versions of buildout.cfg and devel.cfg which extend the zeo-based template described in the plone.app.blob installation section (including the addition of a site.cfg for cluster-specific settings).

  • about buildout.cfg:

    this is the top-level recipe for building the cluster - just those details that someone actually building might want to adjust.

    my zeo-cluster buildout.cfg differs from the one described in the plone.app.blob installation section primarily in parameterizing the cluster-specific settings to use those in site.cfg. this way, different clusters on the same system - eg, the production cluster versus a development/tinkering one - can use the exact same buildout.cfg, and just have distinct versions of site.cfg with adjustments for zeo server and client ports, add-on products, and so on.
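    the shape of that parameterization might look like the following sketch - the section and option names here are illustrative assumptions, not the actual contents of my site.cfg:

    ```
    # site.cfg - the cluster-specific knobs, one copy per cluster
    [site]
    zeo-address = 19100
    effective-user = plone

    # buildout.cfg then refers to these via buildout's ${section:option}
    # substitution, e.g. in a zeoserver or client part:
    #   zeo-address = ${site:zeo-address}
    #   effective-user = ${site:effective-user}
    ```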

    additional notes:

    • see the cheeseshop plone.recipe.zope2instance and plone.recipe.zope2zeoserver entries for details about the many configuration options for these instances. here are some that i set for both kinds of recipes:

      effective-user = plone

      when running as a system service and started by the root account, zope requires us to run without dangerous privileges.

      if you set this, you must be sure that the various writable directories allow access to the plone user - see the effective-user note, below.

      zeo-address = 19100

      so i don't compete with other installations. the same zeo-address must be used in the server and each of the clients. they default to 8100 when unset.

      using my buildout.cfg, you can just adjust the zeo-address in the site.cfg and it will be set identically in each of the cluster elements.

      zodb-cache-size = 100MB

      it defaults to something like 20MB, which was large years ago but is tiny nowadays. 500MB or 1000MB may make sense for substantial production sites.

      for just the zope2instance recipe (the plain instance and zeo clients), the following options are also available and interesting:

      user = admin:changeme

      ... substituting a distinct password of your own choosing for changeme. in general, don't use passwords shipped with software - or the ones advertised on web pages like this, for that matter...

      port-base = 100

      offset for all service ports used by the instance, useful to choose a clear segment of the port space for the http, ftp, webdav, and any other protocols served by the instance. for example, a port-base of 100 is added to the default http 8080 address, resulting in http serving on 8180.

      you must use a different port-base in each of the clients.

      zeo-client-cache-size = 100MB

      it defaults to something like 30MB.

    • the parts section and each of the instance sections have commented-out lines for optional useful developer products, like the zope profiler and various debuggers. you can activate any of those lines in any of the instance sections by removing the leading comment # hash, but if you do be sure that the corresponding product name in the parts section is un-commented, as well, or the build will fail.
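    pulling the options above together, a zeo client part might look something like this sketch (option names per the cheeseshop recipe entries mentioned above; values illustrative, not a verbatim excerpt of my buildout):

    ```
    [client1]
    recipe = plone.recipe.zope2instance
    zeo-client = true
    zeo-address = 19100
    effective-user = plone
    user = admin:changeme
    port-base = 100
    zodb-cache-size = 100MB
    zeo-client-cache-size = 100MB
    ```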

  • if you will be running plone as a system application, using the root account, you will need to set the effective-user variable. this is standard zope configuration, well documented elsewhere. one gotcha, though - be sure that the var directories are writable by the effective user, preferably via their group and with the setgid bit set. this includes the buildout-wide var directory as well as the ones in the server and client parts. eg, if your effective-user is plone:

    sudo chown plone /usr/local/Servers/Plone3Cluster/var
    
  • rerun the buildout, to revise the instance's configuration according to our buildout changes:

    ./bin/buildout -v
    
  • add some enhanced startup scripts:

    the automatically built scripts in the cluster bin directory - zeo, client1, and client2 - provide the means to start the cluster elements individually. the following cluster control scripts provide handy combined control and monitoring of the elements - particularly useful, eg, for cluster startup on system boot.

    these scripts were adapted from the plone unified installer cluster control scripts, with some enhancements, and one style change (which you may or may not wish to adopt). situate them in your buildout's bin directory, and make them executable with chmod +x.

    startcluster.sh

    starts the zeo server and client1 - it doesn't start client2, leaving that available for debugging, etc. it's easily controlled at will using one of the other scripts, clientXctl.

    the script takes an optional 'restart' argument, to perform restart instead of start commands. this is for use from restartcluster.sh, so that the two inherently stay symmetric.

    restartcluster.sh

    restarts whatever startcluster.sh starts. invokes startcluster.sh with an optional argument, to stay symmetric without duplicating code.

    shutdowncluster.sh

    stop the zeo server and any clients that are running.

    (i've changed the shutdown order so clients are stopped before the zeo server, which makes more sense than the reverse.)

    clusterstatus.sh

    report the operational status of the cluster instances.

    clientXctl.sh

    launch zopectl for whatever client is specified by the first argument - eg, ./bin/clientXctl client2. additional arguments are passed to zopectl.
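    for concreteness, here is a minimal sketch of what a startcluster.sh along these lines can look like - not the actual script; the zeo and client1 script names are the buildout defaults, so adjust to your build:

    ```shell
    #!/bin/sh
    # start (or, with the optional 'restart' argument, restart) the zeo
    # server and client1; client2 is left alone for debugging use.
    CMD=${1:-start}            # restartcluster.sh invokes us with 'restart'
    for part in ./bin/zeo ./bin/client1; do
        [ -x "$part" ] && "$part" "$CMD"
    done
    exit 0
    ```
    
    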

finally, we can test the cluster as we did with the standalone instance. instead of using ./bin/instance, issue the command ./bin/startcluster.sh to start it all. the log files reside in ./var/log, and are named for the operating element - zeo.log, client1.log, and client1-Z2.log. revisit the earlier steps where we checked the blobstorage directory, to verify creation of the blob.

incorporating a third-party application that has its own build

while distributing a product as a buildout is a very handy and reliable way to deliver the product, it can get in the way of combining that product with other features delivered in their own buildout. i believe this is a problem in the buildout system, rather than a mistake on the part of those using it to deliver products - buildout should make it easy to combine buildouts delivering independent features of the same underlying system, like plone. until it does, it's valuable to have some examples for combining a third party product not designed to be included in other buildouts.

quills is a good example, because it is one of the few or only substantial weblog products that worked with plone 3 [at the time of writing], and because it has several components, requiring more than trivial buildout changes.

  • the primary changes are in devel.cfg, and entail appending these sections to it:

    [quills]
    recipe = plone.recipe.distros
    urls =
        http://plone.org/products/quills/releases/1.6/quills-1-6-beta1.tgz
    
    [quills-settings]
    develop =
        parts/quills/Quills-1.6-beta1/src/quills.core
        parts/quills/Quills-1.6-beta1/src/quills.app
        parts/quills/Quills-1.6-beta1/src/quills.trackback
        parts/quills/Quills-1.6-beta1/src/quills.remoteblogging
    
    extra-paths =
        parts/quills/Quills-1.6-beta1/lib/python
    products =
        parts/quills/Quills-1.6-beta1/Products
    zcml =
        quills.core
        quills.app
        quills.trackback
        quills.remoteblogging
    

    the [quills] section is a recipe that identifies a gzipped tar archive file of a specific quills distribution to be used as a part in our buildout. the [quills-settings] section designates settings to be used for hooking the quills machinery into your built plone.
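    the hooking-in itself typically happens via buildout's ${section:option} substitution - here is a sketch of how the main recipe's sections might reference these settings (the exact section names in your buildout may differ):

    ```
    [buildout]
    develop += ${quills-settings:develop}

    [instance]
    extra-paths += ${quills-settings:extra-paths}
    products += ${quills-settings:products}
    zcml += ${quills-settings:zcml}
    ```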

once the devel.cfg and buildout.cfg changes are made, you can rerun buildout (./bin/buildout -v). now you should be able to start plone, activate Quills 1.6 via the site setup add-on products activity, and then add quills blogs to your site.

Footnotes

[tracked-issues] many of the details are in the plone blobs trac ticket. as of feb 12, 2008 there is also an issue tracking the work to make standard file content types use ZODB BLOB support. in addition, this account has helped identify and spur some fixes.
[buildout-challenges]

without intense buildout expertise, trying to combine two buildout-defined plone distributions can be too hard. this is unfortunate, since buildout is a great way to deliver a system configuration. a plone.org tutorial or other guide to combining buildout recipes, if such a thing is possible, might make things easier here.

in general, the nuances of buildouts are intensely diverse and too often obscure. it has so many complex ingredients - zc.buildout, eggs, zcml, svn, setuptools, easy_install, python code, pypi, etc - and that is compounded by the layering of recipe upon recipe, and on top of all that an undocumented syntax. then it's growing and changing as its pieces grow and change.

this would be less painful if there were some canonical description, growing along with the system - a reference or even a guide - but i haven't found such. instead, examples in the pypi recipes use something like ipython to describe the behavior of buildout snippets from the python prompt. have i just missed the right place to look, or is there no such thing?

(in the course of my flailing i discovered jim fulton's user-group buildout tutorial on the grok site, and it seems to tell the central story behind buildout. i'm thankful for that - but still feel the lack of a thorough reference.)

[difference between plone-3.x and ploneout buildouts]

the plone-3.x versions obtain plone and other ingredients at release points rather than as progressive trunk checkouts. this means that they will be at known states, though not incrementally updatable to the latest checkins in their lineage. it turns out that this provides a more reliable target for consistent compatibility with the plone.app.blob addon, and is what has become standard for use with plone.app.blob.

the alternative, ploneout-based version obtains more plone ingredients as checkouts - of the most recent lineage, and incrementally updatable if/when desired. while more finely incremental, plone.app.blob is no longer ensured to be compatible with it, so we stick with the plone-3.x version.

in either case, the blobs-integration parts of plone are updatable checkouts.

[hand-editing] much hand editing of the created instances was originally required by these instructions, but gradually the checked-out code was rectified (eg, [recipe-fixes]), and now all of the configuration is properly applied via the recipes.
[recipe-fixes] (1,  234)
