7140224 package downloaded messages displayed twice for each zone
authorEdward Pilatowicz <edward.pilatowicz@oracle.com>
Mon, 11 Jul 2011 13:49:50 -0700
changeset 2690 11a8cae074e0
parent 2689 ac4c03208948
child 2691 11dc2fd8be9b
7140224 package downloaded messages displayed twice for each zone 7140127 pkg update with zones takes too long 7139809 image plan save logic should save merged actions
doc/client_api_versions.txt
doc/parallel-linked-images.txt
src/client.py
src/modules/actions/generic.py
src/modules/actions/license.py
src/modules/client/__init__.py
src/modules/client/actuator.py
src/modules/client/api.py
src/modules/client/api_errors.py
src/modules/client/image.py
src/modules/client/imageconfig.py
src/modules/client/imageplan.py
src/modules/client/linkedimage/__init__.py
src/modules/client/linkedimage/common.py
src/modules/client/linkedimage/system.py
src/modules/client/linkedimage/zone.py
src/modules/client/pkgdefs.py
src/modules/client/pkgplan.py
src/modules/client/pkgremote.py
src/modules/client/plandesc.py
src/modules/client/progress.py
src/modules/client/transport/transport.py
src/modules/facet.py
src/modules/fmri.py
src/modules/gui/misc_non_gui.py
src/modules/lint/engine.py
src/modules/manifest.py
src/modules/misc.py
src/modules/pipeutils.py
src/modules/version.py
src/pkg/external_deps.txt
src/pkg/manifests/developer:opensolaris:pkg5.p5m
src/pkg/manifests/package:pkg.p5m
src/pkgdep.py
src/setup.py
src/sysrepo.py
src/tests/api/t_async_rpc.py
src/tests/api/t_linked_image.py
src/tests/api/t_misc.py
src/tests/cli/t_pkg_linked.py
src/tests/cli/t_pkg_temp_sources.py
src/tests/pkg5unittest.py
src/tests/pylintrc
src/tests/run.py
--- a/doc/client_api_versions.txt	Fri Jun 15 16:58:18 2012 -0700
+++ b/doc/client_api_versions.txt	Mon Jul 11 13:49:50 2011 -0700
@@ -1,3 +1,28 @@
+Version 72:
+Incompatible with clients using versions 0-71.
+
+    pkg.client.api.ImageInterface has changed as follows:
+
+        * New functions added: img_plandir(), is_active_liveroot_be(),
+            isparent(), linked_publisher_check(), load_plan(), set_alt_repos()
+        * Functions removed: set_stage()
+        * Removed "runid" parameter from: __init__()
+        * Removed "accept" parameter from: gen_plan_*()
+        * Removed "show_licenses" parameter from: gen_plan_*()
+        * Added "pubcheck" parameter to:
+            gen_plan_sync(), gen_plan_update()
+
+    pkg.client.api.PlanDescription has changed as follows:
+        * Removed __init__() parameters: img, backup_be, backup_be_name,
+            new_be, be_activate, be_name
+        * Added __init__() parameters: op
+        * Removed methods: get_parsable_mediators(), get_parsable_varcets(),
+            get_salvaged(), get_services()
+        * Removed properties: is_active_root_be
+        * Added methods: getstate(), setstate(), fromstate()
+        * Added properties: executed, services, mediators, varcets,
+            plan_desc, salvaged, plan_type, update_index
+
 Version 71:
 Compatible with clients using versions 66-70.
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/parallel-linked-images.txt	Mon Jul 11 13:49:50 2011 -0700
@@ -0,0 +1,205 @@
+.. This document is formatted using reStructuredText, which is a Markup
+   Syntax and Parser Component of Docutils for Python.  An html version
+   of this document can be generated using the following command:
+     rst2html.py doc/parallel-linked-images.txt >doc/parallel-linked-images.html
+
+======================
+Parallel Linked Images
+======================
+
+:Author: Edward Pilatowicz
+:Version: 0.1
+
+
+Problems
+========
+
+Currently, linked image recursion is done serially and in stages.  For
+example, when we perform a "pkg update" on an image, we execute multiple
+pkg.1 cli operations for each child image.  The multiple pkg.1
+invocations on a single child image correspond to the following
+sequential stages of pkg.1 execution:
+
+1) publisher check: sanity check child publisher configuration against
+   parent publisher configuration.
+2) planning: plan fmri and action changes.
+3) preparation: download content needed to execute planned changes.
+4) execution: execute planned changes.
+
+So to update an image with children, we invoke pkg.1 four times for each
+child image.  This architecture is inefficient for multiple reasons:
+
+- we don't do any operations on child images in parallel.
+
+- when executing multiple pkg.1 invocations to perform a single
+  operation on a child image, we are constantly throwing out and
+  re-initializing lots of pkg.1 state.
+
+To make matters worse, as we execute stages 3 and 4 on a child image,
+the pkg client also re-executes previous stages.  For example,
+when we start stage 4 (execution) we re-execute stages 2 and 3.  So for
+each child we update we end up invoking stage 2 three times, and stage 3
+twice.  This leads to bugs like 18393 (where it seems that we download
+packages twice).  It also means that we have caching code buried within
+the packaging system that attempts to cache internal state to disk in an
+effort to speed up subsequent re-runs of previous stages.
+
+
+Solutions
+=========
+
+
+Eliminate duplicate work
+------------------------
+
+We want to eliminate a lot of the duplicate work done when executing
+packaging operations on children in stages.  To do this we will update
+the pkg client api to allow callers to:
+
+- Save an image plan to disk.
+- Load an image plan from disk.
+- Execute a loaded plan from disk without first "preparing" it.  (This
+  assumes that the caller has already "prepared" the plan in a previous
+  invocation.)
+
+In addition to eliminating duplicated work during staged execution, this
+will also allow us to stop caching intermediate state internally within
+the package system.  Instead client.py will be enhanced to cache the
+image plan and it will be the only component that knows about "staging".
+
+To allow us to save and restore plans, all image plan data will be saved
+within a PlanDescription object, and we will support serializing this
+object into a json format.  The json format for saved image plans is an
+internal, unstable, and unversioned private interface.  We will not
+support saving an image plan to disk and then executing it later with a
+different version of the packaging system on a different host.  Also,
+even though we will be adding data to the PlanDescription object, we
+will not be exposing any new information about an image plan to api
+consumers via the PlanDescription object.
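+
+As a rough sketch, the save/load hooks could be driven as follows
+(paths, error handling, and alternate repository setup are elided;
+this mirrors the client.py logic later in this changeset)::
+
+  # save a prepared plan to disk ...
+  plan = api_inst.describe()
+  with open(path, "wb") as fobj:
+          plan._save(fobj)
+
+  # ... then, in a later invocation, load and execute it without
+  # re-planning or re-preparing
+  plan = api.PlanDescription()
+  with open(path) as fobj:
+          plan._load(fobj)
+  api_inst.reset()
+  api_inst.load_plan(plan, prepared=True)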
+
+An added advantage of allowing api consumers to save an image plan to
+disk is that it should help with our plans to have the api.gen_plan_*()
+functions return PlanDescription objects for child images.
+A file descriptor (or path) associated with a saved image plan would be
+one way for child images to pass image plans back to their parent (which
+could then load them and yield them as results to api.gen_plan_*()).
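+
+A consumer of that interface would then see something like the
+following (a sketch mirroring the new client.py planning loop; the
+keyword arguments are elided)::
+
+  planned_self = False
+  child_plans = []
+  for pd in api_inst.gen_plan_update(**kwargs):
+          if planned_self:
+                  # subsequent plans describe child images
+                  child_plans.append(pd)
+                  continue
+          # the first PlanDescription yielded is always for the
+          # current image
+          planned_self = True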
+
+
+Update children in parallel
+---------------------------
+
+We want to enhance the package client so that it can update child images
+in parallel.
+
+Due to potential resource constraints (cpu, memory, and disk io) we
+cannot entirely remove the ability to operate on child images serially.
+Instead, we plan to allow for a concurrency setting that specifies how
+many child images we are willing to update in parallel.  By default when
+operating on child images we will use a concurrency setting of 1, this
+maintains the current behavior of the packaging system.  If a user wants
+to specify a higher concurrency setting, they can use the "-C N" option
+to subcommands that recurse (like "install", "update", etc) or they can
+set the environment variable "PKG_CONCURRENCY=N".  (In both cases N is
+an integer which specifies the desired concurrency level.)
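+
+For example, either of the following invocations would allow up to
+four child images to be updated in parallel::
+
+  pkg update -C 4
+  PKG_CONCURRENCY=4 pkg update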
+
+Currently, pkg.1 worker subprocesses are invoked via the pkg.1 cli
+interfaces.  When switching to parallel execution this will be changed
+to use a json encoded rpc execution model.  This richer interface is
+needed to allow worker processes to pause and resume execution between
+stages so that we can do multi-staged operations in a single process.
+
+Unfortunately, the current implementation does not yet retain child
+processes across different stages of execution.  Instead, whenever we
+start a new stage of execution, we spawn one process for each child
+image, and then we make a remote procedure call into N images at once
+(where N is our concurrency level).  When an RPC returns, that child
+process exits and we start a call for the next available child.
+
+Ultimately, we'd like to move to a model where we have a pool of N worker
+processes, and those processes can operate on different images as
+necessary.  These processes would be persistent across all stages of
+execution, and ideally, when moving from one stage to another these
+processes could cache in memory the state for at least N child images so
+that the processes could simply resume execution where they last left
+off.
+
+The client side of this rpc interface will live in a new module called
+PkgRemote.  The linked image subsystem will use the PkgRemote module to
+initiate operations on child images.  One PkgRemote instance will be
+allocated for each child that we are operating on.  Currently, this
+PkgRemote module will only support the sync and update operations used
+within linked images, but in the future it could easily be expanded to
+support other remote pkg.1 operations so that we can support recursive
+linked image operations (see 7140357).  When PkgRemote invokes an
+operation on a child image it will fork off a new pkg.1 worker process
+as follows::
+
+	pkg -R /path/to/linked/image remote --ctlfd=5
+
+This new pkg.1 worker process will function as an rpc server to which
+the client will make requests.
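+
+The server side of this interface is small; the new "remote"
+subcommand in client.py essentially reduces to the following (taken
+from the implementation later in this changeset)::
+
+  rpc_server = pipeutils.PipedRPCServer(ctlfd)
+  rpc_server.register_introspection_functions()
+  rpc_server.register_instance(RemoteDispatch())
+  rpc_server.serve_forever()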
+
+Communication between the client and server will be done via json
+encoded rpc.  These requests will be sent between the client and server
+via a pipe.  The communication pipe is created by the client, and its
+file descriptor is passed to the server via fork/exec.  The server is
+told about the pipe file descriptor via the --ctlfd parameter.  To avoid
+issues with blocking IO, all communication via this pipe will be done by
+passing file descriptors.  For example, if the client wants to send an
+rpc request to the server, it will write that rpc request into a
+temporary file and then send the fd associated with the temporary file
+over the pipe.  Any reply from the server will be similarly serialized
+and then sent via a file descriptor over the pipe.  This should ensure
+that no matter the size of the request or the response, we will not
+block when sending or receiving requests via the pipe.  (Currently, the
+limit of fds that can be queued in a pipe is around 700.  Given that our
+rpc model includes matched requests and responses, it seems unlikely
+that we'd ever hit this limit.)
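+
+Sketched in Python, the client side of a single request might look
+like the following ("send_fd" stands in for the platform-specific
+descriptor passing primitive, and the request shape is illustrative)::
+
+  import json
+  import tempfile
+
+  # write the json encoded rpc request into an unlinked temporary file
+  fobj = tempfile.TemporaryFile()
+  json.dump({"method": "update", "params": {"stage": "plan"}}, fobj)
+  fobj.flush()
+  fobj.seek(0)
+
+  # hand the request fd to the server over the control pipe; the
+  # reply comes back the same way, as another file descriptor
+  send_fd(ctlfd, fobj.fileno())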
+
+In the pkg.1 worker server process, we will have a simple json rpc
+server that lives within client.py.  This server will listen for
+requests from the client and invoke client.py subcommand interfaces
+(like update()).  The client.py subcommand interfaces were chosen as
+the target of these rpc calls for the following reasons:
+
+- Least amount of encoding / decoding.  Since these interfaces are
+  invoked just after parsing user arguments, they mostly involve simple
+  arguments (strings, integers, etc) which have a direct json encoding.
+  Additionally, the return values from these calls are simple return
+  code integers, not objects, which means the results are also easy to
+  encode.  This means that we don't need lots of extra serialization /
+  de-serialization logic (for things like api exceptions, etc).
+
+- Output and exception handling.  The client.py interfaces already
+  handle exceptions and output for the client.  This means that we don't
+  have to create new output classes and build our own output and
+  exception management code; instead we leverage the existing
+  code.
+
+- Future recursion support.  Currently when recursing into child images
+  we only execute "sync" and "update" operations.  Eventually we want to
+  support pkg.1 subcommand recursion into linked images (see 7140357)
+  for many more operations.  If we do this, the client.py interfaces
+  provide a nice boundary since there will be an almost 1:1 mapping
+  between parent and child subcommand operations.
+
+
+Child process output and progress management
+--------------------------------------------
+
+Currently, since child execution happens serially, all child images have
+direct access to standard out and display their progress directly there.
+Once we start updating child images in parallel this will no longer be
+possible.  Instead, all output from children will be logged to temporary
+files and displayed by the parent when a child completes a given stage
+of execution.
+
+Additionally, since child images will no longer have access to standard
+out, we will need a new mechanism to indicate progress while operating
+on child images.  To do this we will have a progress pipe between each
+parent and child image.  The child image will write one byte to this
+pipe whenever one of the ProgressTracker.*_progress() interfaces is
+invoked.  The parent process can read from this pipe to detect progress
+within children and update its user-visible progress tracker
+accordingly.
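+
+A minimal sketch of the two halves (the fd variables and the drain
+size are illustrative)::
+
+  import os
+  import select
+
+  # child: emit one byte per progress event
+  os.write(progfd, ".")
+
+  # parent: wait for activity on any child's progress pipe and tick
+  # the user-visible progress tracker for each byte drained
+  rlist, _, _ = select.select(prog_fds, [], [], None)
+  for fd in rlist:
+          os.read(fd, 1024)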
--- a/src/client.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/client.py	Mon Jul 11 13:49:50 2011 -0700
@@ -55,11 +55,13 @@
         import locale
         import logging
         import os
+        import re
         import socket
         import sys
         import textwrap
         import time
         import traceback
+        import tempfile
 
         import pkg
         import pkg.actions as actions
@@ -71,6 +73,7 @@
         import pkg.client.publisher as publisher
         import pkg.fmri as fmri
         import pkg.misc as misc
+        import pkg.pipeutils as pipeutils
         import pkg.version as version
 
         from pkg.client import global_settings
@@ -87,7 +90,7 @@
         import sys
         sys.exit(1)
 
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 PKG_CLIENT_NAME = "pkg"
 
 JUST_UNKNOWN = 0
@@ -95,6 +98,7 @@
 JUST_RIGHT = 1
 
 logger = global_settings.logger
+pkg_timer = pkg.misc.Timer("pkg client")
 
 valid_special_attrs = ["action.hash", "action.key", "action.name", "action.raw"]
 
@@ -147,18 +151,18 @@
             "version"]
 
         basic_usage["install"] = _(
-            "[-nvq] [-g path_or_uri ...] [--accept]\n"
+            "[-nvq] [-C n] [-g path_or_uri ...] [--accept]\n"
             "            [--licenses] [--no-be-activate] [--no-index] [--no-refresh]\n"
             "            [--no-backup-be | --require-backup-be] [--backup-be-name name]\n"
             "            [--deny-new-be | --require-new-be] [--be-name name]\n"
             "            [--reject pkg_fmri_pattern ... ] pkg_fmri_pattern ...")
         basic_usage["uninstall"] = _(
-            "[-nvq] [--no-be-activate] [--no-index]\n"
+            "[-nvq] [-C n] [--no-be-activate] [--no-index]\n"
             "            [--no-backup-be | --require-backup-be] [--backup-be-name]\n"
             "            [--deny-new-be | --require-new-be] [--be-name name]\n"
             "            pkg_fmri_pattern ...")
         basic_usage["update"] = _(
-            "[-fnvq] [-g path_or_uri ...] [--accept]\n"
+            "[-fnvq] [-C n] [-g path_or_uri ...] [--accept]\n"
             "            [--licenses] [--no-be-activate] [--no-index] [--no-refresh]\n"
             "            [--no-backup-be | --require-backup-be] [--backup-be-name]\n"
             "            [--deny-new-be | --require-new-be] [--be-name name]\n"
@@ -237,14 +241,14 @@
             "            [--facet <facet_spec>=(True|False) ...]\n"
             "            [(-p|--publisher) [<name>=]<repo_uri>] dir")
         adv_usage["change-variant"] = _(
-            "[-nvq] [-g path_or_uri ...]\n"
+            "[-nvq] [-C n] [-g path_or_uri ...]\n"
             "            [--accept] [--licenses] [--no-be-activate]\n"
             "            [--no-backup-be | --require-backup-be] [--backup-be-name name]\n"
             "            [--deny-new-be | --require-new-be] [--be-name name]\n"
             "            <variant_spec>=<instance> ...")
 
         adv_usage["change-facet"] = _(
-            "[-nvq] [-g path_or_uri ...]\n"
+            "[-nvq] [-C n] [-g path_or_uri ...]\n"
             "            [--accept] [--licenses] [--no-be-activate]\n"
             "            [--no-backup-be | --require-backup-be] [--backup-be-name name]\n"
             "            [--deny-new-be | --require-new-be] [--be-name name]\n"
@@ -302,7 +306,7 @@
 
         priv_usage["list-linked"] = _("-H")
         priv_usage["attach-linked"] = _(
-            "[-fnvq] [--accept] [--licenses] [--no-index]\n"
+            "[-fnvq] [-C n] [--accept] [--licenses] [--no-index]\n"
             "            [--no-refresh] [--no-pkg-updates] [--linked-md-only]\n"
             "            [--allow-relink]\n"
             "            [--prop-linked <propname>=<propvalue> ...]\n"
@@ -311,8 +315,9 @@
             "[-fnvq] [-a|-l <li-name>] [--linked-md-only]")
         priv_usage["property-linked"] = _("[-H] [-l <li-name>] [propname ...]")
         priv_usage["audit-linked"] = _("[-a|-l <li-name>]")
+        priv_usage["pubcheck-linked"] = ""
         priv_usage["sync-linked"] = _(
-            "[-nvq] [--accept] [--licenses] [--no-index]\n"
+            "[-nvq] [-C n] [--accept] [--licenses] [--no-index]\n"
             "            [--no-refresh] [--no-parent-sync] [--no-pkg-updates]\n"
             "            [--linked-md-only] [-a|-l <name>]")
         priv_usage["set-property-linked"] = _(
@@ -601,17 +606,20 @@
 def get_tracker(parsable_version=None, quiet=False, verbose=0):
         if quiet:
                 progresstracker = progress.QuietProgressTracker(
-                    parsable_version=parsable_version)
+                    parsable_version=parsable_version,
+                    progfd=global_settings.client_output_progfd)
         else:
                 try:
                         progresstracker = \
                             progress.FancyUNIXProgressTracker(
                                 parsable_version=parsable_version, quiet=quiet,
-                                verbose=verbose)
+                                verbose=verbose,
+                                progfd=global_settings.client_output_progfd)
                 except progress.ProgressTrackerException:
                         progresstracker = progress.CommandLineProgressTracker(
                             parsable_version=parsable_version, quiet=quiet,
-                            verbose=verbose)
+                            verbose=verbose,
+                            progfd=global_settings.client_output_progfd)
         return progresstracker
 
 def fix_image(api_inst, args):
@@ -910,8 +918,7 @@
 
         plan = api_inst.describe()
 
-        if api_inst.is_liveroot and not api_inst.is_zone and \
-            not plan.is_active_root_be:
+        if not api_inst.is_active_liveroot_be:
                 # Warn the user since this isn't likely what they wanted.
                 if plan.new_be:
                         logger.warning(_("""\
@@ -983,7 +990,7 @@
 
                 if not plan.new_be:
                         cond_show(_("Services to change:"), "%d",
-                            len(plan.get_services()))
+                            len(plan.services))
 
         if "boot-archive" in disp:
                 status.append((_("Rebuild boot archive:"),
@@ -1064,7 +1071,7 @@
 
         if "services" in disp and not plan.new_be:
                 last_action = None
-                for action, smf_fmri in plan.get_services():
+                for action, smf_fmri in plan.services:
                         if last_action is None:
                                 logger.info("Services:")
                         if action != last_action:
@@ -1121,7 +1128,7 @@
                                 removed_fmris.append(str(rem))
                         else:
                                 added_fmris.append(str(add))
-                variants_changed, facets_changed = plan.get_parsable_varcets()
+                variants_changed, facets_changed = plan.varcets
                 backup_be_created = plan.backup_be
                 new_be_created = plan.new_be
                 backup_be_name = plan.backup_be_name
@@ -1130,8 +1137,8 @@
                 be_activated = plan.activate_be
                 space_available = plan.bytes_avail
                 space_required = plan.bytes_added
-                services_affected = plan.get_services()
-                mediators_changed = plan.get_parsable_mediators()
+                services_affected = plan.services
+                mediators_changed = plan.mediators
                 for dfmri, src_li, dest_li, acc, disp in \
                     plan.get_licenses():
                         src_tup = None
@@ -1235,13 +1242,11 @@
                 __display_parsable_plan(api_inst, parsable_version,
                     child_image_plans)
 
-def __api_prepare(operation, api_inst, accept=False):
+def __api_prepare_plan(operation, api_inst):
         # Exceptions which happen here are printed in the above level, with
         # or without some extra decoration done here.
         # XXX would be nice to kick the progress tracker.
         try:
-                if accept:
-                        accept_plan_licenses(api_inst)
                 api_inst.prepare()
         except (api_errors.PermissionsException, api_errors.UnknownErrors), e:
                 # Prepend a newline because otherwise the exception will
@@ -1349,7 +1354,7 @@
                         exc_type, exc_value, exc_tb = sys.exc_info()
 
                 try:
-                        salvaged = api_inst.describe().get_salvaged()
+                        salvaged = api_inst.describe().salvaged
                         if salvaged:
                                 logger.error("")
                                 logger.error(_("The following unexpected or "
@@ -1371,7 +1376,7 @@
 
         return rval
 
-def __api_alloc(imgdir, exact_match, pkg_image_used, quiet, runid=-1):
+def __api_alloc(imgdir, exact_match, pkg_image_used, quiet):
         progresstracker = get_tracker(quiet=quiet)
 
         def qv(val):
@@ -1384,7 +1389,7 @@
         try:
                 return api.ImageInterface(imgdir, CLIENT_API_VERSION,
                     progresstracker, None, PKG_CLIENT_NAME,
-                    exact_match=exact_match, runid=runid)
+                    exact_match=exact_match)
         except api_errors.ImageNotFoundException, e:
                 if e.user_specified:
                         if pkg_image_used:
@@ -1499,30 +1504,20 @@
         raise
         # NOTREACHED
 
-def __api_op(_op, _api_inst, _accept=False, _li_ignore=None, _noexecute=False,
+def __api_plan(_op, _api_inst, _accept=False, _li_ignore=None, _noexecute=False,
     _origins=None, _parsable_version=None, _quiet=False,
     _review_release_notes=False, _show_licenses=False, _stage=API_STAGE_DEFAULT,
     _verbose=0, **kwargs):
-        """Do something that involves the api.
-
-        Arguments prefixed with '_' are primarily used within this
-        function.  All other arguments must be specified via keyword
-        assignment and will be passed directly on to the api
-        interfaces being invoked."""
-
-        # massage arguments
-        if type(_li_ignore) == list:
-                # parse any linked image names specified on the command line
-                _li_ignore = _api_inst.parse_linked_name_list(_li_ignore)
 
         # All the api interface functions that we invoke have some
         # common arguments.  Set those up now.
         if _op != PKG_OP_REVERT:
-                kwargs["accept"] = _accept
                 kwargs["li_ignore"] = _li_ignore
         kwargs["noexecute"] = _noexecute
-        if _origins != None:
+        if _origins:
                 kwargs["repos"] = _origins
+        if _stage != API_STAGE_DEFAULT:
+                kwargs["pubcheck"] = False
 
         # display plan debugging information
         if _verbose > 2:
@@ -1548,61 +1543,183 @@
         elif _op == PKG_OP_UPDATE:
                 api_plan_func = _api_inst.gen_plan_update
         else:
-                raise RuntimeError("__api_op() invalid op: %s" % _op)
-
-        first_plan = True
-        plan_displayed = False
+                raise RuntimeError("__api_plan() invalid op: %s" % _op)
+
+        planned_self = False
         child_plans = []
         try:
                 for pd in api_plan_func(**kwargs):
-                        if not first_plan:
-                                #
+                        if planned_self:
                                 # we don't display anything for child images
                                 # since they currently do their own display
-                                # work unless parsable output is requested.
-                                #
+                                # work (unless parsable output is requested).
                                 child_plans.append(pd)
                                 continue
 
                         # the first plan description is always for ourself.
-                        first_plan = False
+                        planned_self = True
+                        pkg_timer.record("planning", logger=logger)
+
+                        # if we're in parsable mode don't display anything
+                        # until after we finish planning for all children
                         if _parsable_version is None:
                                 display_plan(_api_inst, [], _noexecute,
                                     _op, _parsable_version, _quiet,
                                     _show_licenses, _stage, _verbose)
-                                plan_displayed = True
+
+                        # if requested accept licenses for child images.  we
+                        # have to do this before recursing into children.
+                        if _accept:
+                                accept_plan_licenses(_api_inst)
         except:
                 rv = __api_plan_exception(_op, _noexecute, _verbose, _api_inst)
                 if rv != EXIT_OK:
+                        pkg_timer.record("planning", logger=logger)
                         return rv
 
-        if not plan_displayed:
+        if not planned_self:
+                # if we got an exception we didn't do planning for children
+                pkg_timer.record("planning", logger=logger)
+
+        elif _api_inst.isparent(_li_ignore):
+                # if we didn't get an exception and we're a parent image then
+                # we should have done planning for child images.
+                pkg_timer.record("planning children", logger=logger)
+
+        # if we didn't display our own plan (due to an exception), or if we're
+        # in parsable mode, then display our plan now.
+        if not planned_self or _parsable_version is not None:
                 try:
-                        display_plan(_api_inst, child_plans, _noexecute, _op,
-                            _parsable_version, _quiet, _show_licenses, _stage,
-                            _verbose)
+                        display_plan(_api_inst, child_plans, _noexecute,
+                            _op, _parsable_version, _quiet, _show_licenses,
+                            _stage, _verbose)
                 except api_errors.ApiException, e:
                         error(e, cmd=_op)
                         return EXIT_OOPS
 
-        stuff_to_do = not _api_inst.planned_nothingtodo()
-        if not stuff_to_do:
-                return EXIT_NOP
-
-        if _noexecute or _stage in [API_STAGE_PUBCHECK, API_STAGE_PLAN]:
-                return EXIT_OK
+        # if we didn't accept licenses (due to an exception) then do that now.
+        if not planned_self and _accept:
+                accept_plan_licenses(_api_inst)
+
+        return EXIT_OK
+
+def __api_plan_file(api_inst):
+        """Return the path to the PlanDescription save file."""
+
+        plandir = api_inst.img_plandir
+        return os.path.join(plandir, "plandesc")
+
+def __api_plan_save(api_inst):
+        """Save an image plan to a file."""
+
+        # get a pointer to the plan
+        plan = api_inst.describe()
+
+        # save the PlanDescription to a file
+        path = __api_plan_file(api_inst)
+        oflags = os.O_CREAT | os.O_TRUNC | os.O_WRONLY
+        try:
+                fd = os.open(path, oflags, 0644)
+                with os.fdopen(fd, "wb") as fobj:
+                        plan._save(fobj)
+
+                # cleanup any old style imageplan save files
+                for f in os.listdir(api_inst.img_plandir):
+                        path = os.path.join(api_inst.img_plandir, f)
+                        if re.search("^actions\.[0-9]+\.json$", f):
+                                os.unlink(path)
+                        if re.search("^pkgs\.[0-9]+\.json$", f):
+                                os.unlink(path)
+        except OSError, e:
+                raise api_errors._convert_error(e)
+
+        pkg_timer.record("saving plan", logger=logger)
+
+def __api_plan_load(api_inst, stage, origins):
+        """Load an image plan from a file."""
+
+        # load an existing plan
+        path = __api_plan_file(api_inst)
+        plan = api.PlanDescription()
+        try:
+                with open(path) as fobj:
+                        plan._load(fobj)
+        except OSError, e:
+                raise api_errors._convert_error(e)
+
+        pkg_timer.record("loading plan", logger=logger)
+
+        api_inst.reset()
+        api_inst.set_alt_repos(origins)
+        api_inst.load_plan(plan, prepared=(stage == API_STAGE_EXECUTE))
+        pkg_timer.record("re-initializing plan", logger=logger)
+
+        if stage == API_STAGE_EXECUTE:
+                __api_plan_delete(api_inst)
+
+def __api_plan_delete(api_inst):
+        """Delete an image plan file."""
+
+        path = __api_plan_file(api_inst)
+        try:
+                os.unlink(path)
+        except OSError, e:
+                raise api_errors._convert_error(e)
+
+def __api_op(_op, _api_inst, _accept=False, _li_ignore=None, _noexecute=False,
+    _origins=None, _parsable_version=None, _quiet=False,
+    _review_release_notes=False, _show_licenses=False,
+    _stage=API_STAGE_DEFAULT, _verbose=0, **kwargs):
+        """Do something that involves the api.
+
+        Arguments prefixed with '_' are primarily used within this
+        function.  All other arguments must be specified via keyword
+        assignment and will be passed directly on to the api
+        interfaces being invoked."""
+
+        if _stage in [API_STAGE_DEFAULT, API_STAGE_PLAN]:
+                # create a new plan
+                rv = __api_plan(_op=_op, _api_inst=_api_inst,
+                    _accept=_accept, _li_ignore=_li_ignore,
+                    _noexecute=_noexecute, _origins=_origins,
+                    _parsable_version=_parsable_version, _quiet=_quiet,
+                    _review_release_notes=_review_release_notes,
+                    _show_licenses=_show_licenses, _stage=_stage,
+                    _verbose=_verbose, **kwargs)
+
+                if rv != EXIT_OK:
+                        return rv
+                if not _noexecute and _stage == API_STAGE_PLAN:
+                        # We always save the plan, even if it is a noop.  We
+                        # do this because we want to be able to verify that we
+                        # can load and execute a noop plan.  (This mimics
+                        # normal api behavior which doesn't prevent an api
+                        # consumer from creating a noop plan and then
+                        # preparing and executing it.)
+                        __api_plan_save(_api_inst)
+                if _api_inst.planned_nothingtodo():
+                        return EXIT_NOP
+                if _noexecute or _stage == API_STAGE_PLAN:
+                        return EXIT_OK
+        else:
+                assert _stage in [API_STAGE_PREPARE, API_STAGE_EXECUTE]
+                __api_plan_load(_api_inst, _stage, _origins)
 
         # Exceptions which happen here are printed in the above level,
         # with or without some extra decoration done here.
-        ret_code = __api_prepare(_op, _api_inst, accept=_accept)
-        if ret_code != EXIT_OK:
-                return ret_code
-
-        if _stage == API_STAGE_PREPARE:
-                return EXIT_OK
+        if _stage in [API_STAGE_DEFAULT, API_STAGE_PREPARE]:
+                ret_code = __api_prepare_plan(_op, _api_inst)
+                pkg_timer.record("preparing", logger=logger)
+
+                if ret_code != EXIT_OK:
+                        return ret_code
+                if _stage == API_STAGE_PREPARE:
+                        return EXIT_OK
 
         ret_code = __api_execute_plan(_op, _api_inst)
-        if _review_release_notes and ret_code == 0 and \
+        pkg_timer.record("executing", logger=logger)
+
+        if _review_release_notes and ret_code == EXIT_OK and \
             _stage == API_STAGE_DEFAULT and _api_inst.solaris_image():
                 msg("\n" + "-" * 75)
                 msg(_("NOTE: Please review release notes posted at:\n" ))
@@ -1707,7 +1824,7 @@
                 # add to ignore list
                 li_ignore.append(li_name)
 
-        opts_new["li_ignore"] = li_ignore
+        opts_new["li_ignore"] = api_inst.parse_linked_name_list(li_ignore)
 
 def opts_table_cb_li_no_psync(op, api_inst, opts, opts_new):
         # if a target child linked image was specified, the no-parent-sync
@@ -1762,7 +1879,8 @@
                 # add to ignore list
                 li_target_list.append(li_name)
 
-        opts_new["li_target_list"] = li_target_list
+        opts_new["li_target_list"] = \
+            api_inst.parse_linked_name_list(li_target_list)
 
 def opts_table_cb_li_target1(op, api_inst, opts, opts_new):
         # figure out which option the user specified
@@ -1883,6 +2001,63 @@
         if opts_new["summary"] and opts_new["verbose"]:
                 opts_err_incompat("-s", "-v", op)
 
+def opts_cb_int(k, op, api_inst, opts, opts_new, minimum=None):
+        if k not in opts:
+                usage(_("missing required parameter: %s") % k, cmd=op)
+                return
+
+        # get the original argument value
+        v = opts[k]
+
+        # make sure it is an integer
+        try:
+                v = int(v)
+        except (ValueError, TypeError):
+                # not a valid integer
+                err = _("invalid '%s' value: %s") % (k, v)
+                usage(err, cmd=op)
+
+        # check the minimum bounds
+        if minimum is not None and v < minimum:
+                err = _("'%s' must be >= %d") % (k, minimum)
+                usage(err, cmd=op)
+
+        # update the new options array to make the value an integer
+        opts_new[k] = v
+
+def opts_cb_fd(k, op, api_inst, opts, opts_new):
+        opts_cb_int(k, op, api_inst, opts, opts_new, minimum=0)
+
+        err = _("invalid '%s' value: %s") % (k, opts_new[k])
+        try:
+                os.fstat(opts_new[k])
+        except OSError:
+                # not a valid file descriptor
+                usage(err, cmd=op)
+
+def opts_cb_remote(op, api_inst, opts, opts_new):
+        opts_cb_fd("ctlfd", op, api_inst, opts, opts_new)
+        opts_cb_fd("progfd", op, api_inst, opts, opts_new)
+
+        # move progfd from opts_new into a global
+        global_settings.client_output_progfd = opts_new["progfd"]
+        del opts_new["progfd"]
+
+def opts_table_cb_concurrency(op, api_inst, opts, opts_new):
+        if opts["concurrency"] is None:
+                # remove concurrency from parameters dict
+                del opts_new["concurrency"]
+                return
+
+        # make sure we have an integer
+        opts_cb_int("concurrency", op, api_inst, opts, opts_new)
+
+        # update global concurrency setting
+        global_settings.client_concurrency = opts_new["concurrency"]
+
+        # remove concurrency from parameters dict
+        del opts_new["concurrency"]
+
 #
 # options common to multiple pkg(1) subcommands.  The format for specifying
 # options is a list which can contain:
@@ -1910,6 +2085,11 @@
     ("",  "require-new-be",    "require_new_be",     False),
 ]
 
+opts_table_concurrency = [
+    opts_table_cb_concurrency,
+    ("C", "concurrency=",      "concurrency",        None),
+]
+
 opts_table_force = [
     ("f", "",                "force",                False),
 ]
@@ -2009,6 +2189,7 @@
 #
 opts_install = \
     opts_table_beopts + \
+    opts_table_concurrency + \
     opts_table_li_ignore + \
     opts_table_li_no_psync + \
     opts_table_licenses + \
@@ -2095,6 +2276,7 @@
 
 opts_uninstall = \
     opts_table_beopts + \
+    opts_table_concurrency + \
     opts_table_li_ignore + \
     opts_table_no_index + \
     opts_table_nqv + \
@@ -2139,6 +2321,84 @@
     ("u", "",               "list_upgradable",       False),
 ]
 
+opts_remote = [
+    opts_cb_remote,
+    ("",  "ctlfd",           "ctlfd",                None),
+    ("",  "progfd",          "progfd",               None),
+]
+
+
+class RemoteDispatch(object):
+        """RPC server class which is invoked by the PipedRPCServer when
+        an RPC request is received."""
+
+        def __dispatch(self, op, pwargs):
+
+                pkg_timer.record("rpc dispatch wait", logger=logger)
+
+                # if we were called with no arguments then pwargs will be []
+                if pwargs == []:
+                        pwargs = {}
+
+                op_supported = [
+                    PKG_OP_AUDIT_LINKED,
+                    PKG_OP_DETACH,
+                    PKG_OP_PUBCHECK,
+                    PKG_OP_SYNC,
+                    PKG_OP_UPDATE,
+                ]
+                if op not in op_supported:
+                        raise Exception(
+                            'method "%s" is not supported' % op)
+
+                # if a stage was specified, get it.
+                stage = pwargs.get("stage", API_STAGE_DEFAULT)
+                assert stage in api_stage_values
+
+                # if we're starting a new operation, reset the api.  we do
+                # this just in case our parent updated our linked image
+                # metadata.
+                if stage in [API_STAGE_DEFAULT, API_STAGE_PLAN]:
+                        _api_inst.reset()
+
+                op_func = cmds[op][0]
+                rv = op_func(op, _api_inst, pargs, **pwargs)
+
+                if DebugValues["timings"]:
+                        msg(str(pkg_timer))
+                pkg_timer.reset()
+
+                return rv
+
+        def _dispatch(self, op, pwargs):
+                """Primary RPC dispatch function.
+
+                This function must be kept super simple because if we take an
+                exception here then no output will be generated and this
+                package remote process will silently exit with a non-zero
+                return value (and the lack of an exception message makes this
+                failure very difficult to debug).  Hence we wrap the real
+                remote dispatch routine with a call to handle_errors(), which
+                will catch and display any exceptions encountered."""
+
+                # flush output before and after every operation.
+                misc.flush_output()
+                misc.truncate_file(sys.stdout)
+                misc.truncate_file(sys.stderr)
+                rv = handle_errors(self.__dispatch, True, op, pwargs)
+                misc.flush_output()
+                return rv
+
+def remote(op, api_inst, pargs, ctlfd):
+        """Execute commands from a remote pipe"""
+
+        rpc_server = pipeutils.PipedRPCServer(ctlfd)
+        rpc_server.register_introspection_functions()
+        rpc_server.register_instance(RemoteDispatch())
+
+        pkg_timer.record("rpc startup", logger=logger)
+        rpc_server.serve_forever()
+
 def change_variant(op, api_inst, pargs,
     accept, backup_be, backup_be_name, be_activate, be_name, li_ignore,
     li_parent_sync, new_be, noexecute, origins, parsable_version, quiet,
@@ -2299,10 +2559,10 @@
             be_name=be_name, new_be=new_be, _parsable_version=parsable_version,
             pkgs_to_uninstall=pargs, update_index=update_index)
 
-def update(op, api_inst, pargs,
-    accept, backup_be, backup_be_name, be_activate, be_name, force, li_ignore,
-    li_parent_sync, new_be, noexecute, origins, parsable_version, quiet,
-    refresh_catalogs, reject_pats, show_licenses, stage, update_index, verbose):
+def update(op, api_inst, pargs, accept, backup_be, backup_be_name, be_activate,
+    be_name, force, li_ignore, li_parent_sync, new_be, noexecute, origins,
+    parsable_version, quiet, refresh_catalogs, reject_pats, show_licenses,
+    stage, update_index, verbose):
         """Attempt to take all installed packages specified to latest
         version."""
 
@@ -2317,8 +2577,6 @@
         if not xrval:
                 return EXIT_OOPS
 
-        api_inst.set_stage(stage)
-
         if res:
                 # If there are specific installed packages to update,
                 # then take only those packages to the latest version
@@ -2543,7 +2801,7 @@
         if noexecute:
                 return EXIT_OK
 
-        ret_code = __api_prepare(op, api_inst, accept=False)
+        ret_code = __api_prepare_plan(op, api_inst)
         if ret_code != EXIT_OK:
                 return ret_code
 
@@ -2621,7 +2879,7 @@
         if noexecute:
                 return EXIT_OK
 
-        ret_code = __api_prepare(op, api_inst, accept=False)
+        ret_code = __api_prepare_plan(op, api_inst)
         if ret_code != EXIT_OK:
                 return ret_code
 
@@ -2750,7 +3008,7 @@
                     "time": _("DATE"),
                     "comment": _("COMMENT")
                 })
-        
+
         for pfmri, comment, timestamp in lst:
                 vertext = pfmri.version.get_short_version()
                 ts = pfmri.version.get_timestamp()
@@ -4953,9 +5211,6 @@
 
         api_inst.progresstracker = get_tracker(quiet=omit_headers)
 
-        if li_ignore and type(li_ignore) == list:
-                li_ignore = api_inst.parse_linked_name_list(li_ignore)
-
         li_list = api_inst.list_linked(li_ignore)
         if len(li_list) == 0:
                 return EXIT_OK
@@ -4975,6 +5230,20 @@
                 msg(fmt % tuple(row))
         return EXIT_OK
 
+def pubcheck_linked(op, api_inst, pargs):
+        """If we're a child image, verify that the parent image
+        publisher configuration is a subset of our publisher configuration.
+        If we have any children, recurse into them and perform a publisher
+        check."""
+
+        try:
+                api_inst.linked_publisher_check()
+        except api_errors.ImageLockedError, e:
+                error(e)
+                return EXIT_LOCKED
+
+        return EXIT_OK
+
 def __parse_linked_props(args, op):
         """Parse linked image property options that were specified on the
         command line into a dictionary.  Make sure duplicate properties were
@@ -5066,7 +5335,11 @@
         return EXIT_OK
 
 def audit_linked(op, api_inst, pargs,
-    li_parent_sync, li_target_all, li_target_list, omit_headers, quiet):
+    li_parent_sync,
+    li_target_all,
+    li_target_list,
+    omit_headers,
+    quiet):
         """pkg audit-linked [-a|-l <li-name>]
 
         Audit one or more child images to see if they are in sync
@@ -5074,8 +5347,6 @@
 
         api_inst.progresstracker = get_tracker(quiet=omit_headers)
 
-        li_target_list = api_inst.parse_linked_name_list(li_target_list)
-
         # audit the requested child image(s)
         if not li_target_all and not li_target_list:
                 # audit the current image
@@ -5105,12 +5376,11 @@
                 error(err, cmd=op)
         return rv
 
-def sync_linked(op, api_inst, pargs,
-    accept, backup_be, backup_be_name, be_activate, be_name, li_ignore,
-    li_parent_sync, new_be, noexecute, origins, parsable_version, quiet,
-    refresh_catalogs, reject_pats, show_licenses, update_index, verbose,
-    li_md_only, li_pkg_updates, li_target_all, li_target_list, stage):
-
+def sync_linked(op, api_inst, pargs, accept, backup_be, backup_be_name,
+    be_activate, be_name, li_ignore, li_md_only, li_parent_sync,
+    li_pkg_updates, li_target_all, li_target_list, new_be, noexecute, origins,
+    parsable_version, quiet, refresh_catalogs, reject_pats, show_licenses,
+    stage, update_index, verbose):
         """pkg sync-linked [-a|-l <li-name>]
             [-nvq] [--accept] [--licenses] [--no-index] [--no-refresh]
             [--no-parent-sync] [--no-pkg-updates]
@@ -5125,10 +5395,6 @@
         if not xrval:
                 return EXIT_OOPS
 
-        api_inst.set_stage(stage)
-
-        li_target_list = api_inst.parse_linked_name_list(li_target_list)
-
         if not li_target_all and not li_target_list:
                 # sync the current image
                 return __api_op(op, api_inst, _accept=accept,
@@ -5245,8 +5511,6 @@
 
         api_inst.progresstracker = get_tracker(quiet=quiet, verbose=verbose)
 
-        li_target_list = api_inst.parse_linked_name_list(li_target_list)
-
         if not li_target_all and not li_target_list:
                 # detach the current image
                 return __api_op(op, api_inst, _noexecute=noexecute,
@@ -5571,7 +5835,7 @@
                         error(str(e), cmd="history")
                         sys.exit(EXIT_OOPS)
 
-        for he in gen_entries(): 
+        for he in gen_entries():
                 # populate a dictionary containing our output
                 output = {}
                 for col in history_cols:
@@ -5782,15 +6046,94 @@
 
 # To allow exception handler access to the image.
 _api_inst = None
+pargs = None
 img = None
 orig_cwd = None
 
+#
+# cmds dictionary is used to dispatch subcommands.  The format of this
+# dictionary is:
+#
+#       "subcommand-name" : (
+#               subcommand-cb,
+#               subcommand-opts-table,
+#               arguments-allowed
+#       )
+#
+#       subcommand-cb: the callback function invoked for this subcommand
+#       subcommand-opts-table: an arguments options table that is passed to
+#               the common options processing function misc.opts_parse().
+#               if None then misc.opts_parse() is not invoked.
+#
+#       arguments-allowed (optional): the number of additional arguments
+#               allowed to this function, which is also passed to
+#               misc.opts_parse()
+#
+# placeholders in this lookup table for image-create, help and version
+# which don't have dedicated methods
+#
+cmds = {
+    "add-property-value"    : (property_add_value, None),
+    "attach-linked"         : (attach_linked, opts_attach_linked, 2),
+    "avoid"                 : (avoid, None),
+    "audit-linked"          : (audit_linked, opts_audit_linked),
+    "authority"             : (publisher_list, None),
+    "change-facet"          : (change_facet, opts_install, -1),
+    "change-variant"        : (change_variant, opts_install, -1),
+    "contents"              : (list_contents, None),
+    "detach-linked"         : (detach_linked, opts_detach_linked),
+    "facet"                 : (facet_list, None),
+    "fix"                   : (fix_image, None),
+    "freeze"                : (freeze, None),
+    "help"                  : (None, None),
+    "history"               : (history_list, None),
+    "image-create"          : (None, None),
+    "info"                  : (info, None),
+    "install"               : (install, opts_install, -1),
+    "list"                  : (list_inventory, opts_list_inventory, -1),
+    "list-linked"           : (list_linked, opts_list_linked),
+    "mediator"              : (list_mediators, opts_list_mediator, -1),
+    "property"              : (property_list, None),
+    "property-linked"       : (list_property_linked,
+                                  opts_list_property_linked, -1),
+    "pubcheck-linked"       : (pubcheck_linked, []),
+    "publisher"             : (publisher_list, None),
+    "purge-history"         : (history_purge, None),
+    "rebuild-index"         : (rebuild_index, None),
+    "refresh"               : (publisher_refresh, None),
+    "remote"                : (remote, opts_remote, 0),
+    "remove-property-value" : (property_remove_value, None),
+    "revert"                : (revert, opts_revert, -1),
+    "search"                : (search, None),
+    "set-authority"         : (publisher_set, None),
+    "set-mediator"          : (set_mediator, opts_set_mediator, -1),
+    "set-property"          : (property_set, None),
+    "set-property-linked"   : (set_property_linked,
+                                  opts_set_property_linked, -1),
+    "set-publisher"         : (publisher_set, None),
+    "sync-linked"           : (sync_linked, opts_sync_linked),
+    "unavoid"               : (unavoid, None),
+    "unfreeze"              : (unfreeze, None),
+    "uninstall"             : (uninstall, opts_uninstall, -1),
+    "unset-authority"       : (publisher_unset, None),
+    "unset-property"        : (property_unset, None),
+    "update-format"         : (update_format, None),
+    "unset-mediator"        : (unset_mediator, opts_unset_mediator, -1),
+    "unset-publisher"       : (publisher_unset, None),
+    "update"                : (update, opts_update, -1),
+    "update-format"         : (update_format, None),
+    "variant"               : (variant_list, None),
+    "verify"                : (verify_image, None),
+    "version"               : (None, None),
+}
+
 def main_func():
         global_settings.client_name = PKG_CLIENT_NAME
 
         global _api_inst
         global img
         global orig_cwd
+        global pargs
 
         try:
                 orig_cwd = os.getcwd()
@@ -5808,7 +6151,7 @@
         except getopt.GetoptError, e:
                 usage(_("illegal global option -- %s") % e.opt)
 
-        runid = os.getpid()
+        runid = None
         show_usage = False
         for opt, arg in opts:
                 if opt == "-D" or opt == "--debug":
@@ -5830,61 +6173,6 @@
                 elif opt in ("--help", "-?"):
                         show_usage = True
 
-        # placeholders in this lookup table for image-create, help and version
-        # which don't have dedicated methods
-        cmds = {
-            "add-property-value"    : (property_add_value, None),
-            "attach-linked"         : (attach_linked, opts_attach_linked, 2),
-            "avoid"                 : (avoid, None),
-            "audit-linked"          : (audit_linked, opts_audit_linked),
-            "authority"             : (publisher_list, None),
-            "change-facet"          : (change_facet, opts_install, -1),
-            "change-variant"        : (change_variant, opts_install, -1),
-            "contents"              : (list_contents, None),
-            "detach-linked"         : (detach_linked, opts_detach_linked),
-            "facet"                 : (facet_list, None),
-            "fix"                   : (fix_image, None),
-            "freeze"                : (freeze, None),
-            "help"                  : (None, None),
-            "history"               : (history_list, None),
-            "image-create"          : (None, None),
-            "info"                  : (info, None),
-            "install"               : (install, opts_install, -1),
-            "list"                  : (list_inventory, opts_list_inventory, -1),
-            "list-linked"           : (list_linked, opts_list_linked),
-            "mediator"              : (list_mediators, opts_list_mediator, -1),
-            "property"              : (property_list, None),
-            "property-linked"       : (list_property_linked,
-                                          opts_list_property_linked, -1),
-            "publisher"             : (publisher_list, None),
-            "purge-history"         : (history_purge, None),
-            "rebuild-index"         : (rebuild_index, None),
-            "refresh"               : (publisher_refresh, None),
-            "remove-property-value" : (property_remove_value, None),
-            "revert"                : (revert, opts_revert, -1),
-            "search"                : (search, None),
-            "set-authority"         : (publisher_set, None),
-            "set-mediator"          : (set_mediator, opts_set_mediator, -1),
-            "set-property"          : (property_set, None),
-            "set-property-linked"   : (set_property_linked,
-                                          opts_set_property_linked, -1),
-            "set-publisher"         : (publisher_set, None),
-            "sync-linked"           : (sync_linked, opts_sync_linked),
-            "unavoid"               : (unavoid, None),
-            "unfreeze"              : (unfreeze, None),
-            "uninstall"             : (uninstall, opts_uninstall, -1),
-            "unset-authority"       : (publisher_unset, None),
-            "unset-property"        : (property_unset, None),
-            "update-format"         : (update_format, None),
-            "unset-mediator"        : (unset_mediator, opts_unset_mediator, -1),
-            "unset-publisher"       : (publisher_unset, None),
-            "update"                : (update, opts_update, -1),
-            "update-format"         : (update_format, None),
-            "variant"               : (variant_list, None),
-            "verify"                : (verify_image, None),
-            "version"               : (None, None),
-        }
-
         subcommand = None
         if pargs:
                 subcommand = pargs.pop(0)
@@ -5913,11 +6201,12 @@
                 usage(retcode=0, full=True)
         if not subcommand:
                 usage(_("no subcommand specified"))
-        if runid:
+        if runid is not None:
                 try:
                         runid = int(runid)
                 except:
                         usage(_("runid must be an integer"))
+                global_settings.client_runid = runid
 
         for opt in ["--help", "-?"]:
                 if opt in pargs:
@@ -5940,6 +6229,7 @@
                         usage(_("-R not allowed for %s subcommand") %
                               subcommand, cmd=subcommand)
                 try:
+                        pkg_timer.record("client startup", logger=logger)
                         ret = func(pargs)
                 except getopt.GetoptError, e:
                         usage(_("illegal option -- %s") % e.opt, cmd=subcommand)
@@ -5962,8 +6252,7 @@
                 return EXIT_OOPS
 
         # Get ImageInterface and image object.
-        api_inst = __api_alloc(mydir, provided_image_dir, pkg_image_used, False,
-            runid=runid)
+        api_inst = __api_alloc(mydir, provided_image_dir, pkg_image_used, False)
         if api_inst is None:
                 return EXIT_OOPS
         _api_inst = api_inst
@@ -5981,6 +6270,7 @@
 
                 opts, pargs = misc.opts_parse(subcommand, api_inst, pargs,
                     opts_cmd, pargs_limit, usage)
+                pkg_timer.record("client startup", logger=logger)
                 return func(op=subcommand, api_inst=api_inst,
                     pargs=pargs, **opts)
 
@@ -6136,6 +6426,10 @@
         warnings.simplefilter('error')
 
         __retval = handle_errors(main_func)
+        if DebugValues["timings"]:
+                def __display_timings():
+                        msg(str(pkg_timer))
+                handle_errors(__display_timings)
         try:
                 logging.shutdown()
         except IOError:
--- a/src/modules/actions/generic.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/actions/generic.py	Mon Jul 11 13:49:50 2011 -0700
@@ -118,6 +118,18 @@
 
                 return type.__new__(mcs, name, bases, dict)
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return str(obj)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                return pkg.actions.fromstr(state)
+
 
 class Action(object):
         """Class representing a generic packaging object.
@@ -707,10 +719,10 @@
                                 self.validate(fmri=fmri)
 
                         # Otherwise, the user is unknown; attempt to report why.
-                        ip = pkgplan.image.imageplan
-                        if owner in ip.removed_users:
+                        pd = pkgplan.image.imageplan.pd
+                        if owner in pd.removed_users:
                                 # What package owned the user that was removed?
-                                src_fmri = ip.removed_users[owner]
+                                src_fmri = pd.removed_users[owner]
 
                                 raise pkg.actions.InvalidActionAttributesError(
                                     self, [("owner", _("'%(path)s' cannot be "
@@ -719,7 +731,7 @@
                                     "path": path, "owner": owner,
                                     "src_fmri": src_fmri })],
                                     fmri=fmri)
-                        elif owner in ip.added_users:
+                        elif owner in pd.added_users:
                                 # This indicates an error on the part of the
                                 # caller; the user should have been added
                                 # before attempting to install the file.
@@ -750,10 +762,10 @@
 
                         # Otherwise, the group is unknown; attempt to report
                         # why.
-                        ip = pkgplan.image.imageplan
-                        if group in ip.removed_groups:
+                        pd = pkgplan.image.imageplan.pd
+                        if group in pd.removed_groups:
                                 # What package owned the group that was removed?
-                                src_fmri = ip.removed_groups[group]
+                                src_fmri = pd.removed_groups[group]
 
                                 raise pkg.actions.InvalidActionAttributesError(
                                     self, [("group", _("'%(path)s' cannot be "
@@ -762,7 +774,7 @@
                                     "path": path, "group": group,
                                     "src_fmri": src_fmri })],
                                     fmri=pkgplan.destination_fmri)
-                        elif group in ip.added_groups:
+                        elif group in pd.added_groups:
                                 # This indicates an error on the part of the
                                 # caller; the group should have been added
                                 # before attempting to install the file.
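
Serialization for actions can lean on the fact that every action
already has a canonical one-line string form: getstate() is simply
str(obj), and fromstate() is pkg.actions.fromstr().  A hedged sketch
of that round-trip contract, with a toy stand-in for the real action
parser::

    class ToyAction(object):
            """Stand-in with a parseable one-line str() form."""

            def __init__(self, name, **attrs):
                    self.name = name
                    self.attrs = attrs

            def __str__(self):
                    return " ".join([self.name] + sorted(
                        "%s=%s" % kv for kv in self.attrs.items()))

    def fromstr(state):
            """Reverse of str(); assumes values contain no spaces."""
            fields = state.split()
            attrs = dict(f.split("=", 1) for f in fields[1:])
            return ToyAction(fields[0], **attrs)

    act = ToyAction("dir", path="usr/bin", owner="root", mode="0755")
    state = str(act)                # what getstate() returns
    assert str(fromstr(state)) == state
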
--- a/src/modules/actions/license.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/actions/license.py	Mon Jul 11 13:49:50 2011 -0700
@@ -75,6 +75,9 @@
                 owner = 0
                 group = 0
 
+                # ensure "path" is initialized.  it may not be if we've loaded
+                # a plan that was previously prepared.
+                self.preinstall(pkgplan, orig)
                 path = self.attrs["path"]
 
                 stream = self.data()
--- a/src/modules/client/__init__.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/__init__.py	Mon Jul 11 13:49:50 2011 -0700
@@ -57,6 +57,29 @@
                 self.__info_log_handler = None
                 self.__error_log_handler = None
                 self.__verbose = False
+
+                # runid, used by the pkg.1 client and the linked image
+                # subsystem when generating temporary files.
+                self.client_runid = os.getpid()
+
+                # file descriptor used by ProgressTracker classes when running
+                # "pkg remote" to indicate progress back to the parent/client
+                # process.
+                self.client_output_progfd = None
+
+                # concurrency value used for linked image recursion
+                self.client_concurrency_default = 1
+                self.client_concurrency = self.client_concurrency_default
+                try:
+                        self.client_concurrency = int(os.environ.get(
+                            "PKG_CONCURRENCY",
+                            self.client_concurrency_default))
+                        # remove PKG_CONCURRENCY from the environment so child
+                        # processes don't inherit it.
+                        os.environ.pop("PKG_CONCURRENCY", None)
+                except ValueError:
+                        pass
+
                 self.client_name = None
                 self.client_args = sys.argv[:]
                 # Default maximum number of redirects received before
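
PKG_CONCURRENCY is read once by the top-level client and then removed
from the environment, so the children spawned for linked image
recursion fall back to the default instead of each fanning out again.
A small sketch of the same read-validate-pop pattern (unlike the code
above, this sketch pops the variable even when the value is
malformed)::

    import os

    DEFAULT_CONCURRENCY = 1

    def read_concurrency(environ=os.environ):
            """Read PKG_CONCURRENCY once, hiding it from children."""
            concurrency = DEFAULT_CONCURRENCY
            try:
                    concurrency = int(environ.get("PKG_CONCURRENCY",
                        DEFAULT_CONCURRENCY))
            except ValueError:
                    # malformed value; fall back to the default
                    pass
            # popped unconditionally in this sketch, so spawned
            # children never see the variable
            environ.pop("PKG_CONCURRENCY", None)
            return concurrency

    os.environ["PKG_CONCURRENCY"] = "4"
    assert read_concurrency() == 4
    assert "PKG_CONCURRENCY" not in os.environ
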
--- a/src/modules/client/actuator.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/actuator.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,12 +21,14 @@
 #
 
 #
-# Copyright (c) 2008, 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2008, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import pkg.smf as smf
 import os
 
+import pkg.misc
+
 from pkg.client.debugvalues import DebugValues
 from pkg.client.imagetypes import IMG_USER, IMG_ENTIRE
 
@@ -97,6 +99,30 @@
             "disable_fmri"      # disable this service prior to removal
         ])
 
+        __state__desc = {
+                "install": {
+                    "disable_fmri": set(),
+                    "reboot-needed": set(),
+                    "refresh_fmri": set(),
+                    "restart_fmri": set(),
+                    "suspend_fmri": set(),
+                },
+                "removal": {
+                    "disable_fmri": set(),
+                    "reboot-needed": set(),
+                    "refresh_fmri": set(),
+                    "restart_fmri": set(),
+                    "suspend_fmri": set(),
+                },
+                "update": {
+                    "disable_fmri": set(),
+                    "reboot-needed": set(),
+                    "refresh_fmri": set(),
+                    "restart_fmri": set(),
+                    "suspend_fmri": set(),
+                },
+        }
+
         def __init__(self):
                 GenericActuator.__init__(self)
                 self.suspend_fmris = None
@@ -104,6 +130,36 @@
                 self.do_nothing = True
                 self.cmd_path = ""
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return pkg.misc.json_encode(Actuator.__name__, obj.__dict__,
+                    Actuator.__state__desc, je_state=je_state)
+
+        @staticmethod
+        def setstate(obj, state, jd_state=None):
+                """Update the state of this object using previously serialized
+                state obtained via getstate()."""
+
+                # get the name of the object we're dealing with
+                name = type(obj).__name__
+
+                # decode serialized state into python objects
+                state = pkg.misc.json_decode(name, state,
+                    Actuator.__state__desc, jd_state=jd_state)
+
+                # bulk update
+                obj.__dict__.update(state)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                rv = Actuator()
+                Actuator.setstate(rv, state, jd_state)
+                return rv
+
         def __bool__(self):
                 return self.install or self.removal or self.update
 
--- a/src/modules/client/api.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/api.py	Mon Jul 11 13:49:50 2011 -0700
@@ -27,26 +27,46 @@
 """This module provides the supported, documented interface for clients to
 interface with the pkg(5) system.
 
-Refer to pkg.api_common for additional core class documentation.
+Refer to pkg.api_common and pkg.plandesc for additional core class
+documentation.
 
 Consumers should catch ApiException when calling any API function, and
 may optionally catch any subclass of ApiException for further, specific
 error handling.
 """
 
+#
+# this file is not completely pylint clean
+#
+# pylint: disable-msg=C0111,C0301,C0321,E0702,R0201,W0102
+# pylint: disable-msg=W0212,W0511,W0612,W0613,W0702
+#
+# C0111 Missing docstring
+# C0301 Line too long
+# C0321 More than one statement on a single line
+# E0702 Raising NoneType while only classes, instances or string are allowed
+# R0201 Method could be a function
+# W0102 Dangerous default value %s as argument
+# W0212 Access to a protected member %s of a client class
+# W0511 XXX
+# W0612 Unused variable '%s'
+# W0613 Unused argument '%s'
+# W0702 No exception type(s) specified
+#
+
 import collections
 import copy
 import datetime
 import errno
 import fnmatch
 import glob
-import operator
 import os
 import shutil
+import simplejson as json
 import sys
 import tempfile
+import threading
 import time
-import threading
 import urllib
 
 import pkg.client.api_errors as apx
@@ -54,10 +74,11 @@
 import pkg.client.history as history
 import pkg.client.image as image
 import pkg.client.imageconfig as imgcfg
-import pkg.client.imageplan as ip
+import pkg.client.imageplan as imageplan
 import pkg.client.imagetypes as imgtypes
 import pkg.client.indexer as indexer
 import pkg.client.pkgdefs as pkgdefs
+import pkg.client.plandesc as plandesc
 import pkg.client.publisher as publisher
 import pkg.client.query_parser as query_p
 import pkg.fmri as fmri
@@ -74,12 +95,16 @@
     _get_pkg_cat_data)
 from pkg.client import global_settings
 from pkg.client.debugvalues import DebugValues
-
-from pkg.client.pkgdefs import *
+from pkg.client.pkgdefs import * # pylint: disable-msg=W0401
 from pkg.smf import NonzeroExitException
 
-CURRENT_API_VERSION = 71
-COMPATIBLE_API_VERSIONS = frozenset([66, 67, 68, 69, 70, CURRENT_API_VERSION])
+# we import PlanDescription here even though it isn't used so that consumers
+# of the api still have access to the class definition and are able to do
+# things like help(pkg.client.api.PlanDescription)
+from pkg.client.plandesc import PlanDescription # pylint: disable-msg=W0611
+
+CURRENT_API_VERSION = 72
+COMPATIBLE_API_VERSIONS = frozenset([CURRENT_API_VERSION])
 CURRENT_P5I_VERSION = 1
 
 # Image type constants.
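
Since version 72 is incompatible with all earlier clients,
COMPATIBLE_API_VERSIONS collapses to the current version alone, and a
consumer constructed with a stale version_id is refused immediately.
A sketch of that gate (the local VersionException mirrors, but is not,
pkg.client.api_errors)::

    CURRENT_API_VERSION = 72
    COMPATIBLE_API_VERSIONS = frozenset([CURRENT_API_VERSION])

    class VersionException(Exception):
            """Raised when a consumer was built against an old API."""

            def __init__(self, expected, received):
                    Exception.__init__(self)
                    self.expected_version = expected
                    self.received_version = received

    def check_version(version_id):
            if version_id not in COMPATIBLE_API_VERSIONS:
                    raise VersionException(CURRENT_API_VERSION,
                        version_id)

    check_version(72)           # accepted
    try:
            check_version(71)   # any pre-72 client is refused
    except VersionException:
            pass
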
@@ -201,10 +226,17 @@
                                                 instance._disable_cancel()
                                         except apx.CanceledException:
                                                 instance._cancel_done()
+                                                # if f() acquired the image
+                                                # lock, drop it
+                                                if instance._img.locked:
+                                                        instance._img.unlock()
                                                 instance._activity_lock.release()
                                                 raise
                                 else:
                                         instance._cancel_cleanup_exception()
+                                # if f() acquired the image lock, drop it
+                                if instance._img.locked:
+                                        instance._img.unlock()
                                 instance._activity_lock.release()
 
                 return wrapper
@@ -236,49 +268,9 @@
         MATCH_FMRI = 1
         MATCH_GLOB = 2
 
-        # Private constants used for tracking which type of plan was made.
-        __INSTALL     = "install"
-        __MEDIATORS   = "mediators"
-        __REVERT      = "revert"
-        __SYNC        = "sync"
-        __UNINSTALL   = "uninstall"
-        __UPDATE      = "update"
-        __VARCET      = "varcet"
-        __plan_values = frozenset([
-            __INSTALL,
-            __MEDIATORS,
-            __REVERT,
-            __SYNC,
-            __UNINSTALL,
-            __UPDATE,
-            __VARCET,
-        ])
-
-        __api_op_2_plan = {
-            API_OP_ATTACH:         __SYNC,
-            API_OP_SET_MEDIATOR:   __MEDIATORS,
-            API_OP_CHANGE_FACET:   __VARCET,
-            API_OP_CHANGE_VARIANT: __VARCET,
-            API_OP_DETACH:         __UNINSTALL,
-            API_OP_INSTALL:        __INSTALL,
-            API_OP_REVERT:         __REVERT,
-            API_OP_SYNC:           __SYNC,
-            API_OP_UNINSTALL:      __UNINSTALL,
-            API_OP_UPDATE:         __UPDATE,
-        }
-
-        __stage_2_ip_mode = {
-            API_STAGE_DEFAULT:  ip.IP_MODE_DEFAULT,
-            API_STAGE_PUBCHECK: ip.IP_MODE_SAVE,
-            API_STAGE_PLAN:     ip.IP_MODE_SAVE,
-            API_STAGE_PREPARE:  ip.IP_MODE_LOAD,
-            API_STAGE_EXECUTE:  ip.IP_MODE_LOAD,
-        }
-
-
         def __init__(self, img_path, version_id, progresstracker,
             cancel_state_callable, pkg_client_name, exact_match=True,
-            cmdpath=None, runid=-1):
+            cmdpath=None):
                 """Constructs an ImageInterface object.
 
                 'img_path' is the absolute path to an existing image or to a
@@ -337,10 +329,6 @@
                 if global_settings.client_name is None:
                         global_settings.client_name = pkg_client_name
 
-                if runid < 0:
-                        runid = os.getpid()
-                self.__runid = runid
-
                 if cmdpath == None:
                         cmdpath = misc.api_cmdpath()
                 self.cmdpath = cmdpath
@@ -362,8 +350,7 @@
                         self._img = image.Image(img_path,
                             progtrack=progresstracker,
                             user_provided_dir=exact_match,
-                            cmdpath=self.cmdpath,
-                            runid=runid)
+                            cmdpath=self.cmdpath)
 
                         # Store final image path.
                         self._img_path = self._img.get_root()
@@ -383,7 +370,6 @@
                 self.__progresstracker.set_linked_name(lin)
 
                 self.__cancel_state_callable = cancel_state_callable
-                self.__stage = API_STAGE_DEFAULT
                 self.__plan_type = None
                 self.__api_op = None
                 self.__plan_desc = None
@@ -423,6 +409,11 @@
                 else:
                         self._img.set_alt_pkg_sources(None)
 
+        @_LockedCancelable()
+        def set_alt_repos(self, repos):
+                """Public function to specify alternate package sources."""
+                self.__set_img_alt_sources(repos)
+
         blocking_locks = property(lambda self: self.__blocking_locks,
             __set_blocking_locks, doc="A boolean value indicating whether "
             "the API should wait until the image interface can be locked if "
@@ -461,6 +452,32 @@
                 return self._img.is_zone()
 
         @property
+        def is_active_liveroot_be(self):
+                """A boolean indicating whether the image to be modified is
+                the active BE for the system's root image."""
+
+                if not self._img.is_liveroot() or self._img.is_zone():
+                        return False
+
+                try:
+                        be_name, be_uuid = bootenv.BootEnv.get_be_name(
+                            self._img.root)
+                        return be_name == \
+                            bootenv.BootEnv.get_activated_be_name()
+                except apx.BEException:
+                        # If boot environment logic isn't supported, return
+                        # False.  This is necessary for user images and for
+                        # the test suite.
+                        return False
+
+        @property
+        def img_plandir(self):
+                """A path to the image planning directory."""
+                plandir = self._img.plandir
+                misc.makedirs(plandir)
+                return plandir
+
+        @property
         def last_modified(self):
                 """A datetime object representing when the image's metadata was
                 last updated."""
@@ -543,14 +560,6 @@
                 bootenv.BootEnv.check_be_name(be_name)
                 return True
 
-        def set_stage(self, stage):
-                """Tell the api which stage of execution we're in.  This is
-                used when executing in child images during recursive linked
-                operations."""
-
-                assert stage in api_stage_values
-                self.__stage = stage
-
         def __cert_verify(self, log_op_end=None):
                 """Verify validity of certificates.  Any apx.ExpiringCertificate
                 exceptions are caught here, a message is displayed, and
@@ -869,9 +878,10 @@
                 # information in the plan.  We have to save it here and restore
                 # it later because __reset_unlock() torches it.
                 if exc_type == apx.ConflictingActionErrors:
-                        plan_desc = PlanDescription(self._img, self.__backup_be,
+                        self._img.imageplan.set_be_options(self.__backup_be,
                             self.__backup_be_name, self.__new_be,
                             self.__be_activate, self.__be_name)
+                        plan_desc = self._img.imageplan.describe()
 
                 self.__reset_unlock()
 
@@ -929,12 +939,12 @@
 
                 raise apx.IpkgOutOfDateException()
 
-        def __plan_op(self, _op, _accept=False, _ad_kwargs=None,
+        def __plan_op(self, _op, _ad_kwargs=None,
             _backup_be=None, _backup_be_name=None, _be_activate=True,
             _be_name=None, _ipkg_require_latest=False, _li_ignore=None,
             _li_md_only=False, _li_parent_sync=True, _new_be=False,
-            _noexecute=False, _refresh_catalogs=True, _repos=None,
-            _update_index=True, **kwargs):
+            _noexecute=False, _pubcheck=True, _refresh_catalogs=True,
+            _repos=None, _update_index=True, **kwargs):
                 """Contructs a plan to change the package or linked image
                 state of an image.
 
@@ -978,26 +988,21 @@
                 # make some perf optimizations
                 if _li_md_only:
                         _refresh_catalogs = _update_index = False
-                if self.__stage not in [API_STAGE_DEFAULT, API_STAGE_PUBCHECK]:
+                if _op in [API_OP_DETACH, API_OP_SET_MEDIATOR]:
+                        # these operations don't change fmris and don't need
+                        # to recurse, so disable a bunch of linked image
+                        # operations.
                         _li_parent_sync = False
-                if self.__stage not in [API_STAGE_DEFAULT, API_STAGE_PLAN]:
-                        _refresh_catalogs = False
-                        _ipkg_require_latest = False
-
-                # if we have any children we don't support operations using
-                # temporary repositories.
-                if _repos and self._img.linked.list_related(_li_ignore):
-                        raise apx.PlanCreationException(no_tmp_origins=True)
-
-                # All the image interface functions that we inovke have some
+                        _pubcheck = False
+                        _li_ignore = [] # ignore all children
+
+                # All the image interface functions that we invoke have some
                 # common arguments.  Set those up now.
                 args_common = {}
                 args_common["op"] = _op
                 args_common["progtrack"] = self.__progresstracker
                 args_common["check_cancel"] = self.__check_cancel
                 args_common["noexecute"] = _noexecute
-                args_common["ip_mode"] = self.__stage_2_ip_mode[self.__stage]
-
 
                 # make sure there is no overlap between the common arguments
                 # supplied to all api interfaces and the arguments that the
@@ -1012,35 +1017,22 @@
                     _backup_be_name, _new_be, _be_name, _be_activate)
 
                 try:
-                        # reset any child recursion state we might have
-                        self._img.linked.reset_recurse()
-
-                        # prepare to recurse into child images
-                        self._img.linked.init_recurse(_op, _li_ignore,
-                            _accept, _refresh_catalogs,
-                            _update_index, kwargs)
-
                         if _op == API_OP_ATTACH:
                                 self._img.linked.attach_parent(**_ad_kwargs)
                         elif _op == API_OP_DETACH:
                                 self._img.linked.detach_parent(**_ad_kwargs)
 
                         if _li_parent_sync:
-                                # try to refresh linked image
-                                # constraints from the parent image.
-                                self._img.linked.syncmd_from_parent(_op)
-
-                        if self.__stage in [API_STAGE_DEFAULT,
-                            API_STAGE_PUBCHECK]:
-
-                                # do a linked image publisher check
-                                self._img.linked.check_pubs(_op)
-                                self._img.linked.do_recurse(API_STAGE_PUBCHECK)
-
-                                if self.__stage == API_STAGE_PUBCHECK:
-                                        # If this was just a publisher check
-                                        # then return immediately.
-                                        return
+                                # refresh linked image data from parent image.
+                                self._img.linked.syncmd_from_parent(api_op=_op)
+
+                        # initialize recursion state
+                        self._img.linked.api_recurse_init(
+                                li_ignore=_li_ignore, repos=_repos)
+
+                        if _pubcheck:
+                                # check that linked image pubs are in sync
+                                self.__linked_pubcheck(_op)
 
                         if _refresh_catalogs:
                                 self.__refresh_publishers()
@@ -1074,9 +1066,6 @@
                                 raise RuntimeError("Unknown api op: %s" % _op)
 
                         self.__api_op = _op
-                        self.__accept = _accept
-                        if not _noexecute:
-                                self.__plan_type = self.__api_op_2_plan[_op]
 
                         if self._img.imageplan.nothingtodo():
                                 # no package changes mean no index changes
@@ -1084,9 +1073,12 @@
 
                         self._disable_cancel()
                         self.__set_be_creation()
-                        self.__plan_desc = PlanDescription(self._img,
+                        self._img.imageplan.set_be_options(
                             self.__backup_be, self.__backup_be_name,
                             self.__new_be, self.__be_activate, self.__be_name)
+                        self.__plan_desc = self._img.imageplan.describe()
+                        if not _noexecute:
+                                self.__plan_type = self.__plan_desc.plan_type
 
                         # Yield to our caller so they can display our plan
                         # before we recurse into child images.  Drop the
@@ -1101,11 +1093,13 @@
                         # either a dictionary representing the parsable output
                         # from the child image operation, or None.  Eventually
                         # these will yield plan descriptions objects instead.
-                        if self.__stage in [API_STAGE_DEFAULT, API_STAGE_PLAN]:
-                                plans = self._img.linked.do_recurse(
-                                    API_STAGE_PLAN, ip=self._img.imageplan)
-                                for rv, p_dict in plans:
-                                        yield p_dict
+                        for p_dict in self._img.linked.api_recurse_plan(
+                            api_kwargs=kwargs,
+                            refresh_catalogs=_refresh_catalogs,
+                            update_index=_update_index,
+                            progtrack=self.__progresstracker):
+                                yield p_dict
+
                         self.__planned_children = True
 
                 except:
@@ -1129,6 +1123,105 @@
                 self._img.imageplan.update_index = _update_index
                 self.__plan_common_finish()
 
+                if DebugValues["plandesc_validate"]:
+                        # save, load, and get a new json copy of the plan,
+                        # then compare that new copy against our current one.
+                        # this regression-tests the plan save/load code.
+                        pd_json1 = self.__plan_desc.getstate(self.__plan_desc,
+                            reset_volatiles=True)
+                        fobj = tempfile.TemporaryFile()
+                        json.dump(pd_json1, fobj, encoding="utf-8")
+                        pd_new = plandesc.PlanDescription(_op)
+                        pd_new._load(fobj)
+                        pd_json2 = pd_new.getstate(pd_new, reset_volatiles=True)
+                        del fobj, pd_new
+                        pkg.misc.json_diff("PlanDescription",
+                            pd_json1, pd_json2)
+                        del pd_json1, pd_json2
+
+        @_LockedCancelable()
+        def load_plan(self, plan, prepared=False):
+                """Load a previously generated PlanDescription."""
+
+                # Prevent loading a plan if one has already been loaded.
+                if self.__plan_type is not None:
+                        raise apx.PlanExistsException(self.__plan_type)
+
+                # grab image lock.  we don't worry about dropping the image
+                # lock since __activity_lock will drop it for us after we
+                # return (or if we generate an exception).
+                self._img.lock()
+
+                # load the plan
+                self.__plan_desc = plan
+                self.__plan_type = plan.plan_type
+                self.__planned_children = True
+                self.__prepared = prepared
+
+                # load BE related plan settings
+                self.__new_be = plan.new_be
+                self.__be_activate = plan.activate_be
+                self.__be_name = plan.be_name
+
+                # sanity check: verify the BE name
+                if self.__be_name is not None:
+                        self.check_be_name(self.__be_name)
+                        if not self._img.is_liveroot():
+                                raise apx.BENameGivenOnDeadBE(self.__be_name)
+
+                # sanity check: verify that all the fmris in the plan are in
+                # the known catalog
+                pkg_cat = self._img.get_catalog(self._img.IMG_CATALOG_KNOWN)
+                for pp in plan.pkg_plans:
+                        if pp.destination_fmri:
+                                assert pkg_cat.get_entry(pp.destination_fmri), \
+                                     "fmri part of plan, but currently " \
+                                     "unknown: %s" % pp.destination_fmri
+
+                # allocate an image plan based on the supplied plan
+                self._img.imageplan = imageplan.ImagePlan(self._img, plan._op,
+                    self.__progresstracker, check_cancel=self.__check_cancel,
+                    pd=plan)
+
+                if prepared:
+                        self._img.imageplan.skip_preexecute()
+
+                # create a history entry
+                self.log_operation_start(plan.plan_type)
+
+        def __linked_pubcheck(self, api_op=None):
+                """Private interface to perform publisher check on this image
+                and its children."""
+
+                if api_op in [API_OP_DETACH, API_OP_SET_MEDIATOR]:
+                        # we don't need to do a pubcheck for detach or
+                        # changing mediators
+                        return
+
+                # check the current image
+                self._img.linked.pubcheck()
+
+                # check child images
+                self._img.linked.api_recurse_pubcheck()
+
+        @_LockedCancelable()
+        def linked_publisher_check(self):
+                """If we're a child image, verify that the parent image's
+                publisher configuration is a subset of the child image's
+                publisher configuration.  If we have any children, recurse
+                into them and perform a publisher check."""
+
+                # grab image lock.  we don't worry about dropping the image
+                # lock since __activity_lock will drop it for us after we
+                # return (or if we generate an exception).
+                self._img.lock(allow_unprivileged=True)
+
+                # get ready to recurse
+                self._img.linked.api_recurse_init()
+
+                # check that linked image pubs are in sync
+                self.__linked_pubcheck()
+
         def planned_nothingtodo(self, li_ignore_all=False):
                 """Once an operation has been planned check if there is
                 something todo.
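
The plandesc_validate debug hook above regression-tests the new
save/load path: serialize the freshly built plan to JSON, load it into
a fresh PlanDescription, and require the two states to match.  The
same round-trip is what allows load_plan() to resume a plan produced
by a different process.  A toy version of the check (ToyPlan and its
getstate()/setstate() are stand-ins for pkg.client.plandesc)::

    import json
    import tempfile

    class ToyPlan(object):
            """Minimal stand-in for PlanDescription state handling."""

            def __init__(self, op):
                    self._op = op
                    self.pkg_plans = []

            def getstate(self):
                    return {"_op": self._op,
                        "pkg_plans": self.pkg_plans}

            def setstate(self, state):
                    self.__dict__.update(state)

    plan = ToyPlan("update")
    plan.pkg_plans.append("pkg://solaris/web/wget")

    # save, reload into a fresh object, then diff the two states
    fobj = tempfile.TemporaryFile(mode="w+")
    json.dump(plan.getstate(), fobj)
    fobj.seek(0)
    copy = ToyPlan("update")
    copy.setstate(json.load(fobj))
    assert plan.getstate() == copy.getstate()
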
@@ -1142,11 +1235,6 @@
                 li_ignore_all is true, then we'll report that there's nothing
                 todo."""
 
-                if self.__stage == API_STAGE_PUBCHECK:
-                        # if this was just a publisher check then report
-                        # that there is something todo so we continue with
-                        # the operation.
-                        return False
                 if not self._img.imageplan:
                         # if theres no plan there nothing to do
                         return True
@@ -1185,11 +1273,12 @@
                         continue
                 return (not self.planned_nothingtodo(), self.solaris_image())
 
-        def gen_plan_update(self, pkgs_update=None, accept=False,
-            backup_be=None, backup_be_name=None, be_activate=True,
-            be_name=None, force=False, li_ignore=None, li_parent_sync=True,
-            new_be=True, noexecute=False, refresh_catalogs=True,
+        def gen_plan_update(self, pkgs_update=None, backup_be=None,
+            backup_be_name=None, be_activate=True, be_name=None,
+            force=False, li_ignore=None, li_parent_sync=True, new_be=True,
+            noexecute=False, pubcheck=True, refresh_catalogs=True,
             reject_list=misc.EmptyI, repos=None, update_index=True):
+
                 """This is a generator function that yields a PlanDescription
                 object.  If parsable_version is set, it also yields dictionaries
                 containing plan information for child images.
@@ -1212,6 +1301,11 @@
                 'force' indicates whether update should skip the package
                 system up to date check.
 
+                'pubcheck' indicates whether we should perform the child
+                image publisher check before creating a plan for this
+                image.  Only pkg.1 should ever set this to False; other
+                callers should never specify it.
+
                 For all other parameters, refer to the 'gen_plan_install'
                 function for an explanation of their usage and effects."""
 
@@ -1221,12 +1315,12 @@
                         ipkg_require_latest = True
 
                 op = API_OP_UPDATE
-                return self.__plan_op(op, _accept=accept,
+                return self.__plan_op(op,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _ipkg_require_latest=ipkg_require_latest,
                     _li_ignore=li_ignore, _li_parent_sync=li_parent_sync,
-                    _new_be=new_be, _noexecute=noexecute,
+                    _new_be=new_be, _noexecute=noexecute, _pubcheck=pubcheck,
                     _refresh_catalogs=refresh_catalogs, _repos=repos,
                     _update_index=update_index, pkgs_update=pkgs_update,
                     reject_list=reject_list)
@@ -1244,7 +1338,7 @@
                         continue
                 return not self.planned_nothingtodo()
 
-        def gen_plan_install(self, pkgs_inst, accept=False, backup_be=None,
+        def gen_plan_install(self, pkgs_inst, backup_be=None,
             backup_be_name=None, be_activate=True, be_name=None, li_ignore=None,
             li_parent_sync=True, new_be=False, noexecute=False,
             refresh_catalogs=True, reject_list=misc.EmptyI, repos=None,
@@ -1272,7 +1366,7 @@
                 are being installed and tagged with reboot-needed, a backup
                 boot environment will be created.
 
-                'backup_be_name' is a string to use as the name of any backup 
+                'backup_be_name' is a string to use as the name of any backup
                 boot environment created during the operation.
 
                 'be_name' is a string to use as the name of any new boot
@@ -1325,7 +1419,7 @@
                 assert pkgs_inst and type(pkgs_inst) == list
 
                 op = API_OP_INSTALL
-                return self.__plan_op(op, _accept=accept,
+                return self.__plan_op(op,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _li_parent_sync=li_parent_sync,
@@ -1334,11 +1428,10 @@
                     _update_index=update_index, pkgs_inst=pkgs_inst,
                     reject_list=reject_list)
 
-        def gen_plan_sync(self, accept=False, backup_be=None,
-            backup_be_name=None, be_activate=True, be_name=None,
-            li_ignore=None, li_md_only=False, li_parent_sync=True,
-            li_pkg_updates=True, new_be=False, noexecute=False,
-            refresh_catalogs=True,
+        def gen_plan_sync(self, backup_be=None, backup_be_name=None,
+            be_activate=True, be_name=None, li_ignore=None, li_md_only=False,
+            li_parent_sync=True, li_pkg_updates=True, new_be=False,
+            noexecute=False, pubcheck=True, refresh_catalogs=True,
             reject_list=misc.EmptyI, repos=None, update_index=True):
                 """This is a generator function that yields a PlanDescription
                 object.  If parsable_version is set, it also yields dictionaries
@@ -1364,28 +1457,31 @@
                 (other than the constraints package) need updating to bring
                 the image in sync with its parent.
 
-                For all other parameters, refer to the 'gen_plan_install'
-                function for an explanation of their usage and effects."""
-
-                # verify that the current image is a linked image by trying to
-                # access its name.
-                self._img.linked.child_name
+                For all other parameters, refer to 'gen_plan_install' and
+                'gen_plan_update' for an explanation of their usage and
+                effects."""
+
+                # we should only be invoked on a child image.
+                if not self.ischild():
+                        raise apx.LinkedImageException(
+                            self_not_child=self._img_path)
 
                 op = API_OP_SYNC
-                return self.__plan_op(op, _accept=accept,
+                return self.__plan_op(op,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _li_md_only=li_md_only,
                     _li_parent_sync=li_parent_sync, _new_be=new_be,
-                    _noexecute=noexecute, _refresh_catalogs=refresh_catalogs,
-                    _repos=repos, _update_index=update_index,
+                    _noexecute=noexecute, _pubcheck=pubcheck,
+                    _refresh_catalogs=refresh_catalogs,
+                    _repos=repos,
+                    _update_index=update_index,
                     li_pkg_updates=li_pkg_updates, reject_list=reject_list)
 
-        def gen_plan_attach(self, lin, li_path, accept=False,
-            allow_relink=False, backup_be=None, backup_be_name=None,
-            be_activate=True, be_name=None, force=False, li_ignore=None,
-            li_md_only=False, li_pkg_updates=True, li_props=None, new_be=False,
-            noexecute=False, refresh_catalogs=True,
+        def gen_plan_attach(self, lin, li_path, allow_relink=False,
+            backup_be=None, backup_be_name=None, be_activate=True, be_name=None,
+            force=False, li_ignore=None, li_md_only=False, li_pkg_updates=True,
+            li_props=None, new_be=False, noexecute=False, refresh_catalogs=True,
             reject_list=misc.EmptyI, repos=None, update_index=True):
                 """This is a generator function that yields a PlanDescription
                 object.  If parsable_version is set, it also yields dictionaries
@@ -1427,7 +1523,7 @@
                     "path": li_path,
                     "props": li_props,
                 }
-                return self.__plan_op(op, _accept=accept,
+                return self.__plan_op(op,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _li_md_only=li_md_only,
@@ -1436,7 +1532,7 @@
                     _update_index=update_index, _ad_kwargs=ad_kwargs,
                     li_pkg_updates=li_pkg_updates, reject_list=reject_list)
 
-        def gen_plan_detach(self, accept=False, backup_be=None,
+        def gen_plan_detach(self, backup_be=None,
             backup_be_name=None, be_activate=True, be_name=None, force=False,
             li_ignore=None, new_be=False, noexecute=False):
                 """This is a generator function that yields a PlanDescription
@@ -1460,7 +1556,7 @@
                 ad_kwargs = {
                     "force": force
                 }
-                return self.__plan_op(op, _accept=accept, _ad_kwargs=ad_kwargs,
+                return self.__plan_op(op, _ad_kwargs=ad_kwargs,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _new_be=new_be,
@@ -1476,7 +1572,7 @@
                         continue
                 return not self.planned_nothingtodo()
 
-        def gen_plan_uninstall(self, pkgs_to_uninstall, accept=False,
+        def gen_plan_uninstall(self, pkgs_to_uninstall,
             backup_be=None, backup_be_name=None, be_activate=True,
             be_name=None, li_ignore=None, new_be=False, noexecute=False,
             update_index=True):
@@ -1502,12 +1598,13 @@
                 assert pkgs_to_uninstall and type(pkgs_to_uninstall) == list
 
                 op = API_OP_UNINSTALL
-                return self.__plan_op(op, _accept=accept,
+                return self.__plan_op(op,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _li_parent_sync=False,
                     _new_be=new_be, _noexecute=noexecute,
-                    _refresh_catalogs=False, _update_index=update_index,
+                    _refresh_catalogs=False,
+                    _update_index=update_index,
                     pkgs_to_uninstall=pkgs_to_uninstall)
 
         def gen_plan_set_mediators(self, mediators, backup_be=None,
@@ -1552,7 +1649,7 @@
                 function for an explanation of their usage and effects."""
 
                 assert mediators
-                return self.__plan_op(API_OP_SET_MEDIATOR, _accept=True,
+                return self.__plan_op(API_OP_SET_MEDIATOR,
                     _backup_be=backup_be, _backup_be_name=backup_be_name,
                     _be_activate=be_activate, _be_name=be_name,
                     _li_ignore=li_ignore, _li_parent_sync=li_parent_sync,
@@ -1571,10 +1668,10 @@
                 return not self.planned_nothingtodo()
 
         def gen_plan_change_varcets(self, facets=None, variants=None,
-            accept=False, backup_be=None, backup_be_name=None,
-            be_activate=True, be_name=None, li_ignore=None, li_parent_sync=True,
-            new_be=None, noexecute=False, refresh_catalogs=True,
-            reject_list=misc.EmptyI, repos=None, update_index=True):
+            backup_be=None, backup_be_name=None, be_activate=True, be_name=None,
+            li_ignore=None, li_parent_sync=True, new_be=None, noexecute=False,
+            refresh_catalogs=True, reject_list=misc.EmptyI, repos=None,
+            update_index=True):
                 """This is a generator function that yields a PlanDescription
                 object.  If parsable_version is set, it also yields dictionaries
                 containing plan information for child images.
@@ -1605,12 +1702,13 @@
                 else:
                         op = API_OP_CHANGE_FACET
 
-                return self.__plan_op(op, _accept=accept, _backup_be=backup_be,
+                return self.__plan_op(op, _backup_be=backup_be,
                     _backup_be_name=backup_be_name, _be_activate=be_activate,
                     _be_name=be_name, _li_ignore=li_ignore,
                     _li_parent_sync=li_parent_sync, _new_be=new_be,
                     _noexecute=noexecute, _refresh_catalogs=refresh_catalogs,
-                    _repos=repos, _update_index=update_index, variants=variants,
+                    _repos=repos,
+                    _update_index=update_index, variants=variants,
                     facets=facets, reject_list=reject_list)
 
         def plan_revert(self, args, tagged=False, noexecute=True, be_name=None,
@@ -1664,9 +1762,6 @@
                 'li_props' optional linked image properties to apply to the
                 child image.
 
-                'accept' indicates whether we should accept package licenses
-                for any packages being installed during the child image sync.
-
                 'allow_relink' indicates whether we should allow linking of a
                 child image that is already linked (the child may already
                 be a child or a parent image).
@@ -1691,10 +1786,6 @@
                 in solution; installed packages matching these patterns
                 are removed.
 
-                'show_licenses' indicates whether we should display package
-                licenses for any packages being installed during the child
-                image sync.
-
                 'update_index' determines whether client search indexes will
                 be updated in the child after the sync operation completes.
 
@@ -1732,8 +1823,7 @@
                 error."""
 
                 return self._img.linked.detach_children(li_list,
-                    force=force, noexecute=noexecute,
-                    progtrack=self.__progresstracker)
+                    force=force, noexecute=noexecute)
 
         def detach_linked_rvdict2rv(self, rvdict):
                 """Convenience function that takes a dictionary returned from
@@ -1809,6 +1899,10 @@
                 """Indicates whether the current image is a child image."""
                 return self._img.linked.ischild()
 
+        def isparent(self, li_ignore=None):
+                """Indicates whether the current image is a parent image."""
+                return self._img.linked.isparent(li_ignore)
+
         @staticmethod
         def __utc_format(time_str, utc_now):
                 """Given a local time value string, formatted with
@@ -1867,8 +1961,8 @@
         def __get_history_range(self, start, finish):
                 """Given a start and finish date, formatted as UTC date strings
                 as per __utc_format(), return a list of history filenames that
-                fall within that date range.  A range of two equal dates is 
-                equivalent of just retrieving history for that single date
+                fall within that date range.  A range of two equal dates is
+                the equivalent of just retrieving history for that single date
                 string."""
 
                 entries = []
@@ -2047,8 +2141,6 @@
 
                         if self.__prepared:
                                 raise apx.AlreadyPreparedException()
-                        assert self.__plan_type in self.__plan_values, \
-                            "self.__plan_type = %s" % self.__plan_type
 
                         self._enable_cancel()
 
@@ -2098,8 +2190,7 @@
                                 pass
                         self._activity_lock.release()
 
-                if self.__stage in [API_STAGE_DEFAULT, API_STAGE_PREPARE]:
-                        self._img.linked.do_recurse(API_STAGE_PREPARE)
+                self._img.linked.api_recurse_prepare(self.__progresstracker)
 
         def execute_plan(self):
                 """Executes the plan. This is uncancelable once it begins.
@@ -2124,9 +2215,6 @@
                         if self.__executed:
                                 raise apx.AlreadyExecutedException()
 
-                        assert self.__plan_type in self.__plan_values, \
-                            "self.__plan_type = %s" % self.__plan_type
-
                         try:
                                 be = bootenv.BootEnv(self._img)
                         except RuntimeError:
@@ -2243,9 +2331,8 @@
                                 self.log_operation_end(error=exc_type)
                                 raise
 
-                        if self.__stage in \
-                            [API_STAGE_DEFAULT, API_STAGE_EXECUTE]:
-                                self._img.linked.do_recurse(API_STAGE_EXECUTE)
+                        self._img.linked.api_recurse_execute(
+                            self.__progresstracker)
 
                         self.__finished_execution(be)
                         if raise_later:
@@ -2258,7 +2345,7 @@
                         self._activity_lock.release()
 
         def __finished_execution(self, be):
-                if self._img.imageplan.state != ip.EXECUTED_OK:
+                if self._img.imageplan.state != plandesc.EXECUTED_OK:
                         if self.__new_be == True:
                                 be.restore_image()
                         else:
@@ -2314,7 +2401,7 @@
                         if not self._img.imageplan:
                                 raise apx.PlanMissingException()
 
-                        for pp in self._img.imageplan.pkg_plans:
+                        for pp in self.__plan_desc.pkg_plans:
                                 if pp.destination_fmri == pfmri:
                                         pp.set_license_status(plicense,
                                             accepted=accepted,
@@ -3858,13 +3945,13 @@
 
                 self._img.cleanup_downloads()
                 self._img.transport.shutdown()
+
                 # Recreate the image object using the path the api
                 # object was created with instead of the current path.
                 self._img = image.Image(self._img_path,
                     progtrack=self.__progresstracker,
                     user_provided_dir=True,
-                    cmdpath=self.cmdpath,
-                    runid=self.__runid)
+                    cmdpath=self.cmdpath)
                 self._img.blocking_locks = self.__blocking_locks
 
                 lin = None
@@ -4863,227 +4950,6 @@
                     num_to_return, start_point)
 
 
-class PlanDescription(object):
-        """A class which describes the changes the plan will make."""
-
-        def __init__(self, img, backup_be, backup_be_name, new_be, be_activate,
-            be_name):
-                self.__plan = img.imageplan
-                self._img = img
-                self.__backup_be = backup_be
-                self.__backup_be_name = backup_be_name
-                self.__new_be = new_be
-                self.__be_activate = be_activate
-                self.__be_name = be_name
-
-        def get_services(self):
-                """Returns a list of services affected in this plan."""
-                return self.__plan.services
-
-        def get_mediators(self):
-                """Returns a list of strings contianing mediator changes in this
-                plan"""
-                return self.__plan.mediators_to_strings()
-
-        def get_parsable_mediators(self):
-                """Returns a list of mediator changes in this plan"""
-                return self.__plan.mediators
-        
-        def get_varcets(self):
-                """Returns a formatted list of strings representing the
-                variant/facet changes in this plan"""
-                vs, fs = self.__plan.varcets
-                ret = []
-                ret.extend(["variant %s: %s" % a for a in vs])
-                ret.extend(["  facet %s: %s" % a for a in fs])
-                return ret
-
-        def get_parsable_varcets(self):
-                """Returns a tuple of two lists containing the facet and variant
-                changes in this plan."""
-                return self.__plan.varcets
-
-        def get_changes(self):
-                """A generation function that yields tuples of PackageInfo
-                objects of the form (src_pi, dest_pi).
-
-                If 'src_pi' is None, then 'dest_pi' is the package being
-                installed.
-
-                If 'src_pi' is not None, and 'dest_pi' is None, 'src_pi'
-                is the package being removed.
-
-                If 'src_pi' is not None, and 'dest_pi' is not None,
-                then 'src_pi' is the original version of the package,
-                and 'dest_pi' is the new version of the package it is
-                being upgraded to."""
-
-                for pp in sorted(self.__plan.pkg_plans,
-                    key=operator.attrgetter("origin_fmri", "destination_fmri")):
-                        yield (PackageInfo.build_from_fmri(pp.origin_fmri),
-                            PackageInfo.build_from_fmri(pp.destination_fmri))
-
-        def get_actions(self):
-                """A generator function that returns action changes for all
-                the package plans"""
-                for a in self.__plan.gen_verbose_strs():
-                        yield(a)
-
-        def get_licenses(self, pfmri=None):
-                """A generator function that yields information about the
-                licenses related to the current plan in tuples of the form
-                (dest_fmri, src, dest, accepted, displayed) for the given
-                package FMRI or all packages in the plan.  This is only
-                available for licenses that are being installed or updated.
-
-                'dest_fmri' is the FMRI of the package being installed.
-
-                'src' is a LicenseInfo object if the license of the related
-                package is being updated; otherwise it is None.
-
-                'dest' is the LicenseInfo object for the license that is being
-                installed.
-
-                'accepted' is a boolean value indicating that the license has
-                been marked as accepted for the current plan.
-
-                'displayed' is a boolean value indicating that the license has
-                been marked as displayed for the current plan."""
-
-                for pp in self.__plan.pkg_plans:
-                        dfmri = pp.destination_fmri
-                        if pfmri and dfmri != pfmri:
-                                continue
-
-                        for lid, entry in pp.get_licenses():
-                                src = entry["src"]
-                                src_li = None
-                                if src:
-                                        src_li = LicenseInfo(pp.origin_fmri,
-                                            src, img=self._img)
-
-                                dest = entry["dest"]
-                                dest_li = None
-                                if dest:
-                                        dest_li = LicenseInfo(
-                                            pp.destination_fmri, dest,
-                                            img=self._img)
-
-                                yield (pp.destination_fmri, src_li, dest_li,
-                                    entry["accepted"], entry["displayed"])
-
-                        if pfmri:
-                                break
-
-        def get_salvaged(self):
-                """Returns a list of tuples of items that were salvaged during
-                plan execution.  Each tuple is of the form (original_path,
-                salvage_path).  Where 'original_path' is the path of the item
-                before it was salvaged, and 'salvage_path' is where the item was
-                moved to.  This method only has useful information after plan
-                execution."""
-
-                if self.__plan.state not in (ip.EXECUTED_OK, ip.EXECUTED_ERROR):
-                        # Return an empty list so that the type matches with
-                        # self.__plan.salvaged.
-                        return []
-                return copy.copy(self.__plan.salvaged)
-
-        def get_solver_errors(self):
-                """Returns a list of strings for all FMRIs evaluated by the
-                solver explaining why they were rejected.  (All packages
-                found in solver's trim database.)  Only available if
-                DebugValues["plan"] was set when the plan was created.
-                """
-
-                if not DebugValues["plan"]:
-                        return []
-
-                return self.__plan.get_solver_errors()
-
-        @property
-        def activate_be(self):
-                """A boolean value indicating whether any new boot environment
-                will be set active on next boot."""
-                return self.__be_activate
-
-        @property
-        def backup_be(self):
-                """A boolean value indicating that execution of the plan will
-                result in a backup clone of the current live environment."""
-                return self.__backup_be
-
-        @property
-        def backup_be_name(self):
-                """A value containing either the name of the backup boot
-                environment to create or None."""
-                return self.__backup_be_name
- 
-        @property
-        def be_name(self):
-                """A value containing either the name of the boot environment to
-                create or None."""
-                return self.__be_name
-
-        @property
-        def is_active_root_be(self):
-                """A boolean indicating whether the image to be modified is the
-                active BE for the system's root image."""
-
-                if not self._img.is_liveroot() or self._img.is_zone():
-                        return False
-
-                try:
-                        be_name, be_uuid = bootenv.BootEnv.get_be_name(
-                            self._img.root)
-                        return be_name == \
-                            bootenv.BootEnv.get_activated_be_name()
-                except apx.BEException:
-                        # If boot environment logic isn't supported, return
-                        # False.  This is necessary for user images and for
-                        # the test suite.
-                        return False
-
-        @property
-        def reboot_needed(self):
-                """A boolean value indicating that execution of the plan will
-                require a restart of the system to take effect if the target
-                image is an existing boot environment."""
-                return self.__plan.reboot_needed()
-
-        @property
-        def new_be(self):
-                """A boolean value indicating that execution of the plan will
-                take place in a clone of the current live environment"""
-                return self.__new_be
-
-        @property
-        def update_boot_archive(self):
-                """A boolean value indicating whether or not the boot archive
-                will be rebuilt"""
-                return self.__plan.boot_archive_needed()
-
-        @property
-        def bytes_added(self):
-                """Estimated number of bytes added"""
-                return self.__plan.bytes_added
-
-        @property
-        def cbytes_added(self):
-                """Estimated number of download cache bytes added"""
-                return self.__plan.cbytes_added
-
-        @property
-        def bytes_avail(self):
-                """Estimated number of bytes available in image /"""
-                return self.__plan.bytes_avail
-
-        @property
-        def cbytes_avail(self):
-                """Estimated number of bytes available in download cache"""
-                return self.__plan.cbytes_avail
-
-
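A minimal sketch (not part of this changeset) of reading the same salvage
data from a plan description object after execution, assuming the
(original_path, salvage_path) tuple convention above is kept::

    def report_salvaged(plan_desc):
            # assumed: plan_desc.salvaged keeps the same
            # (original_path, salvage_path) tuples and is only
            # meaningful after plan execution
            for original_path, salvage_path in plan_desc.salvaged:
                    print "%s -> %s" % (original_path, salvage_path)
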
 def get_default_image_root(orig_cwd=None):
         """Returns a tuple of (root, exact_match) where 'root' is the absolute
         path of the default image root based on current environment given the
--- a/src/modules/client/api_errors.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/api_errors.py	Mon Jul 11 13:49:50 2011 -0700
@@ -2550,7 +2550,8 @@
             parent_not_in_altroot=None,
             pkg_op_failed=None,
             self_linked=None,
-            self_not_child=None):
+            self_not_child=None,
+            unparsable_output=None):
 
                 self.attach_bad_prop = attach_bad_prop
                 self.attach_bad_prop_value = attach_bad_prop_value
@@ -2580,6 +2581,7 @@
                 self.pkg_op_failed = pkg_op_failed
                 self.self_linked = self_linked
                 self.self_not_child = self_not_child
+                self.unparsable_output = unparsable_output
 
                 # first deal with an error bundle
                 if bundle:
@@ -2748,23 +2750,37 @@
 
                 if pkg_op_failed:
                         assert lin
-                        assert len(pkg_op_failed) == 3
-                        op = pkg_op_failed[0]
-                        exitrv = pkg_op_failed[1]
-                        errout = pkg_op_failed[2]
-
-                        err = _("""
+                        (op, exitrv, errout, e) = pkg_op_failed
+
+                        if e is None:
+                                err = _("""
 A '%(op)s' operation failed for child '%(lin)s' with an unexpected
-return value of %(exitrv)d and the following error message:
+return value of %(exitrv)d and generated the following output:
 %(errout)s
 
 """
-                        ) % {
-                            "lin": lin,
-                            "op": op,
-                            "exitrv": exitrv,
-                            "errout": errout,
-                        }
+                                ) % {
+                                    "lin": lin,
+                                    "op": op,
+                                    "exitrv": exitrv,
+                                    "errout": errout,
+                                }
+                        else:
+                                err = _("""
+A '%(op)s' operation failed for child '%(lin)s' with an unexpected
+exception:
+%(e)s
+
+The child generated the following output:
+%(errout)s
+
+"""
+                                ) % {
+                                    "lin": lin,
+                                    "op": op,
+                                    "errout": errout,
+                                    "e": e,
+                                }
 
                 if self_linked:
                         err = _("Current image already a linked child: %s") % \
@@ -2776,6 +2792,24 @@
                         err = _("Current image is not a linked child: %s") % \
                             self_not_child
 
+                if unparsable_output:
+                        (op, errout, e) = unparsable_output
+                        err = _("""
+A '%(op)s' operation for child '%(lin)s' generated non-JSON output.
+The JSON parser failed with the following error:
+%(e)s
+
+The child generated the following output:
+%(errout)s
+
+"""
+                                ) % {
+                                    "lin": lin,
+                                    "op": op,
+                                    "e": e,
+                                    "errout": errout,
+                                }
+
                 # set default error return value
                 if exitrv == None:
                         exitrv = pkgdefs.EXIT_OOPS
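A minimal sketch of raising the new 'unparsable_output' form, assuming the
exception also accepts the child's linked image name through a 'lin'
keyword, as the message formatting above implies::

    import json

    import pkg.client.api_errors as apx

    def parse_child_json(lin, op, errout):
            # wrap any JSON decode failure of a child's output in the
            # new (op, errout, e) tuple form handled above
            try:
                    return json.loads(errout)
            except ValueError, e:
                    raise apx.LinkedImageException(lin=lin,
                        unparsable_output=(op, errout, e))
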
@@ -2878,15 +2912,3 @@
                     "found": self.found,
                     "loc": self.loc,
                 }
-
-class UnparsableJSON(ApiException):
-        """Used when JSON has been asked to parse an unparsable string."""
-
-        def __init__(self, s, e):
-                self.unparsable = s
-                self.json_exception = e
-
-        def __str__(self):
-                return _("Because of this error:\n%(err)s\nJSON could not "
-                    "parse the following data:\n%(data)s") % \
-                    {"err": str(self.json_exception), "data": self.unparsable}
--- a/src/modules/client/image.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/image.py	Mon Jul 11 13:49:50 2011 -0700
@@ -55,6 +55,7 @@
 import pkg.client.linkedimage           as li
 import pkg.client.pkgdefs               as pkgdefs
 import pkg.client.pkgplan               as pkgplan
+import pkg.client.plandesc              as plandesc
 import pkg.client.progress              as progress
 import pkg.client.publisher             as publisher
 import pkg.client.sigpolicy             as sigpolicy
@@ -117,7 +118,7 @@
         def __init__(self, root, user_provided_dir=False, progtrack=None,
             should_exist=True, imgtype=None, force=False,
             augment_ta_from_parent_image=True, allow_ondisk_upgrade=None,
-            props=misc.EmptyDict, cmdpath=None, runid=-1):
+            props=misc.EmptyDict, cmdpath=None):
 
                 if should_exist:
                         assert(imgtype is None)
@@ -131,10 +132,6 @@
                 self.__alt_known_cat = None
                 self.__alt_pkg_sources_loaded = False
 
-                if (runid < 0):
-                        runid = os.getpid()
-                self.runid = runid
-
                 # Determine identity of client executable if appropriate.
                 if cmdpath == None:
                         cmdpath = misc.api_cmdpath()
@@ -1638,15 +1635,21 @@
         def get_root(self):
                 return self.root
 
-        def get_last_modified(self):
-                """Returns a UTC datetime object representing the time the
-                image's state last changed or None if unknown."""
+        def get_last_modified(self, string=False):
+                """Return the UTC time of the image's last state change or
+                None if unknown.  By default the time is returned as a datetime
+                object.  If 'string' is true and a time is available, then the
+                time is returned as a string (instead of as a datetime
+                object)."""
 
                 # Always get last_modified time from known catalog.  It's
                 # retrieved from the catalog itself since that is accurate
                 # down to the microsecond (as opposed to the filesystem which
                 # has an OS-specific resolution).
-                return self.__get_catalog(self.IMG_CATALOG_KNOWN).last_modified
+                rv = self.__get_catalog(self.IMG_CATALOG_KNOWN).last_modified
+                if rv is None or not string:
+                        return rv
+                return rv.strftime("%Y-%m-%dT%H:%M:%S.%f")
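The string form uses a fixed format, so two timestamps compare reliably
and the value survives serialization; an illustrative round trip::

    import datetime

    lm = datetime.datetime(2011, 7, 11, 13, 49, 50, 123456)
    s = lm.strftime("%Y-%m-%dT%H:%M:%S.%f")
    assert datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f") == lm
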
 
         def gen_publishers(self, inc_disabled=False):
                 if not self.cfg:
@@ -2284,8 +2287,6 @@
                 if self.version < self.CURRENT_VERSION:
                         raise apx.ImageFormatUpdateNeeded(self.root)
 
-                ilm = self.get_last_modified()
-
                 # Allow garbage collection of previous plan.
                 self.imageplan = None
 
@@ -2308,26 +2309,27 @@
                         pp.evaluate(self.list_excludes(), self.list_excludes())
                         pps.append(pp)
 
-                ip = imageplan.ImagePlan(self, progtrack, lambda: False)
-                ip._image_lm = ilm
-                ip._planned_op = ip.PLANNED_FIX
+                # Always start with most current (on-disk) state information.
+                self.__init_catalogs()
+
+                ip = imageplan.ImagePlan(self, pkgdefs.API_OP_REPAIR,
+                    progtrack, lambda: False)
+
+                ip.pd._image_lm = self.get_last_modified(string=True)
                 self.imageplan = ip
 
                 ip.update_index = False
-                ip.state = imageplan.EVALUATED_PKGS
+                ip.pd.state = plandesc.EVALUATED_PKGS
                 progtrack.evaluate_start()
 
-                # Always start with most current (on-disk) state information.
-                self.__init_catalogs()
-
-                ip.pkg_plans = pps
+                ip.pd.pkg_plans = pps
 
                 ip.evaluate()
                 if ip.reboot_needed() and self.is_liveroot():
                         raise apx.RebootNeededOnLiveImageException()
 
                 logger.info("\n")
-                for pp in ip.pkg_plans:
+                for pp in ip.pd.pkg_plans:
                         for lic, entry in pp.get_licenses():
                                 dest = entry["dest"]
                                 lic = dest.attrs["license"]
@@ -3920,21 +3922,19 @@
                 names; ignore versions."""
 
                 with self.locked_op("avoid"):
-                        ip = imageplan.ImagePlan(self, progtrack, check_cancel,
-                            noexecute=False)
-
+                        ip = imageplan.ImagePlan
                         self._avoid_set_save(self.avoid_set_get() |
-                            set(ip.match_user_stems(pat_list, ip.MATCH_UNINSTALLED)))
+                            set(ip.match_user_stems(self, pat_list,
+                            ip.MATCH_UNINSTALLED)))
 
         def unavoid_pkgs(self, pat_list, progtrack, check_cancel):
                 """Unavoid the specified packages... use pattern matching on
                 names; ignore versions."""
 
                 with self.locked_op("unavoid"):
-
-                        ip = imageplan.ImagePlan(self, progtrack, check_cancel,
-                            noexecute=False)
-                        unavoid_set = set(ip.match_user_stems(pat_list, ip.MATCH_ALL))
+                        ip = imageplan.ImagePlan
+                        unavoid_set = set(ip.match_user_stems(self, pat_list,
+                            ip.MATCH_ALL))
                         current_set = self.avoid_set_get()
                         not_avoided = unavoid_set - current_set
                         if not_avoided:
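The same pattern repeats in the freeze and unfreeze paths below:
match_user_stems() is now called as a class-level helper that takes the
image explicitly, so no throwaway ImagePlan instance is needed; a minimal
sketch::

    import pkg.client.imageplan as imageplan

    def matching_stems(img, pat_list):
            # class-level call; no ImagePlan instance is constructed
            ip = imageplan.ImagePlan
            return set(ip.match_user_stems(img, pat_list, ip.MATCH_ALL))
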
@@ -3987,9 +3987,8 @@
                         return p
 
                 def __calc_frozen():
-                        ip = imageplan.ImagePlan(self, progtrack, check_cancel,
-                            noexecute=False)
-                        stems_and_pats = ip.freeze_pkgs_match(pat_list)
+                        stems_and_pats = imageplan.ImagePlan.freeze_pkgs_match(
+                            self, pat_list)
                         return dict([(s, __make_publisherless_fmri(p))
                             for s, p in stems_and_pats.iteritems()])
                 if dry_run:
@@ -4023,15 +4022,14 @@
                 frozen."""
 
                 def __calc_unfrozen():
-                        ip = imageplan.ImagePlan(self, progtrack, check_cancel,
-                            noexecute=False)
                         # Get existing dictionary of frozen packages.
                         d = self.__freeze_dict_load()
                         # Match the user's patterns against the frozen packages
                         # and return the stems which matched, and the dictionary
                         # of the currently frozen packages.
-                        return set(ip.match_user_stems(pat_list, ip.MATCH_ALL,
-                            raise_unmatched=False,
+                        ip = imageplan.ImagePlan
+                        return set(ip.match_user_stems(self, pat_list,
+                            ip.MATCH_ALL, raise_unmatched=False,
                             universe=[(None, k) for k in d.keys()])), d
 
                 if dry_run:
@@ -4070,7 +4068,7 @@
                             ip.get_plan(full=False)
 
         def __make_plan_common(self, _op, _progtrack, _check_cancel,
-            _ip_mode, _noexecute, _ip_noop=False, **kwargs):
+            _noexecute, _ip_noop=False, **kwargs):
                 """Private helper function to perform base plan creation and
                 cleanup.
                 """
@@ -4078,8 +4076,8 @@
                 # Allow garbage collection of previous plan.
                 self.imageplan = None
 
-                ip = imageplan.ImagePlan(self, _progtrack, _check_cancel,
-                    noexecute=_noexecute, mode=_ip_mode)
+                ip = imageplan.ImagePlan(self, _op, _progtrack, _check_cancel,
+                    noexecute=_noexecute)
 
                 _progtrack.evaluate_start()
 
@@ -4089,7 +4087,7 @@
                 try:
                         try:
                                 if _ip_noop:
-                                        ip.plan_noop()
+                                        ip.plan_noop(**kwargs)
                                 elif _op in [
                                     pkgdefs.API_OP_ATTACH,
                                     pkgdefs.API_OP_DETACH,
@@ -4128,7 +4126,7 @@
                 finally:
                         self.__cleanup_alt_pkg_certs()
 
-        def make_install_plan(self, op, progtrack, check_cancel, ip_mode,
+        def make_install_plan(self, op, progtrack, check_cancel,
             noexecute, pkgs_inst=None, reject_list=misc.EmptyI):
                 """Take a list of packages, specified in pkgs_inst, and attempt
                 to assemble an appropriate image plan.  This is a helper
@@ -4136,11 +4134,11 @@
                 """
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, pkgs_inst=pkgs_inst,
+                    noexecute, pkgs_inst=pkgs_inst,
                     reject_list=reject_list)
 
         def make_change_varcets_plan(self, op, progtrack, check_cancel,
-            ip_mode, noexecute, facets=None, reject_list=misc.EmptyI,
+            noexecute, facets=None, reject_list=misc.EmptyI,
             variants=None):
                 """Take a list of variants and/or facets and attempt to
                 assemble an image plan which changes them.  This is a helper
@@ -4152,11 +4150,11 @@
                         cur = set(self.cfg.variants.iteritems())
                         variants = dict(new - cur)
 
-                self.__make_plan_common(op, progtrack, check_cancel, ip_mode,
+                self.__make_plan_common(op, progtrack, check_cancel,
                     noexecute, new_variants=variants, new_facets=facets,
                     reject_list=reject_list)
 
-        def make_set_mediators_plan(self, op, progtrack, check_cancel, ip_mode,
+        def make_set_mediators_plan(self, op, progtrack, check_cancel,
             noexecute, mediators):
                 """Take a dictionary of mediators and attempt to assemble an
                 appropriate image plan to set or revert them based on the
@@ -4213,26 +4211,26 @@
                             invalid_mediations=invalid_mediations)
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, new_mediators=new_mediators)
-
-        def make_sync_plan(self, op, progtrack, check_cancel, ip_mode,
+                    noexecute, new_mediators=new_mediators)
+
+        def make_sync_plan(self, op, progtrack, check_cancel,
             noexecute, li_pkg_updates=True, reject_list=misc.EmptyI):
                 """Attempt to create an appropriate image plan to bring an
                 image in sync with its linked image constraints.  This is a
                 helper routine for some common operations in the client."""
 
-                self.__make_plan_common(op, progtrack, check_cancel, ip_mode,
+                self.__make_plan_common(op, progtrack, check_cancel,
                     noexecute, reject_list=reject_list,
                     li_pkg_updates=li_pkg_updates)
 
-        def make_uninstall_plan(self, op, progtrack, check_cancel, ip_mode,
+        def make_uninstall_plan(self, op, progtrack, check_cancel,
             noexecute, pkgs_to_uninstall):
                 """Create uninstall plan to remove the specified packages."""
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, pkgs_to_uninstall=pkgs_to_uninstall)
-
-        def make_update_plan(self, op, progtrack, check_cancel, ip_mode,
+                    noexecute, pkgs_to_uninstall=pkgs_to_uninstall)
+
+        def make_update_plan(self, op, progtrack, check_cancel,
             noexecute, pkgs_update=None, reject_list=misc.EmptyI):
                 """Create a plan to update all packages or the specific ones as
                 far as possible.  This is a helper routine for some common
@@ -4240,25 +4238,25 @@
                 """
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, pkgs_update=pkgs_update,
+                    noexecute, pkgs_update=pkgs_update,
                     reject_list=reject_list)
 
-        def make_revert_plan(self, op, progtrack, check_cancel, ip_mode,
+        def make_revert_plan(self, op, progtrack, check_cancel,
             noexecute, args, tagged):
                 """Revert the specified files, or all files tagged as specified
                 in args to their manifest definitions.
                 """
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, args=args, tagged=tagged)
-
-        def make_noop_plan(self, op, progtrack, check_cancel, ip_mode,
+                    noexecute, args=args, tagged=tagged)
+
+        def make_noop_plan(self, op, progtrack, check_cancel,
             noexecute):
                 """Create an image plan that doesn't update the image in any
                 way."""
 
                 self.__make_plan_common(op, progtrack, check_cancel,
-                    ip_mode, noexecute, _ip_noop=True)
+                    noexecute, _ip_noop=True)
 
         def ipkg_is_up_to_date(self, check_cancel, noexecute,
             refresh_allowed=True, progtrack=None):
@@ -4370,8 +4368,7 @@
                 # XXX call to progress tracker that the package is being
                 # refreshed
                 img.make_install_plan(pkgdefs.API_OP_INSTALL, progtrack,
-                    check_cancel, pkgdefs.API_STAGE_DEFAULT, noexecute,
-                    pkgs_inst=["pkg:/package/pkg"])
+                    check_cancel, noexecute, pkgs_inst=["pkg:/package/pkg"])
 
                 return img.imageplan.nothingtodo()
 
--- a/src/modules/client/imageconfig.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/imageconfig.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2007, 2012, Oracle and/or its affiliates.  All rights reserved.
+# Copyright (c) 2007, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import errno
@@ -385,8 +385,8 @@
                 self.variants.update(idx.get("variant", {}))
                 # facets are encoded so they can contain '/' characters.
                 for k, v in idx.get("facet", {}).iteritems():
-                        self.facets[urllib.unquote(k)] = v
-
+                        # convert facet name from unicode to a string
+                        self.facets[str(urllib.unquote(k))] = v
 
                 # Ensure architecture and zone variants are defined.
                 if "variant.arch" not in self.variants:
@@ -452,6 +452,9 @@
                 # Load mediator data.
                 for entry, value in idx.get("mediators", {}).iteritems():
                         mname, mtype = entry.rsplit(".", 1)
+                        # convert mediator name+type from unicode to a string
+                        mname = str(mname)
+                        mtype = str(mtype)
                         self.mediators.setdefault(mname, {})[mtype] = value
 
                 # Now re-enable validation and validate the properties.
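The str() narrowing matters because this index is loaded from JSON and the
Python 2 json module returns every key as unicode; a small illustration::

    import json
    import urllib

    idx = json.loads('{"facet": {"facet.doc%2Fman": true}}')
    for k in idx["facet"]:
            assert isinstance(k, unicode)
            assert str(urllib.unquote(k)) == "facet.doc/man"
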
--- a/src/modules/client/imageplan.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/imageplan.py	Mon Jul 11 13:49:50 2011 -0700
@@ -48,6 +48,7 @@
 import pkg.client.pkg_solver as pkg_solver
 import pkg.client.pkgdefs as pkgdefs
 import pkg.client.pkgplan as pkgplan
+import pkg.client.plandesc as plandesc
 import pkg.fmri
 import pkg.manifest as manifest
 import pkg.misc as misc
@@ -56,92 +57,29 @@
 import pkg.version
 
 from pkg.client.debugvalues import DebugValues
+from pkg.client.plandesc import _ActionPlan
 from pkg.mediator import mediator_impl_matches
 
-UNEVALUATED       = 0 # nothing done yet
-EVALUATED_PKGS    = 1 # established fmri changes
-MERGED_OK         = 2 # created single merged plan
-EVALUATED_OK      = 3 # ready to execute
-PREEXECUTED_OK    = 4 # finished w/ preexecute
-PREEXECUTED_ERROR = 5 # whoops
-EXECUTED_OK       = 6 # finished execution
-EXECUTED_ERROR    = 7 # failed
-
-ActionPlan = namedtuple("ActionPlan", "p src dst")
-
-IP_MODE_DEFAULT = "default"
-IP_MODE_SAVE    = "save"
-IP_MODE_LOAD    = "load"
-ip_mode_values = frozenset([
-    IP_MODE_DEFAULT,
-    IP_MODE_SAVE,
-    IP_MODE_LOAD,
-])
-
-STATE_FILE_PKGS = "pkgs"
-STATE_FILE_ACTIONS = "actions"
-
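The plan state constants and the _ActionPlan tuple removed here now live
in pkg.client.plandesc (see the plandesc.* references below); the
comparisons in the new code still assume increasing integer values::

    import pkg.client.plandesc as plandesc

    # assumed ordering, matching the "state >=" checks used below
    assert plandesc.UNEVALUATED < plandesc.EVALUATED_PKGS < \
        plandesc.EVALUATED_OK < plandesc.PREEXECUTED_OK
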
 class ImagePlan(object):
         """ImagePlan object contains the plan for changing the image...
         there are separate routines for planning the various types of
         image modifying operations; evaluation (comparing manifests
-        and buildig lists of removeal, install and update actions
+        and building lists of removal, install and update actions)
         and their execution is all common code"""
 
-        PLANNED_FIX           = "fix"
-        PLANNED_INSTALL       = "install"
-        PLANNED_NOOP          = "no-op"
-        PLANNED_NOTHING       = "no-plan"
-        PLANNED_REVERT        = "revert"
-        PLANNED_MEDIATOR      = "set-mediator"
-        PLANNED_SYNC          = "sync"
-        PLANNED_UNINSTALL     = "uninstall"
-        PLANNED_UPDATE        = "update"
-        PLANNED_VARIANT       = "change-variant"
-        __planned_values  = frozenset([
-                PLANNED_FIX,
-                PLANNED_INSTALL,
-                PLANNED_NOOP,
-                PLANNED_NOTHING,
-                PLANNED_REVERT,
-                PLANNED_MEDIATOR,
-                PLANNED_SYNC,
-                PLANNED_UNINSTALL,
-                PLANNED_UPDATE,
-                PLANNED_VARIANT,
-        ])
-
         MATCH_ALL           = 0
         MATCH_INST_VERSIONS = 1
         MATCH_INST_STEMS    = 2
         MATCH_UNINSTALLED   = 3
 
-        def __init__(self, image, progtrack, check_cancel, noexecute=False,
-            mode=IP_MODE_DEFAULT):
-
-                assert mode in ip_mode_values
+        def __init__(self, image, op, progtrack, check_cancel, noexecute=False,
+            pd=None):
 
                 self.image = image
-                self.pkg_plans = []
-
-                self.state = UNEVALUATED
                 self.__progtrack = progtrack
+                self.__check_cancel = check_cancel
                 self.__noexecute = noexecute
 
-                self.__fmri_changes = [] # install  (None, fmri)
-                                         # remove   (oldfmri, None)
-                                         # update   (oldfmri, newfmri|oldfmri)
-
-                # Used to track users and groups that are part of operation.
-                self.added_groups = {}
-                self.removed_groups = {}
-                self.added_users = {}
-                self.removed_users = {}
-
-                self.update_actions  = []
-                self.removal_actions = []
-                self.install_actions = []
-
                 # The set of processed target object directories known to be
                 # valid (those that are not symlinks and thus are valid for
                 # use during installation).  This is used by the pkg.actions
@@ -166,253 +104,141 @@
                 self.__old_excludes = image.list_excludes()
                 self.__new_excludes = self.__old_excludes
 
-                self.__check_cancel = check_cancel
-
-                self.__actuators = actuator.Actuator()
-
-                self.update_index = True
-
                 self.__preexecuted_indexing_error = None
-                self._planned_op = self.PLANNED_NOTHING
-                self.__pkg_solver = None
-                self.__new_mediators = None
-                self.__mediators_change = False
-                self.__new_variants = None
-                self.__new_facets = None
-                self.__changed_facets = {}
-                self.__removed_facets = set()
-                self.__varcets_change = False
-                self.__rm_aliases = {}
                 self.__match_inst = {} # dict of fmri -> pattern
                 self.__match_rm = {} # dict of fmri -> pattern
                 self.__match_update = {} # dict of fmri -> pattern
-                self.__need_boot_archive = None
-                self.__new_avoid_obs = (None, None)
-                self.__salvaged = []
-                self.__mode = mode
-                self.__cbytes_added = 0  # size of compressed files
-                self.__bytes_added = 0   # size of files added
-                self.__cbytes_avail = 0  # avail space for downloads
-                self.__bytes_avail = 0   # avail space for fs
-
-                if noexecute:
-                        return
-
-                # generate filenames for state files
-                self.__planfile = dict()
-                self.__planfile[STATE_FILE_PKGS] = \
-                    "%s.%d.json" % (STATE_FILE_PKGS, image.runid)
-                self.__planfile[STATE_FILE_ACTIONS] = \
-                    "%s.%d.json" % (STATE_FILE_ACTIONS, image.runid)
-
-                # delete any pre-existing state files
-                rm_paths = []
-                if mode in [IP_MODE_DEFAULT, IP_MODE_SAVE]:
-                        rm_paths.append(self.__planfile[STATE_FILE_PKGS])
-                        rm_paths.append(self.__planfile[STATE_FILE_ACTIONS])
-                for path in rm_paths:
-                        try:
-                                os.remove(path)
-                        except OSError, e:
-                                if e.errno != errno.ENOENT:
-                                        raise
+
+                self.pd = None
+                if pd is None:
+                        pd = plandesc.PlanDescription(op)
+                assert(pd._op == op)
+                self.__setup_plan(pd)
 
         def __str__(self):
 
-                if self.state == UNEVALUATED:
+                if self.pd.state == plandesc.UNEVALUATED:
                         s = "UNEVALUATED:\n"
                         return s
 
-                s = "%s\n" % self.__pkg_solver
-
-                if self.state < EVALUATED_PKGS:
+                s = "%s\n" % self.pd._solver_summary
+
+                if self.pd.state < plandesc.EVALUATED_PKGS:
                         return s
 
                 s += "Package version changes:\n"
 
-                for oldfmri, newfmri in self.__fmri_changes:
+                for oldfmri, newfmri in self.pd._fmri_changes:
                         s += "%s -> %s\n" % (oldfmri, newfmri)
 
-                if self.__actuators:
-                        s = s + "\nActuators:\n%s\n" % self.__actuators
+                if self.pd._actuators:
+                        s = s + "\nActuators:\n%s\n" % self.pd._actuators
 
                 if self.__old_excludes != self.__new_excludes:
                         s = s + "\nVariants/Facet changes:\n %s -> %s\n" % \
                             (self.__old_excludes, self.__new_excludes)
 
-                if self.__new_mediators:
+                if self.pd._mediators_change:
                         s = s + "\nMediator changes:\n %s" % \
-                            "\n".join(self.mediators_to_strings())
+                            "\n".join(self.pd.get_mediators())
 
                 return s
 
+        def __setup_plan(self, plan, prepared=False):
+                assert plan.state in [
+                    plandesc.UNEVALUATED, plandesc.EVALUATED_PKGS,
+                    plandesc.EVALUATED_OK, plandesc.PREEXECUTED_OK]
+
+                self.pd = plan
+                self.__update_avail_space()
+
+                if self.pd.state == plandesc.UNEVALUATED:
+                        self.image.linked.init_plan(plan)
+                        return
+
+                # figure out excludes
+                self.__new_excludes = self.image.list_excludes(
+                    self.pd._new_variants, self.pd._new_facets)
+
+                # tell the linked image subsystem about this plan
+                self.image.linked.setup_plan(plan)
+
+                for pp in self.pd.pkg_plans:
+                        pp.image = self.image
+                        if pp.origin_fmri and pp.destination_fmri:
+                                self.__target_update_count += 1
+                        elif pp.destination_fmri:
+                                self.__target_install_count += 1
+                        elif pp.origin_fmri:
+                                self.__target_removal_count += 1
+
+                if self.pd.state >= plandesc.EVALUATED_OK:
+                        self.__progtrack.download_set_goal(self.pd._dl_npkgs,
+                            self.pd._dl_nfiles, self.pd._dl_nbytes)
+
+                if self.pd.state >= plandesc.PREEXECUTED_OK:
+                        self.__progtrack.evaluate_done(
+                            self.__target_install_count,
+                            self.__target_update_count,
+                            self.__target_removal_count)
+
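The new 'pd' argument allows a previously saved PlanDescription to be
re-attached to a fresh ImagePlan; a minimal sketch, assuming the saved
description carries its operation name in _op (enforced by the assert in
__init__ above)::

    import pkg.client.imageplan as imageplan

    def reattach_plan(image, saved_pd, progtrack, check_cancel):
            # __init__ asserts that saved_pd._op matches the op passed in
            return imageplan.ImagePlan(image, saved_pd._op, progtrack,
                check_cancel, pd=saved_pd)
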
+        def skip_preexecute(self):
+                assert self.pd.state in \
+                    [plandesc.PREEXECUTED_OK, plandesc.EVALUATED_OK], \
+                    "%s not in [%s, %s]" % (self.pd.state,
+                    plandesc.PREEXECUTED_OK, plandesc.EVALUATED_OK)
+
+                if self.pd.state == plandesc.PREEXECUTED_OK:
+                        # can't skip preexecute since we already preexecuted it
+                        return
+
+                if self.image.version != self.image.CURRENT_VERSION:
+                        # Prevent plan execution if image format isn't current.
+                        raise api_errors.ImageFormatUpdateNeeded(
+                            self.image.root)
+
+                if self.image.transport:
+                        self.image.transport.shutdown()
+
+                self.pd.state = plandesc.PREEXECUTED_OK
+
         @property
-        def mediators(self):
-                """Returns a list of three-tuples containing information about
-                the mediators.  The first element in the tuple is the name of
-                the mediator.  The second element is a tuple containing the
-                original version and source and the new version and source of
-                the mediator.  The third element is a tuple containing the
-                original implementation and source and new implementation and
-                source."""
-
-                ret = []
-                cfg_mediators = self.image.cfg.mediators
-                if not (self.__mediators_change and
-                    (self.__new_mediators or cfg_mediators)):
-                        return ret
-
-                def get_mediation(mediators):
-                        mimpl = mver = mimpl_source = \
-                            mver_source = None
-                        if m in mediators:
-                                mimpl = mediators[m].get(
-                                    "implementation")
-                                mimpl_ver = mediators[m].get(
-                                    "implementation-version")
-                                if mimpl_ver:
-                                        mimpl_ver = \
-                                            mimpl_ver.get_short_version()
-                                if mimpl and mimpl_ver:
-                                        mimpl += "(@%s)" % mimpl_ver
-                                mimpl_source = mediators[m].get(
-                                    "implementation-source")
-
-                                mver = mediators[m].get("version")
-                                if mver:
-                                        mver = mver.get_short_version()
-                                mver_source = mediators[m].get(
-                                    "version-source")
-                        return mimpl, mver, mimpl_source, mver_source
-
-                for m in sorted(set(self.__new_mediators.keys() +
-                    cfg_mediators.keys())):
-                        orig_impl, orig_ver, orig_impl_source, \
-                            orig_ver_source = get_mediation(cfg_mediators)
-                        new_impl, new_ver, new_impl_source, new_ver_source = \
-                            get_mediation(self.__new_mediators)
-
-                        if orig_ver == new_ver and \
-                            orig_ver_source == new_ver_source and \
-                            orig_impl == new_impl and \
-                            orig_impl_source == new_impl_source:
-                                # Mediation not changed.
-                                continue
-
-                        out = (m,
-                            ((orig_ver, orig_ver_source),
-                            (new_ver, new_ver_source)),
-                            ((orig_impl, orig_impl_source),
-                            (new_impl, new_impl_source)))
-
-                        ret.append(out)
-
-                return ret
-
-        def mediators_to_strings(self):
-                """Returns list of strings describing mediator changes."""
-                ret = []
-                for m, ver, impl in self.mediators:
-                        ((orig_ver, orig_ver_source),
-                            (new_ver, new_ver_source)) = ver
-                        ((orig_impl, orig_impl_source),
-                            (new_impl, new_impl_source)) = impl
-                        out = "mediator %s:\n" % m
-                        if orig_ver and new_ver:
-                                out += "           version: %s (%s default) " \
-                                    "-> %s (%s default)\n" % (orig_ver,
-                                    orig_ver_source, new_ver, new_ver_source)
-                        elif orig_ver:
-                                out += "           version: %s (%s default) " \
-                                    "-> None\n" % (orig_ver, orig_ver_source)
-                        elif new_ver:
-                                out += "           version: None -> " \
-                                    "%s (%s default)\n" % (new_ver,
-                                    new_ver_source)
-
-                        if orig_impl and new_impl:
-                                out += "    implementation: %s (%s default) " \
-                                    "-> %s (%s default)\n" % (orig_impl,
-                                    orig_impl_source, new_impl, new_impl_source)
-                        elif orig_impl:
-                                out += "    implementation: %s (%s default) " \
-                                    "-> None\n" % (orig_impl, orig_impl_source)
-                        elif new_impl:
-                                out += "    implementation: None -> " \
-                                    "%s (%s default)\n" % (new_impl,
-                                    new_impl_source)
-                        ret.append(out)
-                return ret
-
-        @property
-        def salvaged(self):
-                """A list of tuples of items that were salvaged during plan
-                execution.  Each tuple is of the form (original_path,
-                salvage_path).  Where 'original_path' is the path of the item
-                before it was salvaged, and 'salvage_path' is where the item was
-                moved to.  This property is only valid after plan execution
-                has completed."""
-                return self.__salvaged
-
-        @property
-        def services(self):
-                """Returns a list of string tuples describing affected services
-                (action, SMF FMRI)."""
-                return sorted(
-                    ((str(action), str(smf_fmri))
-                    for action, smf_fmri in self.__actuators.get_services_list()),
-                    key=operator.itemgetter(0, 1)
-                )
-
-        @property
-        def varcets(self):
-                """Returns list of variant/facet changes"""
-                if self.__new_variants:
-                        vs = self.__new_variants.items()
-                else:
-                        vs = []
-                fs = []
-                fs.extend(self.__changed_facets.items())
-                fs.extend([(f, None) for f in self.__removed_facets])
-                return vs, fs
-
-        def gen_verbose_strs(self):
-                """yields action change descriptions in order performed"""
-                for pplan, o_act, d_act in itertools.chain(
-                    self.removal_actions, 
-                    self.update_actions,
-                    self.install_actions):
-                        yield "%s -> %s" % (o_act, d_act)
+        def state(self):
+                return self.pd.state
 
         @property
         def planned_op(self):
                 """Returns a constant value indicating the type of operation
                 planned."""
 
-                return self._planned_op
+                return self.pd._op
 
         @property
         def plan_desc(self):
                 """Get the proposed fmri changes."""
-                return self.__fmri_changes
+                return self.pd._fmri_changes
+
+        def describe(self):
+                """Return a pointer to the plan description."""
+                return self.pd
 
         @property
         def bytes_added(self):
                 """get the (approx) number of bytes added"""
-                return self.__bytes_added
+                return self.pd._bytes_added
         @property
         def cbytes_added(self):
                 """get the (approx) number of bytes needed in download cache"""
-                return self.__cbytes_added
+                return self.pd._cbytes_added
 
         @property
         def bytes_avail(self):
                 """get the (approx) number of bytes space available"""
-                return self.__bytes_avail
+                return self.pd._bytes_avail
         @property
         def cbytes_avail(self):
                 """get the (approx) number of download space available"""
-                return self.__cbytes_avail
+                return self.pd._cbytes_avail
 
         def __vector_2_fmri_changes(self, installed_dict, vector,
             li_pkg_updates=True, new_variants=None, new_facets=None):
@@ -463,12 +289,11 @@
 
                 return fmri_updates
 
-        def __plan_op(self, op):
+        def __plan_op(self):
                 """Private helper method used to mark the start of a planned
                 operation."""
 
-                self._planned_op = op
-                self._image_lm = self.image.get_last_modified()
+                self.pd._image_lm = self.image.get_last_modified(string=True)
 
         def __plan_install_solver(self, li_pkg_updates=True, li_sync_op=False,
             new_facets=None, new_variants=None, pkgs_inst=None,
@@ -480,22 +305,22 @@
                 if not (new_variants or pkgs_inst or li_sync_op or
                     new_facets is not None):
                         # nothing to do
-                        self.__fmri_changes = []
+                        self.pd._fmri_changes = []
                         return
 
                 old_facets = self.image.cfg.facets
                 if new_variants or \
                     (new_facets is not None and new_facets != old_facets):
-                        self.__varcets_change = True
-                        self.__new_variants = new_variants
-                        self.__new_facets   = new_facets
+                        self.pd._varcets_change = True
+                        self.pd._new_variants = new_variants
+                        self.pd._new_facets   = new_facets
                         tmp_new_facets = new_facets
                         if tmp_new_facets is None:
                                 tmp_new_facets = pkg.facet.Facets()
-                        self.__changed_facets = pkg.facet.Facets(dict(
+                        self.pd._changed_facets = pkg.facet.Facets(dict(
                             set(tmp_new_facets.iteritems()) -
                             set(old_facets.iteritems())))
-                        self.__removed_facets = set(old_facets.keys()) - \
+                        self.pd._removed_facets = set(old_facets.keys()) - \
                             set(tmp_new_facets.keys())
 
                 # get ranking of publishers
@@ -506,15 +331,16 @@
                     self.image.gen_installed_pkgs())
 
                 if reject_list:
-                        reject_set = self.match_user_stems(reject_list,
-                            self.MATCH_ALL)
+                        reject_set = self.match_user_stems(self.image,
+                            reject_list, self.MATCH_ALL)
                 else:
                         reject_set = set()
 
                 if pkgs_inst:
                         inst_dict, references = self.__match_user_fmris(
-                            pkgs_inst, self.MATCH_ALL, pub_ranks=pub_ranks,
-                            installed_pkgs=installed_dict, reject_set=reject_set)
+                            self.image, pkgs_inst, self.MATCH_ALL,
+                            pub_ranks=pub_ranks, installed_pkgs=installed_dict,
+                            reject_set=reject_set)
                         self.__match_inst = references
                 else:
                         inst_dict = {}
@@ -528,7 +354,7 @@
                         variants = self.image.get_variants()
 
                 # instantiate solver
-                self.__pkg_solver = pkg_solver.PkgSolver(
+                solver = pkg_solver.PkgSolver(
                     self.image.get_catalog(self.image.IMG_CATALOG_KNOWN),
                     installed_dict,
                     pub_ranks,
@@ -538,18 +364,21 @@
                     self.__progtrack)
 
                 # Solve... will raise exceptions if no solution is found
-                new_vector, self.__new_avoid_obs = \
-                    self.__pkg_solver.solve_install(
+                new_vector, self.pd._new_avoid_obs = solver.solve_install(
                         self.image.get_frozen_list(), inst_dict,
                         new_variants=new_variants, new_facets=new_facets,
                         excludes=self.__new_excludes, reject_set=reject_set,
                         relax_all=li_sync_op)
 
-                self.__fmri_changes = self.__vector_2_fmri_changes(
+                self.pd._fmri_changes = self.__vector_2_fmri_changes(
                     installed_dict, new_vector,
                     li_pkg_updates=li_pkg_updates,
                     new_variants=new_variants, new_facets=new_facets)
 
+                self.pd._solver_summary = str(solver)
+                if DebugValues["plan"]:
+                        self.pd._solver_errors = solver.get_trim_errors()
+
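Solver diagnostics now land on the plan description, and the trim errors
are only retained when the "plan" debug value is set before planning::

    from pkg.client.debugvalues import DebugValues

    DebugValues["plan"] = "1"
    # after planning, pd._solver_errors holds the trim errors that
    # were captured above via solver.get_trim_errors()
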
         def __plan_install(self, li_pkg_updates=True, li_sync_op=False,
             new_facets=None, new_variants=None, pkgs_inst=None,
             reject_list=misc.EmptyI):
@@ -557,33 +386,36 @@
                 pkgs, sync the image, and/or change facets/variants within the
                 current image."""
 
-                # someone better have called __plan_op()
-                assert self._planned_op in self.__planned_values
-
-                plandir = self.image.plandir
-
-                if self.__mode in [IP_MODE_DEFAULT, IP_MODE_SAVE]:
-                        self.__plan_install_solver(
-                            li_pkg_updates=li_pkg_updates,
-                            li_sync_op=li_sync_op,
-                            new_facets=new_facets,
-                            new_variants=new_variants,
-                            pkgs_inst=pkgs_inst,
-                            reject_list=reject_list)
-
-                        if self.__mode == IP_MODE_SAVE:
-                                self.__save(STATE_FILE_PKGS)
-                else:
-                        assert self.__mode == IP_MODE_LOAD
-                        self.__fmri_changes = self.__load(STATE_FILE_PKGS)
-
-                self.state = EVALUATED_PKGS
+                self.__plan_op()
+                self.__plan_install_solver(
+                    li_pkg_updates=li_pkg_updates,
+                    li_sync_op=li_sync_op,
+                    new_facets=new_facets,
+                    new_variants=new_variants,
+                    pkgs_inst=pkgs_inst,
+                    reject_list=reject_list)
+                self.pd.state = plandesc.EVALUATED_PKGS
+
+        def set_be_options(self, backup_be, backup_be_name, new_be,
+            be_activate, be_name):
+                self.pd._backup_be = backup_be
+                self.pd._backup_be_name = backup_be_name
+                self.pd._new_be = new_be
+                self.pd._be_activate = be_activate
+                self.pd._be_name = be_name
+
+        def __set_update_index(self, value):
+                self.pd._update_index = value
+
+        def __get_update_index(self):
+                return self.pd._update_index
+
+        update_index = property(__get_update_index, __set_update_index)
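update_index is now a thin pass-through, so existing callers keep working
while the value itself lives on the plan description::

    def disable_indexing(ip):
            # reads and writes are forwarded to ip.pd._update_index
            ip.update_index = False
            assert ip.update_index is False
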
 
         def plan_install(self, pkgs_inst=None, reject_list=misc.EmptyI):
                 """Determine the fmri changes needed to install the specified
                 pkgs"""
 
-                self.__plan_op(self.PLANNED_INSTALL)
                 self.__plan_install(pkgs_inst=pkgs_inst,
                      reject_list=reject_list)
 
@@ -592,7 +424,6 @@
                 """Determine the fmri changes needed to change the specified
                 facets/variants."""
 
-                self.__plan_op(self.PLANNED_VARIANT)
                 self.__plan_install(new_facets=new_facets,
                      new_variants=new_variants, reject_list=reject_list)
 
@@ -622,33 +453,31 @@
                    default.
                 """
 
-                self.__plan_op(self.PLANNED_MEDIATOR)
-
-                self.__mediators_change = True
-                self.__new_mediators = new_mediators
-                self.__fmri_changes = []
-
+                self.__plan_op()
+
+                self.pd._mediators_change = True
+                self.pd._new_mediators = new_mediators
                 cfg_mediators = self.image.cfg.mediators
 
                 # keys() is used since entries are deleted during iteration.
                 update_mediators = {}
-                for m in self.__new_mediators.keys():
+                for m in self.pd._new_mediators.keys():
                         for k in ("implementation", "version"):
-                                if k in self.__new_mediators[m]:
-                                        if self.__new_mediators[m][k] is not None:
+                                if k in self.pd._new_mediators[m]:
+                                        if self.pd._new_mediators[m][k] is not None:
                                                 # Any mediators being set this
                                                 # way are forced to be marked as
                                                 # being set by local administrator.
-                                                self.__new_mediators[m]["%s-source" % k] = \
+                                                self.pd._new_mediators[m]["%s-source" % k] = \
                                                     "local"
                                                 continue
 
                                         # Explicit reset requested.
-                                        del self.__new_mediators[m][k]
-                                        self.__new_mediators[m].pop(
+                                        del self.pd._new_mediators[m][k]
+                                        self.pd._new_mediators[m].pop(
                                             "%s-source" % k, None)
                                         if k == "implementation":
-                                                self.__new_mediators[m].pop(
+                                                self.pd._new_mediators[m].pop(
                                                     "implementation-version",
                                                     None)
                                         continue
@@ -667,13 +496,13 @@
                                 if med_source != "local":
                                         continue
 
-                                self.__new_mediators[m][k] = \
+                                self.pd._new_mediators[m][k] = \
                                     cfg_mediators[m].get(k)
-                                self.__new_mediators[m]["%s-source" % k] = "local"
+                                self.pd._new_mediators[m]["%s-source" % k] = "local"
 
                                 if k == "implementation" and \
                                     "implementation-version" in cfg_mediators[m]:
-                                        self.__new_mediators[m]["implementation-version"] = \
+                                        self.pd._new_mediators[m]["implementation-version"] = \
                                             cfg_mediators[m].get("implementation-version")
 
                         if m not in cfg_mediators:
@@ -684,22 +513,22 @@
                         # whether configuration source is changing.  If so,
                         # optimize planning by not loading any package data.
                         for k in ("implementation", "version"):
-                                if self.__new_mediators[m].get(k) != \
+                                if self.pd._new_mediators[m].get(k) != \
                                     cfg_mediators[m].get(k):
                                         break
                         else:
-                                if (self.__new_mediators[m].get("version-source") != \
+                                if (self.pd._new_mediators[m].get("version-source") != \
                                     cfg_mediators[m].get("version-source")) or \
-                                    (self.__new_mediators[m].get("implementation-source") != \
+                                    (self.pd._new_mediators[m].get("implementation-source") != \
                                     cfg_mediators[m].get("implementation-source")):
                                         update_mediators[m] = \
-                                            self.__new_mediators[m]
-                                del self.__new_mediators[m]
-
-                if self.__new_mediators:
+                                            self.pd._new_mediators[m]
+                                del self.pd._new_mediators[m]
+
+                if self.pd._new_mediators:
                         # Some mediations are changing, so merge the update only
                         # ones back in.
-                        self.__new_mediators.update(update_mediators)
+                        self.pd._new_mediators.update(update_mediators)
 
                         # Determine which packages will be affected.
                         for f in self.image.gen_installed_pkgs():
@@ -723,22 +552,20 @@
                                         pp.evaluate(self.__new_excludes,
                                             self.__new_excludes,
                                             can_exclude=True)
-                                        self.pkg_plans.append(pp)
+                                        self.pd.pkg_plans.append(pp)
                 else:
                         # Only the source property is being updated for
                         # these mediators, so no packages needed loading.
-                        self.__new_mediators = update_mediators
-
-                self.state = EVALUATED_PKGS
+                        self.pd._new_mediators = update_mediators
+
+                self.pd.state = plandesc.EVALUATED_PKGS
 
         def plan_sync(self, li_pkg_updates=True, reject_list=misc.EmptyI):
                 """Determine the fmri changes needed to sync the image."""
 
-                self.__plan_op(self.PLANNED_SYNC)
-
                 # check if the sync will try to uninstall packages.
                 uninstall = False
-                reject_set = self.match_user_stems(reject_list,
+                reject_set = self.match_user_stems(self.image, reject_list,
                     self.MATCH_INST_VERSIONS, raise_not_installed=False)
                 if reject_set:
                         # at least one reject pattern matched an installed
@@ -753,17 +580,18 @@
                 # already in sync then don't bother invoking the solver.
                 if not uninstall and rv == pkgdefs.EXIT_OK:
                         # we don't need to do anything
-                        self.__fmri_changes = []
-                        self.state = EVALUATED_PKGS
+                        self.__plan_op()
+                        self.pd._fmri_changes = []
+                        self.pd.state = plandesc.EVALUATED_PKGS
                         return
 
                 self.__plan_install(li_pkg_updates=li_pkg_updates,
                     li_sync_op=True, reject_list=reject_list)
 
         def plan_uninstall(self, pkgs_to_uninstall):
-                self.__plan_op(self.PLANNED_UNINSTALL)
+                self.__plan_op()
                 proposed_dict, self.__match_rm = self.__match_user_fmris(
-                    pkgs_to_uninstall, self.MATCH_INST_VERSIONS)
+                    self.image, pkgs_to_uninstall, self.MATCH_INST_VERSIONS)
                 # merge patterns together
                 proposed_removals = set([
                     f
@@ -776,7 +604,7 @@
                     self.image.gen_installed_pkgs())
 
                 # instantiate solver
-                self.__pkg_solver = pkg_solver.PkgSolver(
+                solver = pkg_solver.PkgSolver(
                     self.image.get_catalog(self.image.IMG_CATALOG_KNOWN),
                     installed_dict,
                     self.image.get_publisher_ranks(),
@@ -785,19 +613,22 @@
                     self.image.linked.parent_fmris(),
                     self.__progtrack)
 
-                new_vector, self.__new_avoid_obs = \
-                    self.__pkg_solver.solve_uninstall(
-                        self.image.get_frozen_list(), proposed_removals,
-                        self.__new_excludes)
-
-                self.__fmri_changes = [
+                new_vector, self.pd._new_avoid_obs = solver.solve_uninstall(
+                    self.image.get_frozen_list(), proposed_removals,
+                    self.__new_excludes)
+
+                self.pd._fmri_changes = [
                     (a, b)
                     for a, b in ImagePlan.__dicts2fmrichanges(installed_dict,
                         ImagePlan.__fmris2dict(new_vector))
                     if a != b
                 ]
 
-                self.state = EVALUATED_PKGS
+                self.pd._solver_summary = str(solver)
+                if DebugValues["plan"]:
+                        self.pd._solver_errors = solver.get_trim_errors()
+
+                self.pd.state = plandesc.EVALUATED_PKGS
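
Note the pattern introduced above: the solver's printable summary and, under
the "plan" debug flag, its trim errors are copied onto the plan description
before the solver itself goes out of scope, so a saved or reloaded plan can
still answer get_solver_errors().  A minimal sketch of the idea, using
illustrative stand-ins rather than the real pkg(5) classes::

  class PlanDescription(object):
          """Serializable plan record (sketch only)."""
          def __init__(self):
                  self._solver_summary = None
                  self._solver_errors = None

          def get_solver_errors(self):
                  # The solver object is long gone; use the cached copy.
                  return self._solver_errors or []

  class FakeSolver(object):
          """Stand-in for pkg.client.pkg_solver.PkgSolver."""
          def solve(self):
                  return []
          def get_trim_errors(self):
                  return ["example: pkg://test/foo trimmed"]

  pd = PlanDescription()
  solver = FakeSolver()
  new_vector = solver.solve()
  pd._solver_summary = str(solver)              # snapshot diagnostics
  pd._solver_errors = solver.get_trim_errors()
  del solver                                    # plan keeps only the copies
  assert pd.get_solver_errors()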
 
         def __plan_update_solver(self, pkgs_update=None,
             reject_list=misc.EmptyI):
@@ -814,20 +645,20 @@
                 # If specific packages or patterns were provided, then
                 # determine the proposed set to pass to the solver.
                 if reject_list:
-                        reject_set = self.match_user_stems(reject_list,
-                            self.MATCH_ALL)
+                        reject_set = self.match_user_stems(self.image,
+                            reject_list, self.MATCH_ALL)
                 else:
                         reject_set = set()
 
                 if pkgs_update:
                         update_dict, references = self.__match_user_fmris(
-                            pkgs_update, self.MATCH_INST_STEMS,
+                            self.image, pkgs_update, self.MATCH_INST_STEMS,
                             pub_ranks=pub_ranks, installed_pkgs=installed_dict,
                             reject_set=reject_set)
                         self.__match_update = references
 
                 # instantiate solver
-                self.__pkg_solver = pkg_solver.PkgSolver(
+                solver = pkg_solver.PkgSolver(
                     self.image.get_catalog(self.image.IMG_CATALOG_KNOWN),
                     installed_dict,
                     pub_ranks,
@@ -837,8 +668,8 @@
                     self.__progtrack)
 
                 if pkgs_update:
-                        new_vector, self.__new_avoid_obs = \
-                            self.__pkg_solver.solve_install(
+                        new_vector, self.pd._new_avoid_obs = \
+                            solver.solve_install(
                                 self.image.get_frozen_list(),
                                 update_dict, excludes=self.__new_excludes,
                                 reject_set=reject_set,
@@ -846,42 +677,36 @@
                 else:
                         # Updating all installed packages requires a different
                         # solution path.
-                        new_vector, self.__new_avoid_obs = \
-                            self.__pkg_solver.solve_update_all(
+                        new_vector, self.pd._new_avoid_obs = \
+                            solver.solve_update_all(
                                 self.image.get_frozen_list(),
                                 excludes=self.__new_excludes,
                                 reject_set=reject_set)
 
-                self.__fmri_changes = self.__vector_2_fmri_changes(
+                self.pd._fmri_changes = self.__vector_2_fmri_changes(
                     installed_dict, new_vector)
 
+                self.pd._solver_summary = str(solver)
+                if DebugValues["plan"]:
+                        self.pd._solver_errors = solver.get_trim_errors()
+
         def plan_update(self, pkgs_update=None, reject_list=misc.EmptyI):
                 """Determine the fmri changes needed to update the specified
                 pkgs or all packages if none were specified."""
-                self.__plan_op(self.PLANNED_UPDATE)
-
-                plandir = self.image.plandir
-
-                if self.__mode in [IP_MODE_DEFAULT, IP_MODE_SAVE]:
-                        self.__plan_update_solver(
-                            pkgs_update=pkgs_update,
-                            reject_list=reject_list)
-
-                        if self.__mode == IP_MODE_SAVE:
-                                self.__save(STATE_FILE_PKGS)
-                else:
-                        assert self.__mode == IP_MODE_LOAD
-                        self.__fmri_changes = self.__load(STATE_FILE_PKGS)
-
-                self.state = EVALUATED_PKGS
+
+                self.__plan_op()
+                self.__plan_update_solver(
+                    pkgs_update=pkgs_update,
+                    reject_list=reject_list)
+                self.pd.state = plandesc.EVALUATED_PKGS
 
         def plan_revert(self, args, tagged):
-                """Plan reverting the specifed files or files tagged as
+                """Plan reverting the specified files or files tagged as
                 specified.  We create the pkgplans here rather than in
                 evaluate; by keeping the list of changed_fmris empty we
                 skip most of the processing in evaluate"""
 
-                self.__plan_op(self.PLANNED_REVERT)
+                self.__plan_op()
 
                 revert_dict = defaultdict(list)
 
@@ -956,21 +781,17 @@
                                 pp.evaluate(self.__new_excludes,
                                     self.__new_excludes,
                                     can_exclude=True)
-                                self.pkg_plans.append(pp)
-
-                self.__fmri_changes = []
-                self.state = EVALUATED_PKGS
-
-        def plan_fix(self, pkgs_to_fix):
-                """Create the list of pkgs to fix"""
-                self.__plan_op(self.PLANNED_FIX)
+                                self.pd.pkg_plans.append(pp)
+
+                self.pd._fmri_changes = []
+                self.pd.state = plandesc.EVALUATED_PKGS
 
         def plan_noop(self):
                 """Create a plan that doesn't change the package contents of
                 the current image."""
-                self.__plan_op(self.PLANNED_NOOP)
-                self.__fmri_changes = []
-                self.state = EVALUATED_PKGS
+                self.__plan_op()
+                self.pd._fmri_changes = []
+                self.pd.state = plandesc.EVALUATED_PKGS
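
The state assertions used throughout this file (state >= MERGED_OK and so
on) rely on the plan states in pkg.client.plandesc being ordered values that
only ever advance.  A hedged illustration; the names come from this change,
the numeric values are assumptions::

  UNEVALUATED = 0        # hypothetical initial state
  EVALUATED_PKGS = 1     # fmri changes computed
  MERGED_OK = 2          # actions merged across packages
  EVALUATED_OK = 3       # per-action evaluation complete
  PREEXECUTED_OK = 4     # downloads finished
  EXECUTED_OK = 5        # plan applied to the image

  def reboot_advised(pd):
          # ">=" works because a plan's state only moves forward.
          assert pd.state >= MERGED_OK
          return pd._actuators.reboot_advised()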
 
         @staticmethod
         def __fmris2dict(fmri_list):
@@ -988,46 +809,40 @@
 
         def reboot_advised(self):
                 """Check if evaluated imageplan suggests a reboot"""
-                assert self.state >= MERGED_OK
-                return self.__actuators.reboot_advised()
+                assert self.state >= plandesc.MERGED_OK
+                return self.pd._actuators.reboot_advised()
 
         def reboot_needed(self):
                 """Check if evaluated imageplan requires a reboot"""
-                assert self.state >= MERGED_OK
-                return self.__actuators.reboot_needed()
+                assert self.pd.state >= plandesc.MERGED_OK
+                return self.pd._actuators.reboot_needed()
 
         def boot_archive_needed(self):
                 """True if boot archive needs to be rebuilt"""
-                assert self.state >= MERGED_OK
-                return self.__need_boot_archive
+                assert self.pd.state >= plandesc.MERGED_OK
+                return self.pd._need_boot_archive
 
         def get_solver_errors(self):
                 """Returns a list of strings for all FMRIs evaluated by the
                 solver explaining why they were rejected.  (All packages
                 found in solver's trim database.)"""
-
-                assert self.state >= EVALUATED_PKGS
-                # in case this operation doesn't use solver
-                if self.__pkg_solver is None:
-                        return []
-
-                return self.__pkg_solver.get_trim_errors()
+                return self.pd.get_solver_errors()
 
         def get_plan(self, full=True):
                 if full:
                         return str(self)
 
                 output = ""
-                for t in self.__fmri_changes:
+                for t in self.pd._fmri_changes:
                         output += "%s -> %s\n" % t
                 return output
 
         def gen_new_installed_pkgs(self):
                 """Generates all the fmris which will be in the new image."""
-                assert self.state >= EVALUATED_PKGS
+                assert self.pd.state >= plandesc.EVALUATED_PKGS
                 fmri_set = set(self.image.gen_installed_pkgs())
 
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         p.update_pkg_set(fmri_set)
 
                 for pfmri in fmri_set:
@@ -1036,9 +851,9 @@
         def __gen_only_new_installed_info(self):
                 """Generates fmri-manifest pairs for all packages which are
                 being installed (or fixed, etc.)."""
-                assert self.state >= EVALUATED_PKGS
-
-                for p in self.pkg_plans:
+                assert self.pd.state >= plandesc.EVALUATED_PKGS
+
+                for p in self.pd.pkg_plans:
                         if p.destination_fmri:
                                 assert p.destination_manifest
                                 yield p.destination_fmri, p.destination_manifest
@@ -1046,9 +861,9 @@
         def __gen_outgoing_info(self):
                 """Generates fmri-manifest pairs for all the packages which are
                 being removed."""
-                assert self.state >= EVALUATED_PKGS
-
-                for p in self.pkg_plans:
+                assert self.pd.state >= plandesc.EVALUATED_PKGS
+
+                for p in self.pd.pkg_plans:
                         if p.origin_fmri and \
                             p.origin_fmri != p.destination_fmri:
                                 assert p.origin_manifest
@@ -1085,7 +900,7 @@
                 when 'atype' is 'dir', directories only implicitly delivered
                 in the image will be emitted as well."""
 
-                assert self.state >= EVALUATED_PKGS
+                assert self.pd.state >= plandesc.EVALUATED_PKGS
 
                 # Don't bother accounting for implicit directories if we're not
                 # looking for them.
@@ -1094,7 +909,7 @@
                                 implicit_dirs = False
                         else:
                                 da = pkg.actions.directory.DirectoryAction
- 
+
                 for pfmri in generator():
                         m = self.image.get_manifest(pfmri, ignore_excludes=True)
                         if implicit_dirs:
@@ -1118,7 +933,7 @@
                 'dir', directories only implicitly delivered in the image will
                 be emitted as well."""
 
-                assert self.state >= EVALUATED_PKGS
+                assert self.pd.state >= plandesc.EVALUATED_PKGS
 
                 # Don't bother accounting for implicit directories if we're not
                 # looking for them.
@@ -1418,8 +1233,6 @@
                 pp, install, remove = self.__fixups.get(pfmri,
                     (None, None, None))
                 if pp is None:
-                        # XXX The lambda: False is temporary until fix is moved
-                        # into the API and self.__check_cancel can be used.
                         pp = pkgplan.PkgPlan(self.image)
                         if inst_action:
                                 install = [inst_action]
@@ -1439,18 +1252,18 @@
                         pp.propose_repair(pfmri, nfm, install, remove,
                             autofix=True)
                         pp.evaluate(self.__old_excludes, self.__new_excludes)
-                        self.pkg_plans.append(pp)
+                        self.pd.pkg_plans.append(pp)
 
                         # Repairs end up going into the package plan's update
-                        # and remove lists, so ActionPlans needed to be appended
-                        # for each action in this fixup pkgplan to the list of
-                        # related actions.
+                        # and remove lists, so an _ActionPlan needs to be
+                        # appended to the list of related actions for each
+                        # action in this fixup pkgplan.
                         for action in install:
-                                self.update_actions.append(ActionPlan(pp, None,
-                                    action))
+                                self.pd.update_actions.append(
+                                    _ActionPlan(pp, None, action))
                         for action in remove:
-                                self.removal_actions.append(ActionPlan(pp,
-                                    action, None))
+                                self.pd.removal_actions.append(
+                                    _ActionPlan(pp, action, None))
 
                 # Don't process this particular set of fixups again.
                 self.__fixups = {}
@@ -1474,13 +1287,14 @@
                         return False
 
                 if msg == "nothing":
-                        for i, ap in enumerate(self.removal_actions):
+                        for i, ap in enumerate(self.pd.removal_actions):
                                 if ap and ap.src.attrs.get(ap.src.key_attr,
                                     None) == key:
-                                        self.removal_actions[i] = None
+                                        self.pd.removal_actions[i] = None
                 elif msg == "overlay":
                         pp_needs_trimming = {}
-                        for al in (self.install_actions, self.update_actions):
+                        for al in (self.pd.install_actions,
+                            self.pd.update_actions):
                                 for i, ap in enumerate(al):
                                         if not (ap and ap.dst.attrs.get(
                                             ap.dst.key_attr, None) == key):
@@ -1633,9 +1447,12 @@
                 # change-facet/variant, revert, fix, or set-mediator, then we
                 # need to skip modifying new, as it'll just end up with
                 # incorrect duplicates.
-                if self.planned_op in (self.PLANNED_FIX,
-                    self.PLANNED_VARIANT, self.PLANNED_REVERT,
-                    self.PLANNED_MEDIATOR):
+                if self.planned_op in (
+                    pkgdefs.API_OP_CHANGE_FACET,
+                    pkgdefs.API_OP_CHANGE_VARIANT,
+                    pkgdefs.API_OP_REPAIR,
+                    pkgdefs.API_OP_REVERT,
+                    pkgdefs.API_OP_SET_MEDIATOR):
                         return
 
                 build_release = self.image.attrs["Build-Release"]
@@ -1874,7 +1691,7 @@
                 """Now that we're done reading the manifests, we can clear them
                 from the pkgplans."""
 
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         p.clear_dest_manifest()
                         p.clear_origin_manifest()
 
@@ -1899,8 +1716,9 @@
                 their core attributes.
                 """
 
-                # We need to be able to create broken images from the testsuite.
+                # We need to be able to create broken images from the testsuite.
                 if DebugValues["broken-conflicting-action-handling"]:
+                        self.__clear_pkg_plans()
                         return
 
                 errs = []
@@ -1911,6 +1729,7 @@
 
                 # If we're removing all packages, there won't be any conflicts.
                 if not new_fmris:
+                        self.__clear_pkg_plans()
                         return
 
                 old_fmris = set((
@@ -2032,7 +1851,7 @@
                 if pfmri:
                         return self.image.get_manifest(pfmri,
                             ignore_excludes=ignore_excludes or
-                            self.__varcets_change,
+                            self.pd._varcets_change,
                             intent=intent)
                 else:
                         return manifest.NullFactoredManifest
@@ -2075,7 +1894,7 @@
                                 old_fmri = None
 
                 info = {
-                    "operation": self._planned_op,
+                    "operation": self.pd._op,
                     "old_fmri" : old_fmri,
                     "new_fmri" : new_fmri,
                     "reference": reference
@@ -2103,11 +1922,11 @@
                 """
 
                 if phase == "install":
-                        d = self.__actuators.install
+                        d = self.pd._actuators.install
                 elif phase == "remove":
-                        d = self.__actuators.removal
+                        d = self.pd._actuators.removal
                 elif phase == "update":
-                        d = self.__actuators.update
+                        d = self.pd._actuators.update
 
                 if callable(value):
                         d[name] = value
@@ -2119,25 +1938,18 @@
                 build pkg plans and figure out exact impact of
                 proposed changes"""
 
-                assert self.state == EVALUATED_PKGS, self
-
-                if self._image_lm != self.image.get_last_modified():
+                assert self.pd.state == plandesc.EVALUATED_PKGS, self
+
+                if self.pd._image_lm != \
+                    self.image.get_last_modified(string=True):
                         # State has been modified since plan was created; this
                         # plan is no longer valid.
                         raise api_errors.InvalidPlanError()
 
-                plandir = self.image.plandir
-                if self.__mode in [IP_MODE_DEFAULT, IP_MODE_SAVE]:
-                        self.evaluate_pkg_plans()
-                        if self.__mode == IP_MODE_SAVE:
-                                self.__save(STATE_FILE_ACTIONS)
-                else:
-                        assert self.__mode == IP_MODE_LOAD
-                        self.pkg_plans = self.__load(STATE_FILE_ACTIONS)
-
+                self.evaluate_pkg_plans()
                 self.merge_actions()
 
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         cpbytes, pbytes = p.get_bytes_added()
                         if p.destination_fmri:
                                 mpath = self.image.get_manifest_path(
@@ -2148,18 +1960,18 @@
                                         # For now, include this in cbytes_added
                                         # since that's closest to where the
                                         # download cache is stored.
-                                        self.__cbytes_added += \
+                                        self.pd._cbytes_added += \
                                             os.stat(mpath).st_size * 3
                                 except EnvironmentError, e:
                                         raise api_errors._convert_error(e)
-                        self.__cbytes_added += cpbytes
-                        self.__bytes_added += pbytes
+                        self.pd._cbytes_added += cpbytes
+                        self.pd._bytes_added += pbytes
 
                 # Include state directory in cbytes_added for now since it's
                 # closest to where the download cache is stored.  (Twice the
                 # amount is used because image state update involves using
                 # a complete copy of existing state.)
-                self.__cbytes_added += \
+                self.pd._cbytes_added += \
                     misc.get_dir_size(self.image._statedir) * 2
 
                 # Our slop factor is 25%; overestimating is safer than under-
@@ -2171,25 +1983,25 @@
                 # an image, a 12% difference between actual size and installed
                 # size was found, so this seems safe enough.  (And helps account
                 # for any bootarchives, fs overhead, etc.)
-                self.__cbytes_added *= 1.25
-                self.__bytes_added *= 1.25
+                self.pd._cbytes_added *= 1.25
+                self.pd._bytes_added *= 1.25
 
                 # XXX For now, include cbytes_added in bytes_added total; in the
                 # future, this should only happen if they share the same
                 # filesystem.
-                self.__bytes_added += self.__cbytes_added
+                self.pd._bytes_added += self.pd._cbytes_added
 
                 self.__update_avail_space()
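
A worked example of the estimate above, with made-up sizes, showing how the
25% slop and the cache-shares-the-root assumption combine::

  MB = 1024 * 1024
  cbytes_added = 100 * MB            # download cache estimate (example)
  bytes_added = 400 * MB             # installed-data estimate (example)

  cbytes_added *= 1.25               # overestimating is safer
  bytes_added *= 1.25
  bytes_added += cbytes_added        # cache assumed on the same filesystem

  assert bytes_added == 625 * MB     # (400 + 100) * 1.25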
 
         def __update_avail_space(self):
                 """Update amount of available space on FS"""
-                self.__cbytes_avail = misc.spaceavail(
+                self.pd._cbytes_avail = misc.spaceavail(
                     self.image.write_cache_path)
 
-                self.__bytes_avail = misc.spaceavail(self.image.root)
+                self.pd._bytes_avail = misc.spaceavail(self.image.root)
                 # if we don't have a full image yet
-                if self.__cbytes_avail < 0:
-                        self.__cbytes_avail = self.__bytes_avail
+                if self.pd._cbytes_avail < 0:
+                        self.pd._cbytes_avail = self.pd._bytes_avail
 
         def evaluate_pkg_plans(self):
                 """Internal helper function that does the work of converting
@@ -2204,7 +2016,7 @@
                                 for a in self.image.gen_publishers()
                                 ])
 
-                for oldfmri, newfmri in self.__fmri_changes:
+                for oldfmri, newfmri in self.pd._fmri_changes:
                         self.__progtrack.evaluate_progress(oldfmri)
                         old_in, new_in = self.__create_intent(oldfmri, newfmri,
                             enabled_publishers)
@@ -2223,6 +2035,7 @@
                 del enabled_publishers
                 self.__match_inst = {}
                 self.__match_rm = {}
+                self.__match_update = {}
 
                 self.image.transport.prefetch_manifests(prefetch_mfsts,
                     ccancel=self.__check_cancel)
@@ -2258,7 +2071,7 @@
                         pp.evaluate(self.__old_excludes, self.__new_excludes,
                             can_exclude=can_exclude)
 
-                        self.pkg_plans.append(pp)
+                        self.pd.pkg_plans.append(pp)
                         pp = None
                         self.__progtrack.evaluate_progress()
 
@@ -2295,15 +2108,16 @@
                         mediated_installed_paths[a.attrs["path"]].add((a, pfmri,
                             mediator, med_ver, med_impl))
 
-                # Now select only the "best" mediation for each mediator; items()
-                # is used here as the dictionary is altered during iteration.
-                cfg_mediators = self.image.cfg.mediators
+                # Now select only the "best" mediation for each mediator;
+                # items() is used here as the dictionary is altered during
+                # iteration.
+                cfg_mediators = self.pd._cfg_mediators
                 changed_mediators = set()
                 for mediator, values in prop_mediators.items():
                         med_ver_source = med_impl_source = med_priority = \
                             med_ver = med_impl = med_impl_ver = None
 
-                        mediation = self.__new_mediators.get(mediator)
+                        mediation = self.pd._new_mediators.get(mediator)
                         cfg_mediation = cfg_mediators.get(mediator)
                         if mediation:
                                 med_ver = mediation.get("version")
@@ -2404,9 +2218,8 @@
                 # and which need removal.
                 act_mediated_paths = { "installed": {}, "removed": {} }
 
-                cfg_mediators = self.image.cfg.mediators
-                for al, ptype in ((self.install_actions, "added"),
-                    (self.update_actions, "changed")):
+                for al, ptype in ((self.pd.install_actions, "added"),
+                    (self.pd.update_actions, "changed")):
                         for i, ap in enumerate(al):
                                 if not ap or not (ap.dst.name == "link" or
                                     ap.dst.name == "hardlink"):
@@ -2475,7 +2288,7 @@
                 ):
                         ap.p.actions.removed.append((ap.dst,
                             None))
-                        self.removal_actions.append(ActionPlan(
+                        self.pd.removal_actions.append(_ActionPlan(
                             ap.p, ap.dst, None))
                 act_mediated_paths = None
 
@@ -2535,9 +2348,9 @@
                 being set but don't affect the plan and update proposed image
                 configuration."""
 
-                cfg_mediators = self.image.cfg.mediators
-                for m in self.__new_mediators:
-                        prop_mediators.setdefault(m, self.__new_mediators[m])
+                cfg_mediators = self.pd._cfg_mediators
+                for m in self.pd._new_mediators:
+                        prop_mediators.setdefault(m, self.pd._new_mediators[m])
                 for m in cfg_mediators:
                         if m in prop_mediators:
                                 continue
@@ -2573,7 +2386,7 @@
                 # instead of being explicitly requested).
 
                 # Initially assume mediation is changing.
-                self.__mediators_change = True
+                self.pd._mediators_change = True
 
                 for m in prop_mediators.keys():
                         if m not in cfg_mediators:
@@ -2601,9 +2414,9 @@
                                         # configuration.
                                         break
                         else:
-                                self.__mediators_change = False
-
-                self.__new_mediators = prop_mediators
+                                self.pd._mediators_change = False
+
+                self.pd._new_mediators = prop_mediators
 
                 # Link mediation is complete.
                 self.__progtrack.evaluate_progress()
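
The loop that clears _mediators_change above uses Python's for/else: the
else branch runs only when the loop completes without break, i.e. when no
mediator was actually found to differ.  The same pattern in isolation (the
real code also compares version and implementation sources)::

  def mediators_changed(proposed, configured):
          for m in proposed:
                  if m not in configured or \
                      proposed[m] != configured[m]:
                          break             # a real change; stop looking
          else:
                  return False              # loop never broke: no change
          return True

  assert mediators_changed({"java": "1.7"}, {"java": "1.6"})
  assert not mediators_changed({"java": "1.7"}, {"java": "1.7"})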
@@ -2613,23 +2426,23 @@
                 merge all the resultant actions for the packages being
                 updated."""
 
-                if self.__new_mediators is None:
-                        self.__new_mediators = {}
+                if self.pd._new_mediators is None:
+                        self.pd._new_mediators = {}
 
                 if self.image.has_boot_archive():
                         ramdisk_prefixes = tuple(
                             self.image.get_ramdisk_filelist())
                         if not ramdisk_prefixes:
-                                self.__need_boot_archive = False
+                                self.pd._need_boot_archive = False
                 else:
-                        self.__need_boot_archive = False
+                        self.pd._need_boot_archive = False
 
                 # now combine all actions together to create a synthetic
                 # single step upgrade operation, and handle editable
                 # files moving from package to package.  See theory
                 # comment in execute, below.
 
-                for pp in self.pkg_plans:
+                for pp in self.pd.pkg_plans:
                         if pp.origin_fmri and pp.destination_fmri:
                                 self.__target_update_count += 1
                         elif pp.destination_fmri:
@@ -2641,21 +2454,24 @@
                 # now combine all actions together to create a synthetic single
                 # step upgrade operation, and handle editable files moving from
                 # package to package.  See theory comment in execute, below.
-
-                self.removal_actions = []
-                cfg_mediators = self.image.cfg.mediators
+                self.pd.removal_actions = []
+
+                # cache the current image mediators within the plan
+                cfg_mediators = self.pd._cfg_mediators = \
+                    self.image.cfg.mediators
+
                 mediated_removed_paths = set()
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         for src, dest in p.gen_removal_actions():
                                 if src.name == "user":
-                                        self.removed_users[src.attrs["username"]] = \
-                                            p.origin_fmri
+                                        self.pd.removed_users[src.attrs[
+                                            "username"]] = p.origin_fmri
                                 elif src.name == "group":
-                                        self.removed_groups[src.attrs["groupname"]] = \
-                                            p.origin_fmri
-
-                                self.removal_actions.append(ActionPlan(p, src,
-                                    dest))
+                                        self.pd.removed_groups[src.attrs[
+                                            "groupname"]] = p.origin_fmri
+
+                                self.pd.removal_actions.append(
+                                    _ActionPlan(p, src, dest))
                                 if (not (src.name == "link" or
                                     src.name == "hardlink") or
                                     "mediator" not in src.attrs):
@@ -2691,50 +2507,50 @@
 
                 self.__progtrack.evaluate_progress()
 
-                self.update_actions = []
-                self.__rm_aliases = {}
-                for p in self.pkg_plans:
+                self.pd.update_actions = []
+                self.pd._rm_aliases = {}
+                for p in self.pd.pkg_plans:
                         for src, dest in p.gen_update_actions():
                                 if dest.name == "user":
-                                        self.added_users[dest.attrs["username"]] = \
-                                            p.destination_fmri
+                                        self.pd.added_users[dest.attrs[
+                                            "username"]] = p.destination_fmri
                                 elif dest.name == "group":
-                                        self.added_groups[dest.attrs["groupname"]] = \
-                                            p.destination_fmri
+                                        self.pd.added_groups[dest.attrs[
+                                            "groupname"]] = p.destination_fmri
                                 elif dest.name == "driver" and src:
                                         rm = \
                                             set(src.attrlist("alias")) - \
                                             set(dest.attrlist("alias"))
                                         if rm:
-                                                self.__rm_aliases.setdefault(
+                                                self.pd._rm_aliases.setdefault(
                                                     dest.attrs["name"],
                                                     set()).update(rm)
-                                self.update_actions.append(ActionPlan(p, src,
-                                    dest))
+                                self.pd.update_actions.append(
+                                    _ActionPlan(p, src, dest))
                 self.__progtrack.evaluate_progress()
 
-                self.install_actions = []
-                for p in self.pkg_plans:
+                self.pd.install_actions = []
+                for p in self.pd.pkg_plans:
                         for src, dest in p.gen_install_actions():
                                 if dest.name == "user":
-                                        self.added_users[dest.attrs["username"]] = \
-                                            p.destination_fmri
+                                        self.pd.added_users[dest.attrs[
+                                            "username"]] = p.destination_fmri
                                 elif dest.name == "group":
-                                        self.added_groups[dest.attrs["groupname"]] = \
-                                            p.destination_fmri
-                                self.install_actions.append(ActionPlan(p, src,
-                                    dest))
+                                        self.pd.added_groups[dest.attrs[
+                                            "groupname"]] = p.destination_fmri
+                                self.pd.install_actions.append(
+                                    _ActionPlan(p, src, dest))
                 self.__progtrack.evaluate_progress()
 
                 # In case a removed user or group was added back...
-                for entry in self.added_groups.keys():
-                        if entry in self.removed_groups:
-                                del self.removed_groups[entry]
-                for entry in self.added_users.keys():
-                        if entry in self.removed_users:
-                                del self.removed_users[entry]
-
-                self.state = MERGED_OK
+                for entry in self.pd.added_groups.keys():
+                        if entry in self.pd.removed_groups:
+                                del self.pd.removed_groups[entry]
+                for entry in self.pd.added_users.keys():
+                        if entry in self.pd.removed_users:
+                                del self.pd.removed_users[entry]
+
+                self.pd.state = plandesc.MERGED_OK
 
                 self.__find_all_conflicts()
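
Throughout this change ActionPlan becomes _ActionPlan, and it is used both
by attribute (ap.p, ap.src, ap.dst) and by tuple unpacking
(for p, src, dest in ...), which is exactly the behaviour of a three-field
namedtuple.  A sketch of that assumed shape::

  from collections import namedtuple

  # Assumed definition; consistent with every use in this diff.
  _ActionPlan = namedtuple("_ActionPlan", "p src dst")

  removal = _ActionPlan(p="pkgplan", src="old action", dst=None)
  install = _ActionPlan(p="pkgplan", src=None, dst="new action")

  p, src, dst = removal              # unpacking, as in the execute phases
  assert removal.src == src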
 
@@ -2759,7 +2575,7 @@
                         else:
                                 return v
 
-                for i, ap in enumerate(self.removal_actions):
+                for i, ap in enumerate(self.pd.removal_actions):
                         if ap is None:
                                 continue
                         self.__progtrack.evaluate_progress()
@@ -2788,13 +2604,13 @@
                                         # doesn't match the new mediation
                                         # criteria, it is safe to remove.
                                         mediator = ap.src.attrs.get("mediator")
-                                        if mediator in self.__new_mediators:
+                                        if mediator in self.pd._new_mediators:
                                                 src_version = ap.src.attrs.get(
                                                     "mediator-version")
                                                 src_impl = ap.src.attrs.get(
                                                     "mediator-implementation")
                                                 dest_version = \
-                                                    self.__new_mediators[mediator].get(
+                                                    self.pd._new_mediators[mediator].get(
                                                         "version")
                                                 if dest_version:
                                                         # Requested version needs
@@ -2803,7 +2619,7 @@
                                                         dest_version = \
                                                             dest_version.get_short_version()
                                                 dest_impl = \
-                                                    self.__new_mediators[mediator].get(
+                                                    self.pd._new_mediators[mediator].get(
                                                         "implementation")
                                                 if dest_version is not None and \
                                                     src_version != dest_version:
@@ -2821,7 +2637,7 @@
                                 remove = False
 
                         if not remove:
-                                self.removal_actions[i] = None
+                                self.pd.removal_actions[i] = None
                                 if "mediator" in ap.src.attrs:
                                         mediated_removed_paths.discard(
                                             ap.src.attrs["path"])
@@ -2844,11 +2660,18 @@
                                         fname = None
                                 attrs = re = None
 
-                        self.__actuators.scan_removal(ap.src.attrs)
-                        if self.__need_boot_archive is None:
+                        self.pd._actuators.scan_removal(ap.src.attrs)
+                        if self.pd._need_boot_archive is None:
                                 if ap.src.attrs.get("path", "").startswith(
                                     ramdisk_prefixes):
-                                        self.__need_boot_archive = True
+                                        self.pd._need_boot_archive = True
+
+                # reduce memory consumption
+                self.__directories = None
+                self.__symlinks = None
+                self.__hardlinks = None
+                self.__licenses = None
+                self.__legacy = None
 
                 self.__progtrack.evaluate_progress()
 
@@ -2860,7 +2683,7 @@
                 # must remain fixed, at least for the duration of the imageplan
                 # evaluation.
                 plan_pos = {}
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         for i, a in enumerate(p.gen_install_actions()):
                                 plan_pos[id(a[1])] = i
 
@@ -2870,11 +2693,11 @@
 
                 # This maps destination actions to the pkgplans they're
                 # associated with, which allows us to create the newly
-                # discovered update ActionPlans.
+                # discovered update _ActionPlans.
                 dest_pkgplans = {}
 
                 new_updates = []
-                for i, ap in enumerate(self.install_actions):
+                for i, ap in enumerate(self.pd.install_actions):
                         if ap is None:
                                 continue
                         self.__progtrack.evaluate_progress()
@@ -2891,7 +2714,7 @@
                             ap.dst.attrs["original_name"] in cons_named):
                                 cache_name = ap.dst.attrs["original_name"]
                                 index = cons_named[cache_name].idx
-                                ra = self.removal_actions[index].src
+                                ra = self.pd.removal_actions[index].src
                                 assert(id(ra) == cons_named[cache_name].id)
                                 # If the paths match, don't remove and add;
                                 # convert to update.
@@ -2900,8 +2723,8 @@
                                         # If we delete items here, the indices
                                         # in cons_named will be bogus, so mark
                                         # them for later deletion.
-                                        self.removal_actions[index] = None
-                                        self.install_actions[i] = None
+                                        self.pd.removal_actions[index] = None
+                                        self.pd.install_actions[i] = None
                                         # No need to handle it in cons_generic
                                         # anymore
                                         del cons_generic[("file", ra.attrs["path"])]
@@ -2920,12 +2743,12 @@
                         if (ap.dst.name, keyval) in cons_generic:
                                 nkv = ap.dst.name, keyval
                                 index = cons_generic[nkv].idx
-                                ra = self.removal_actions[index].src
+                                ra = self.pd.removal_actions[index].src
                                 assert(id(ra) == cons_generic[nkv].id)
                                 if keyval == ra.attrs[ra.key_attr]:
                                         new_updates.append((ra, ap.dst))
-                                        self.removal_actions[index] = None
-                                        self.install_actions[i] = None
+                                        self.pd.removal_actions[index] = None
+                                        self.pd.install_actions[i] = None
                                         dest_pkgplans[id(ap.dst)] = ap.p
                                         # Add the action to the pkgplan's update
                                         # list and mark it for removal from the
@@ -2935,11 +2758,11 @@
                                         pp_needs_trimming.add(ap.p)
                                 nkv = index = ra = None
 
-                        self.__actuators.scan_install(ap.dst.attrs)
-                        if self.__need_boot_archive is None:
+                        self.pd._actuators.scan_install(ap.dst.attrs)
+                        if self.pd._need_boot_archive is None:
                                 if ap.dst.attrs.get("path", "").startswith(
                                     ramdisk_prefixes):
-                                        self.__need_boot_archive = True
+                                        self.pd._need_boot_archive = True
 
                 del ConsolidationEntry, cons_generic, cons_named, plan_pos
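
The consolidation pass above pairs a removal and an install that share the
same key attribute and rewrites the pair as an update, setting the consumed
slots to None rather than deleting them so the indices cached in the
ConsolidationEntry tables stay valid.  A simplified sketch, treating each
action as a (key, payload) tuple::

  def consolidate(removals, installs):
          index_by_key = dict(
              (a[0], i) for i, a in enumerate(removals))
          updates = []
          for i, act in enumerate(installs):
                  j = index_by_key.get(act[0])
                  if j is None or removals[j] is None:
                          continue
                  updates.append((removals[j], act))
                  # Mark, don't delete: deletion would invalidate the
                  # cached indices.
                  removals[j] = None
                  installs[i] = None
          return updates

  rm = [("/usr/bin/foo", "old")]
  ins = [("/usr/bin/foo", "new"), ("/usr/bin/bar", "new")]
  assert consolidate(rm, ins) == \
      [(("/usr/bin/foo", "old"), ("/usr/bin/foo", "new"))]
  assert rm == [None] and ins[0] is None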
 
@@ -2956,7 +2779,8 @@
                 del pp_needs_trimming
 
                 # We want to cull out actions where they've not changed at all,
-                # leaving only the changed ones to put into self.update_actions.
+                # leaving only the changed ones to put into
+                # self.pd.update_actions.
                 nu_src = manifest.Manifest()
                 nu_src.set_content(content=(a[0] for a in new_updates),
                     excludes=self.__old_excludes)
@@ -2976,8 +2800,8 @@
 
                 # Extend update_actions with the new tuples.  The package plan
                 # is the one associated with the action getting installed.
-                self.update_actions.extend([
-                    ActionPlan(dest_pkgplans[id(dst)], src, dst)
+                self.pd.update_actions.extend([
+                    _ActionPlan(dest_pkgplans[id(dst)], src, dst)
                     for src, dst in nu_chg
                 ])
 
@@ -2990,7 +2814,7 @@
 
                 for prop in ("removal_actions", "install_actions",
                     "update_actions"):
-                        pval = getattr(self, prop)
+                        pval = getattr(self.pd, prop)
                         pval[:] = [
                             a
                             for a in pval
@@ -3006,7 +2830,7 @@
                 # Go over update actions
                 l_actions = self.get_actions("hardlink", self.hardlink_keyfunc)
                 l_refresh = []
-                for a in self.update_actions:
+                for a in self.pd.update_actions:
                         # For any files being updated that are the target of
                         # _any_ hardlink actions, append the hardlink actions
                         # to the update list so that they are not broken.
@@ -3018,7 +2842,7 @@
                                         unique_links = dict((l.attrs["path"], l)
                                             for l in l_actions[path])
                                         l_refresh.extend([
-                                            ActionPlan(a[0], l, l)
+                                            _ActionPlan(a[0], l, l)
                                             for l in unique_links.values()
                                         ])
                                 path = None
@@ -3026,58 +2850,59 @@
                         # scan both old and new actions
                         # repairs may result in update action w/o orig action
                         if a[1]:
-                                self.__actuators.scan_update(a[1].attrs)
-                        self.__actuators.scan_update(a[2].attrs)
-                        if self.__need_boot_archive is None:
+                                self.pd._actuators.scan_update(a[1].attrs)
+                        self.pd._actuators.scan_update(a[2].attrs)
+                        if self.pd._need_boot_archive is None:
                                 if a[2].attrs.get("path", "").startswith(
                                     ramdisk_prefixes):
-                                        self.__need_boot_archive = True
-
-                self.update_actions.extend(l_refresh)
+                                        self.pd._need_boot_archive = True
+
+                self.pd.update_actions.extend(l_refresh)
 
                 # sort actions to match needed processing order
                 remsort = operator.itemgetter(1)
                 addsort = operator.itemgetter(2)
-                self.removal_actions.sort(key=remsort, reverse=True)
-                self.update_actions.sort(key=addsort)
-                self.install_actions.sort(key=addsort)
+                self.pd.removal_actions.sort(key=remsort, reverse=True)
+                self.pd.update_actions.sort(key=addsort)
+                self.pd.install_actions.sort(key=addsort)
 
                 # Pre-calculate size of data retrieval for preexecute().
-                npkgs = nfiles = nbytes = 0
-                for p in self.pkg_plans:
+                for p in self.pd.pkg_plans:
                         nf, nb = p.get_xferstats()
-                        nbytes += nb
-                        nfiles += nf
+                        self.pd._dl_nbytes += nb
+                        self.pd._dl_nfiles += nf
 
                         # It's not perfectly accurate but we count a download
                         # even if the package will do zero data transfer.  This
                         # makes the pkg stats consistent between download and
                         # install.
-                        npkgs += 1
-                self.__progtrack.download_set_goal(npkgs, nfiles, nbytes)
+                        self.pd._dl_npkgs += 1
+                self.__progtrack.download_set_goal(self.pd._dl_npkgs,
+                    self.pd._dl_nfiles, self.pd._dl_nbytes)
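
The transfer statistics are now accumulated on the plan description
(_dl_npkgs, _dl_nfiles, _dl_nbytes) instead of in locals, presumably so a
plan that is saved and later reloaded can re-register the same download
goal without re-evaluating.  Roughly::

  def set_download_goal(pd, pkg_plans, progtrack):
          """Sketch: accumulate per-package transfer stats on the plan."""
          for p in pkg_plans:
                  nfiles, nbytes = p.get_xferstats()
                  pd._dl_nfiles += nfiles
                  pd._dl_nbytes += nbytes
                  # Count the package even for zero-byte transfers so
                  # download and install statistics stay consistent.
                  pd._dl_npkgs += 1
          progtrack.download_set_goal(pd._dl_npkgs, pd._dl_nfiles,
              pd._dl_nbytes)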
 
                 # Evaluation complete.
                 self.__progtrack.evaluate_done(self.__target_install_count, \
                     self.__target_update_count, self.__target_removal_count)
 
-                if self.__need_boot_archive is None:
-                        self.__need_boot_archive = False
-
-                self.state = EVALUATED_OK
+                if self.pd._need_boot_archive is None:
+                        self.pd._need_boot_archive = False
+
+                self.pd.state = plandesc.EVALUATED_OK
 
         def nothingtodo(self):
                 """Test whether this image plan contains any work to do """
 
-                if self.state == EVALUATED_PKGS:
-                        return not (self.__fmri_changes or
-                            self.__new_variants or
-                            (self.__new_facets is not None) or
-                            self.__mediators_change or
-                            self.pkg_plans)
-                elif self.state >= EVALUATED_OK:
-                        return not (self.pkg_plans or self.__new_variants or
-                            (self.__new_facets is not None) or
-                            self.__mediators_change)
+                if self.pd.state == plandesc.EVALUATED_PKGS:
+                        return not (self.pd._fmri_changes or
+                            self.pd._new_variants or
+                            (self.pd._new_facets is not None) or
+                            self.pd._mediators_change or
+                            self.pd.pkg_plans)
+                elif self.pd.state >= plandesc.EVALUATED_OK:
+                        return not (self.pd.pkg_plans or
+                            self.pd._new_variants or
+                            (self.pd._new_facets is not None) or
+                            self.pd._mediators_change)
 
         def preexecute(self):
                 """Invoke the evaluated image plan
@@ -3085,16 +2910,17 @@
                 execute actions need to be sorted across packages
                 """
 
-                assert self.state == EVALUATED_OK
-
-                if self._image_lm != self.image.get_last_modified():
+                assert self.pd.state == plandesc.EVALUATED_OK
+
+                if self.pd._image_lm != \
+                    self.image.get_last_modified(string=True):
                         # State has been modified since plan was created; this
                         # plan is no longer valid.
-                        self.state = PREEXECUTED_ERROR
+                        self.pd.state = plandesc.PREEXECUTED_ERROR
                         raise api_errors.InvalidPlanError()
 
                 if self.nothingtodo():
-                        self.state = PREEXECUTED_OK
+                        self.pd.state = plandesc.PREEXECUTED_OK
                         return
 
                 if self.image.version != self.image.CURRENT_VERSION:
@@ -3102,6 +2928,13 @@
                         raise api_errors.ImageFormatUpdateNeeded(
                             self.image.root)
 
+                if DebugValues["plandesc_validate"]:
+                        # get a json copy of the plan description so that
+                        # later we can verify that it wasn't updated during
+                        # the pre-execution stage.
+                        pd_json1 = self.pd.getstate(self.pd,
+                            reset_volatiles=True)
+
                 # Checks the index to make sure it exists and is
                 # consistent. If it's inconsistent an exception is thrown.
                 # If it's totally absent, it will index the existing packages
@@ -3150,15 +2983,15 @@
                 # check if we're going to have enough room
                 # stat fs again just in case someone else is using space...
                 self.__update_avail_space()
-                if self.__cbytes_added > self.__cbytes_avail: 
+                if self.pd._cbytes_added > self.pd._cbytes_avail:
                         raise api_errors.ImageInsufficentSpace(
-                            self.__cbytes_added,
-                            self.__cbytes_avail,
+                            self.pd._cbytes_added,
+                            self.pd._cbytes_avail,
                             _("Download cache"))
-                if self.__bytes_added > self.__bytes_avail:
+                if self.pd._bytes_added > self.pd._bytes_avail:
                         raise api_errors.ImageInsufficentSpace(
-                            self.__bytes_added,
-                            self.__bytes_avail,
+                            self.pd._bytes_added,
+                            self.pd._bytes_avail,
                             _("Root filesystem"))
 
                 # Remove history about manifest/catalog transactions.  This
@@ -3171,7 +3004,7 @@
                         # Check for license acceptance issues first to avoid
                         # wasted time in the download phase and so failure
                         # can occur early.
-                        for p in self.pkg_plans:
+                        for p in self.pd.pkg_plans:
                                 try:
                                         p.preexecute()
                                 except api_errors.PkgLicenseErrors, e:
@@ -3190,7 +3023,7 @@
                                 raise api_errors.PlanLicenseErrors(lic_errors)
 
                         try:
-                                for p in self.pkg_plans:
+                                for p in self.pd.pkg_plans:
                                         p.download(self.__progtrack,
                                             self.__check_cancel)
                         except EnvironmentError, e:
@@ -3210,30 +3043,49 @@
                         self.__progtrack.download_done()
                         self.image.transport.shutdown()
                 except:
-                        self.state = PREEXECUTED_ERROR
+                        self.pd.state = plandesc.PREEXECUTED_ERROR
                         raise
 
-                self.state = PREEXECUTED_OK
+                self.pd.state = plandesc.PREEXECUTED_OK
+
+                if DebugValues["plandesc_validate"]:
+                        # verify that preexecution did not update the plan
+                        pd_json2 = self.pd.getstate(self.pd,
+                            reset_volatiles=True)
+                        pkg.misc.json_diff("PlanDescription",
+                            pd_json1, pd_json2)
+                        del pd_json1, pd_json2
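
The plandesc_validate debug check brackets pre-execution with two serialized
snapshots and diffs them; any field that differs means the plan was mutated
during a stage that should treat it as read-only.  The same trick in generic
form, assuming the object's state is JSON-serializable::

  import json

  def assert_unmodified(obj, operation):
          """Run operation() and verify obj was not mutated."""
          before = json.dumps(vars(obj), sort_keys=True, default=str)
          operation()
          after = json.dumps(vars(obj), sort_keys=True, default=str)
          assert before == after, "object changed during operation"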
 
         def execute(self):
                 """Invoke the evaluated image plan
                 preexecute, execute and postexecute
                 execute actions need to be sorted across packages
                 """
-                assert self.state == PREEXECUTED_OK
-
-                if self._image_lm != self.image.get_last_modified():
+                assert self.pd.state == plandesc.PREEXECUTED_OK
+
+                if self.pd._image_lm != \
+                    self.image.get_last_modified(string=True):
                         # State has been modified since plan was created; this
                         # plan is no longer valid.
-                        self.state = EXECUTED_ERROR
+                        self.pd.state = plandesc.EXECUTED_ERROR
                         raise api_errors.InvalidPlanError()
 
+                # load data from previously downloaded actions
+                try:
+                        for p in self.pd.pkg_plans:
+                                p.cacheload()
+                except EnvironmentError, e:
+                        if e.errno == errno.EACCES:
+                                raise api_errors.PermissionsException(
+                                    e.filename)
+                        raise
+
                 # check for available space
                 self.__update_avail_space()
-                if self.__bytes_added > self.__bytes_avail:
+                if self.pd._bytes_added > self.pd._bytes_avail:
                         raise api_errors.ImageInsufficentSpace(
-                            self.__bytes_added,
-                            self.__bytes_avail,
+                            self.pd._bytes_added,
+                            self.pd._bytes_avail,
                             _("Root filesystem"))
 
                 #
@@ -3283,16 +3135,16 @@
                 #    driver to another.
 
                 if self.nothingtodo():
-                        self.state = EXECUTED_OK
+                        self.pd.state = plandesc.EXECUTED_OK
                         return
 
                 # This check must happen here because we need the state of
                 # the image as it was before the current operation runs.
                 empty_image = self.__is_image_empty()
 
-                self.__actuators.exec_prep(self.image)
-
-                self.__actuators.exec_pre_actuators(self.image)
+                self.pd._actuators.exec_prep(self.image)
+
+                self.pd._actuators.exec_pre_actuators(self.image)
 
                 # List of tuples of (src, dest) used to track each pkgplan so
                 # that it can be discarded after execution.
@@ -3303,8 +3155,8 @@
                                 # execute removals
                                 self.__progtrack.actions_set_goal(
                                     _("Removal Phase"),
-                                    len(self.removal_actions))
-                                for p, src, dest in self.removal_actions:
+                                    len(self.pd.removal_actions))
+                                for p, src, dest in self.pd.removal_actions:
                                         p.execute_removal(src, dest)
                                         self.__progtrack.actions_add_progress()
                                 self.__progtrack.actions_done()
@@ -3314,34 +3166,34 @@
                                 # This prevents two drivers from ever attempting
                                 # to have the same alias at the same time.
                                 for name, aliases in \
-                                    self.__rm_aliases.iteritems():
+                                    self.pd._rm_aliases.iteritems():
                                         driver.DriverAction.remove_aliases(name,
                                             aliases, self.image)
 
                                 # Done with removals; discard them so memory can
                                 # be re-used.
-                                self.removal_actions = []
+                                self.pd.removal_actions = []
 
                                 # execute installs
                                 self.__progtrack.actions_set_goal(
                                     _("Install Phase"),
-                                    len(self.install_actions))
-
-                                for p, src, dest in self.install_actions:
+                                    len(self.pd.install_actions))
+
+                                for p, src, dest in self.pd.install_actions:
                                         p.execute_install(src, dest)
                                         self.__progtrack.actions_add_progress()
                                 self.__progtrack.actions_done()
 
                                 # Done with installs, so discard them so memory
                                 # can be re-used.
-                                self.install_actions = []
+                                self.pd.install_actions = []
 
                                 # execute updates
                                 self.__progtrack.actions_set_goal(
                                     _("Update Phase"),
-                                    len(self.update_actions))
-
-                                for p, src, dest in self.update_actions:
+                                    len(self.pd.update_actions))
+
+                                for p, src, dest in self.pd.update_actions:
                                         p.execute_update(src, dest)
                                         self.__progtrack.actions_add_progress()
 
@@ -3349,16 +3201,16 @@
 
                                 # Done with updates, so discard them so memory
                                 # can be re-used.
-                                self.update_actions = []
+                                self.pd.update_actions = []
 
                                 # handle any postexecute operations
-                                while self.pkg_plans:
+                                while self.pd.pkg_plans:
                                         # postexecute in reverse, but pkg_plans
                                         # aren't ordered, so does it matter?
                                         # This allows the pkgplan objects to be
                                         # discarded as they're executed which
                                         # allows memory to be reused sooner.
-                                        p = self.pkg_plans.pop()
+                                        p = self.pd.pkg_plans.pop()
                                         p.postexecute()
                                         executed_pp.append((p.destination_fmri,
                                             p.origin_fmri))
@@ -3369,14 +3221,15 @@
                                     executed_pp, self.__progtrack)
 
                                 # write out variant changes to the image config
-                                if self.__varcets_change or \
-                                    self.__mediators_change:
+                                if self.pd._varcets_change or \
+                                    self.pd._mediators_change:
                                         self.image.image_config_update(
-                                            self.__new_variants,
-                                            self.__new_facets,
-                                            self.__new_mediators)
+                                            self.pd._new_variants,
+                                            self.pd._new_facets,
+                                            self.pd._new_mediators)
                                 # write out any changes
-                                self.image._avoid_set_save(*self.__new_avoid_obs)
+                                self.image._avoid_set_save(
+                                    *self.pd._new_avoid_obs)
 
                         except EnvironmentError, e:
                                 if e.errno == errno.EACCES or \
@@ -3397,9 +3250,10 @@
                                 raise
                 except pkg.actions.ActionError:
                         exc_type, exc_value, exc_tb = sys.exc_info()
-                        self.state = EXECUTED_ERROR
+                        self.pd.state = plandesc.EXECUTED_ERROR
                         try:
-                                self.__actuators.exec_fail_actuators(self.image)
+                                self.pd._actuators.exec_fail_actuators(
+                                    self.image)
                         except:
                                 # Ensure the real cause of failure is raised.
                                 pass
@@ -3407,36 +3261,28 @@
                             exc_value]), None, exc_tb
                 except:
                         exc_type, exc_value, exc_tb = sys.exc_info()
-                        self.state = EXECUTED_ERROR
+                        self.pd.state = plandesc.EXECUTED_ERROR
                         try:
-                                self.__actuators.exec_fail_actuators(self.image)
+                                self.pd._actuators.exec_fail_actuators(
+                                    self.image)
                         finally:
                                 # This ensures that the original exception and
                                 # traceback are used if exec_fail_actuators
                                 # fails.
                                 raise exc_value, None, exc_tb
                 else:
-                        self.__actuators.exec_post_actuators(self.image)
+                        self.pd._actuators.exec_post_actuators(self.image)
 
                 self.image._create_fast_lookups()
 
-                self.state = EXECUTED_OK
+                # success
+                self.pd.state = plandesc.EXECUTED_OK
+                self.pd._executed_ok()
 
                 # reduce memory consumption
-                self.added_groups = {}
-                self.removed_groups = {}
-                self.added_users = {}
-                self.removed_users = {}
                 self.saved_files = {}
                 self.valid_directories = set()
-                self.__fmri_changes  = []
-                self.__directories   = []
-                self.__actuators     = actuator.Actuator()
                 self.__cached_actions = {}
-                self.__symlinks = None
-                self.__hardlinks = None
-                self.__licenses = None
-                self.__legacy = None
 
                 # Clear out the primordial user and group caches.
                 self.image._users = set()
@@ -3509,7 +3355,8 @@
                 except StopIteration:
                         return True
 
-        def match_user_stems(self, patterns, match_type, raise_unmatched=True,
+        @staticmethod
+        def match_user_stems(image, patterns, match_type, raise_unmatched=True,
             raise_not_installed=True, return_matchdict=False, universe=None):
                 """Given a user specified list of patterns, return a set
                 of matching package stems.  Any versions specified are
@@ -3557,7 +3404,7 @@
                 # avoid checking everywhere
                 if not patterns:
                         return set()
-                brelease = self.image.attrs["Build-Release"]
+                brelease = image.attrs["Build-Release"]
 
                 illegals      = []
                 nonmatch      = []
@@ -3623,15 +3470,14 @@
                 ret = dict(zip(patterns, [set() for i in patterns]))
 
                 if universe is not None:
-                        assert match_type == self.MATCH_ALL
+                        assert match_type == ImagePlan.MATCH_ALL
                         pkg_names = universe
                 else:
-                        if match_type != self.MATCH_INST_VERSIONS:
-                                cat = self.image.get_catalog(
-                                    self.image.IMG_CATALOG_KNOWN)
+                        if match_type != ImagePlan.MATCH_INST_VERSIONS:
+                                cat = image.get_catalog(image.IMG_CATALOG_KNOWN)
                         else:
-                                cat = self.image.get_catalog(
-                                    self.image.IMG_CATALOG_INSTALLED)
+                                cat = image.get_catalog(
+                                    image.IMG_CATALOG_INSTALLED)
                         pkg_names = cat.pkg_names()
 
                 # construct matches for each pattern
@@ -3663,13 +3509,13 @@
                                 multispec.append(tuple([name] +
                                     matchdict[name]))
 
-                if match_type == self.MATCH_INST_VERSIONS:
+                if match_type == ImagePlan.MATCH_INST_VERSIONS:
                         not_installed, nonmatch = nonmatch, not_installed
-                elif match_type == self.MATCH_UNINSTALLED:
+                elif match_type == ImagePlan.MATCH_UNINSTALLED:
                         already_installed = [
                             name
-                            for name in self.image.get_catalog(
-                            self.image.IMG_CATALOG_INSTALLED).names()
+                            for name in image.get_catalog(
+                            image.IMG_CATALOG_INSTALLED).names()
                             if name in matchdict
                         ]
                 if illegals or (raise_unmatched and nonmatch) or multimatch \
@@ -3687,7 +3533,8 @@
                         return matchdict
                 return set(matchdict.keys())
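+
+        # Example (sketch; 'img' and the pattern are illustrative): with
+        # match_user_stems() now a staticmethod, callers can match patterns
+        # against an image without constructing an ImagePlan:
+        #
+        #     stems = ImagePlan.match_user_stems(img, ["web/server"],
+        #         ImagePlan.MATCH_INST_VERSIONS, raise_unmatched=False,
+        #         raise_not_installed=False)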
 
-        def __match_user_fmris(self, patterns, match_type,
+        @staticmethod
+        def __match_user_fmris(image, patterns, match_type,
             pub_ranks=misc.EmptyDict, installed_pkgs=misc.EmptyDict,
             raise_not_installed=True, reject_set=misc.EmptyI):
                 """Given a user-specified list of patterns, return a dictionary
@@ -3765,7 +3612,8 @@
                 patterns = list(set(patterns))
 
                 installed_pubs = misc.EmptyDict
-                if match_type in [self.MATCH_INST_STEMS, self.MATCH_ALL]:
+                if match_type in [ImagePlan.MATCH_INST_STEMS,
+                    ImagePlan.MATCH_ALL]:
                         # build installed publisher dictionary
                         installed_pubs = dict((
                             (f.pkg_name, f.get_publisher())
@@ -3773,7 +3621,7 @@
                         ))
 
                 # figure out which kind of matching rules to employ
-                brelease = self.image.attrs["Build-Release"]
+                brelease = image.attrs["Build-Release"]
                 latest_pats = set()
                 seen = set()
                 npatterns = []
@@ -3854,16 +3702,14 @@
                 # of installed publisher to produce better error message.
                 rejected_pubs = {}
 
-                if match_type != self.MATCH_INST_VERSIONS:
-                        cat = self.image.get_catalog(
-                            self.image.IMG_CATALOG_KNOWN)
+                if match_type != ImagePlan.MATCH_INST_VERSIONS:
+                        cat = image.get_catalog(image.IMG_CATALOG_KNOWN)
                         info_needed = [pkg.catalog.Catalog.DEPENDENCY]
                 else:
-                        cat = self.image.get_catalog(
-                            self.image.IMG_CATALOG_INSTALLED)
+                        cat = image.get_catalog(image.IMG_CATALOG_INSTALLED)
                         info_needed = []
 
-                variants = self.image.get_variants()
+                variants = image.get_variants()
                 for name in cat.names():
                         for pat, matcher, fmri, version, pub in \
                             zip(patterns, matchers, fmris, versions, pubs):
@@ -3878,7 +3724,7 @@
                                                 fpub = f.publisher
                                                 if pub and pub != fpub:
                                                         continue # specified pubs conflict
-                                                elif match_type == self.MATCH_INST_STEMS and \
+                                                elif match_type == ImagePlan.MATCH_INST_STEMS and \
                                                     f.pkg_name not in installed_pkgs:
                                                         # Matched stem is not
                                                         # in list of installed
@@ -3948,7 +3794,7 @@
                                                 ret[pat].setdefault(f.pkg_name,
                                                     []).append(f)
 
-                                                if not pub and match_type != self.MATCH_INST_VERSIONS and \
+                                                if not pub and match_type != ImagePlan.MATCH_INST_VERSIONS and \
                                                     name in installed_pubs and \
                                                     pub_ranks[installed_pubs[name]][1] \
                                                     == True and installed_pubs[name] != \
@@ -4081,7 +3927,7 @@
                         inst_pub = installed_pubs.get(name)
                         stripped_by_publisher = False
                         if not pub_named and common_pfmris and \
-                            match_type != self.MATCH_INST_VERSIONS and \
+                            match_type != ImagePlan.MATCH_INST_VERSIONS and \
                             inst_pub and pub_ranks[inst_pub][1] == True:
                                 common_pfmris = set(
                                     p for p in common_pfmris
@@ -4099,7 +3945,7 @@
                                 multispec.append(tuple([name] +
                                     [p for p, vs in rel_ps]))
 
-                if match_type != self.MATCH_ALL:
+                if match_type != ImagePlan.MATCH_ALL:
                         not_installed, nonmatch = nonmatch, not_installed
 
                 if illegals or nonmatch or multimatch or \
@@ -4115,7 +3961,7 @@
                             rejected_pats=exclpats)
 
                 # eliminate lower ranked publishers
-                if match_type != self.MATCH_INST_VERSIONS:
+                if match_type != ImagePlan.MATCH_INST_VERSIONS:
                         # no point for installed pkgs....
                         for pkg_name in proposed_dict:
                                 pubs_found = set([
@@ -4202,126 +4048,27 @@
 
                 return proposed_dict, references
 
-        # We must save the planned fmri change or the pkg_plans
-        class __save_encode(json.JSONEncoder):
-
-                def default(self, obj):
-                        """Required routine that overrides the default base
-                        class version and attempts to serialize 'obj' when
-                        attempting to save 'obj' json format."""
-
-                        if isinstance(obj, pkg.fmri.PkgFmri):
-                                return str(obj)
-                        if isinstance(obj, pkg.client.pkgplan.PkgPlan):
-                                return obj.getstate()
-                        return json.JSONEncoder.default(self, obj)
-
-        def __save(self, filename):
-                """Json encode fmri changes or pkg plans and save them to a
-                file."""
-
-                assert filename in [STATE_FILE_PKGS, STATE_FILE_ACTIONS]
-                if not os.path.isdir(self.image.plandir):
-                        os.makedirs(self.image.plandir)
-
-                # write the output file to a temporary file
-                pathtmp = os.path.join(self.image.plandir,
-                    "%s.%d.%d.json" % (filename, self.image.runid, os.getpid()))
-                oflags = os.O_CREAT | os.O_TRUNC | os.O_WRONLY
-                try:
-                        fobj = os.fdopen(os.open(pathtmp, oflags, 0644), "wb")
-                        if filename == STATE_FILE_PKGS:
-                                json.dump(self.__fmri_changes, fobj,
-                                    encoding="utf-8", cls=self.__save_encode)
-                        elif filename == STATE_FILE_ACTIONS:
-                                json.dump(self.pkg_plans, fobj,
-                                    encoding="utf-8", cls=self.__save_encode)
-                        fobj.close()
-                except OSError, e:
-                        raise api_errors._convert_error(e)
-
-                # atomically create the desired file
-                path = os.path.join(self.image.plandir,
-                    "%s.%d.json" % (filename, self.image.runid))
-
-                try:
-                        os.rename(pathtmp, path)
-                except OSError, e:
-                        raise api_errors._convert_error(e)
-
-        def __load_decode(self, dct):
-                """Routine that takes a loaded json dictionary and converts
-                any keys and/or values from unicode strings into ascii
-                strings.  (Keys or values of other types are left
-                unchanged.)"""
-
-                # Replace unicode keys/values with strings
-                rvdct = {}
-                for k, v in dct.items():
-                        # unicode must die
-                        if type(k) == unicode:
-                                k = k.encode("utf-8")
-                        if type(v) == unicode:
-                                v = v.encode("utf-8")
-                        rvdct[k] = v
-                return rvdct
-
-        def __load(self, filename):
-                """Load Json encoded fmri changes or pkg plans."""
-
-                assert filename in [STATE_FILE_PKGS, STATE_FILE_ACTIONS]
-
-                path = os.path.join(self.image.plandir,
-                    "%s.%d.json" % (filename, self.image.runid))
-
-                # load the json file
-                try:
-                        with open(path) as fobj:
-                                # fobj will be closed when we exit this loop
-                                data = json.load(fobj, encoding="utf-8",
-                                    object_hook=self.__load_decode)
-                except OSError, e:
-                        raise api_errors._convert_error(e)
-
-                if filename == STATE_FILE_PKGS:
-                        assert(type(data) == list)
-                        tuples = []
-                        for (old, new) in data:
-                                if old:
-                                        old = pkg.fmri.PkgFmri(str(old))
-                                if new:
-                                        new = pkg.fmri.PkgFmri(str(new))
-                                tuples.append((old, new))
-                        return tuples
-
-                elif filename == STATE_FILE_ACTIONS:
-                        pkg_plans = []
-                        for item in data:
-                                pp = pkgplan.PkgPlan(self.image)
-                                pp.setstate(item)
-                                pkg_plans.append(pp)
-                        return pkg_plans
-
-        def freeze_pkgs_match(self, pats):
+        @staticmethod
+        def freeze_pkgs_match(image, pats):
                 """Find the packages which match the given patterns and thus
                 should be frozen."""
 
                 pats = set(pats)
                 freezes = set()
-                pub_ranks = self.image.get_publisher_ranks()
+                pub_ranks = image.get_publisher_ranks()
                 installed_version_mismatches = {}
                 versionless_uninstalled = set()
                 multiversions = []
 
                 # Find the installed packages that match the provided patterns.
-                inst_dict, references = self.__match_user_fmris(pats,
-                    self.MATCH_INST_VERSIONS, pub_ranks=pub_ranks,
+                inst_dict, references = ImagePlan.__match_user_fmris(image,
+                    pats, ImagePlan.MATCH_INST_VERSIONS, pub_ranks=pub_ranks,
                     raise_not_installed=False)
 
                 # Find the installed package stems that match the provided
                 # patterns.
-                installed_stems_dict = self.match_user_stems(pats,
-                    self.MATCH_INST_VERSIONS, raise_unmatched=False,
+                installed_stems_dict = ImagePlan.match_user_stems(image, pats,
+                    ImagePlan.MATCH_INST_VERSIONS, raise_unmatched=False,
                     raise_not_installed=False, return_matchdict=True)
 
                 stems_of_fmri_matches = set(inst_dict.keys())
@@ -4388,7 +4135,7 @@
                                 multiversions.append((k, v))
                         else:
                                 stems[k] = v.pop()
-                        
+
                 if versionless_uninstalled or unmatched_wildcards or \
                     installed_version_mismatches or multiversions:
                         raise api_errors.FreezePkgsException(
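
The __save()/__load() helpers removed above JSON-encoded the planned fmri
changes and pkg plans into per-runid files under the image plan directory.
With this changeset that persistence moves behind getstate()/setstate()-style
serialization (compare pp.setstate() in the deleted loader above, and the
plandesc module referenced throughout this file).  A minimal sketch of the
same atomic save/restore idiom, with an illustrative 'plan' object standing
in for the real plan description:

    import json
    import os
    import tempfile

    def save_plan(plan, path):
            """JSON-encode plan state and atomically create 'path'."""
            fd, pathtmp = tempfile.mkstemp(dir=os.path.dirname(path))
            with os.fdopen(fd, "wb") as fobj:
                    json.dump(plan.getstate(), fobj, encoding="utf-8")
            os.rename(pathtmp, path)

    def load_plan(plan, path):
            """Restore previously saved plan state into 'plan'."""
            with open(path, "rb") as fobj:
                    plan.setstate(json.load(fobj, encoding="utf-8"))
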
--- a/src/modules/client/linkedimage/__init__.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/linkedimage/__init__.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 """
@@ -34,12 +34,9 @@
 
 # standard python classes
 import inspect
-import os
 
 # import linked image common code
-# W0401 Wildcard import
-# W0403 Relative import
-from common import * # pylint: disable-msg=W0401,W0403
+from pkg.client.linkedimage.common import * # pylint: disable-msg=W0401
 
 # names of linked image plugins
 p_types = [ "zone", "system" ]
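
The absolute import above replaces a relative one; it matters because this
package loads its plugin modules by the names listed in p_types, each of
which corresponds to a module under pkg.client.linkedimage (zone.py,
system.py) supplying the plugin classes.  A minimal sketch of that discovery
pattern (illustrative only; the real loader also uses inspect, imported
above, to pick the plugin classes out of each module):

    def load_plugin_module(p_type):
            """Import the linked image plugin module for 'p_type'."""
            assert p_type in p_types
            return __import__("pkg.client.linkedimage.%s" % p_type,
                fromlist=["*"])
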
--- a/src/modules/client/linkedimage/common.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/linkedimage/common.py	Mon Jul 11 13:49:50 2011 -0700
@@ -40,16 +40,13 @@
 
 """
 
-#
-# Too many lines in module; pylint: disable-msg=C0302
-#
-
 # standard python classes
+import collections
+import copy
 import operator
 import os
+import select
 import simplejson as json
-import sys
-import tempfile
 
 # pkg classes
 import pkg.actions
@@ -60,13 +57,14 @@
 import pkg.client.linkedimage
 import pkg.client.pkgdefs as pkgdefs
 import pkg.client.pkgplan as pkgplan
+import pkg.client.pkgremote
+import pkg.client.progress as progress
 import pkg.fmri
 import pkg.misc as misc
 import pkg.pkgsubprocess
 import pkg.version
 
 from pkg.client import global_settings
-from pkg.client.debugvalues import DebugValues
 
 logger = global_settings.logger
 
@@ -113,6 +111,80 @@
 PATH_PROP      = os.path.join(__DATA_DIR, "linked_prop")
 PATH_PUBS      = os.path.join(__DATA_DIR, "linked_ppubs")
 
+LI_RVTuple = collections.namedtuple("LI_RVTuple", "rvt_rv rvt_e rvt_p_dict")
+
+def _li_rvtuple_check(rvtuple):
+        """Sanity check a linked image operation return value tuple.
+        The format of said tuple is:
+                process return code
+                LinkedImageException exception (optional)
+                json dictionary containing planned image changes
+        """
+
+        # make sure we're using the LI_RVTuple class
+        assert type(rvtuple) == LI_RVTuple
+
+        # decode the tuple
+        rv, e, p_dict = rvtuple
+
+        # rv must be an integer
+        assert type(rv) == int
+        # any exception returned must be a LinkedImageException
+        assert e is None or type(e) == apx.LinkedImageException
+        # if specified, p_dict must be a dictionary
+        assert p_dict is None or type(p_dict) is dict
+        # some child return codes should never be associated with an exception
+        assert rv not in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP] or e is None
+        # a p_dict can only be returned if the child returned EXIT_OK
+        assert rv == pkgdefs.EXIT_OK or p_dict is None
+
+        # return the value that was passed in
+        return rvtuple
+
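+# Example (sketch): a successful child operation yields
+# LI_RVTuple(pkgdefs.EXIT_OK, None, p_dict) and a failed one
+# LI_RVTuple(e.lix_exitrv, e, None); pairing EXIT_OK or EXIT_NOP with an
+# exception, or a p_dict with any return code other than EXIT_OK, trips
+# the assertions above.
+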
+def _li_rvdict_check(rvdict):
+        """Given a linked image return value dictionary, sanity check all the
+        entries."""
+
+        assert(type(rvdict) == dict)
+        for k, v in rvdict.iteritems():
+                assert type(k) == LinkedImageName, \
+                    ("Unexpected rvdict key: ", k)
+                _li_rvtuple_check(v)
+
+        # return the value that was passed in
+        return rvdict
+
+def _li_rvdict_exceptions(rvdict):
+        """Given a linked image return value dictionary, return a list of any
+        exceptions that were encountered while processing children."""
+
+        # sanity check rvdict
+        _li_rvdict_check(rvdict)
+
+        # get a list of exceptions
+        return [
+            rvtuple.rvt_e
+            for rvtuple in rvdict.values()
+            if rvtuple.rvt_e is not None
+        ]
+
+def _li_rvdict_raise_exceptions(rvdict):
+        """If an exception was encountered while operating on a linked
+        child then raise that exception.  If multiple exceptions were
+        encountered while operating on multiple children, then bundle
+        those exceptions together and raise them."""
+
+        # get a list of exceptions
+        exceptions = _li_rvdict_exceptions(rvdict)
+
+        if len(exceptions) == 1:
+                # one exception encountered
+                raise exceptions[0]
+
+        if exceptions:
+                # multiple exceptions encountered
+                raise apx.LinkedImageException(bundle=exceptions)
+
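+# Example (sketch): given two diverged children,
+#     {lin1: LI_RVTuple(pkgdefs.EXIT_DIVERGED, e1, None),
+#      lin2: LI_RVTuple(pkgdefs.EXIT_DIVERGED, e2, None)}
+# raises a single LinkedImageException bundling e1 and e2, while a lone
+# failure re-raises that child's original exception unchanged.
+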
 class LinkedImagePlugin(object):
         """This class is a template that all linked image plugins should
         inherit from.  Linked image plugins derived from this class are
@@ -191,8 +263,7 @@
                 """Sync out the in-memory linked image state of this image to
                 disk."""
 
-                # return value: tuple:
-                #    (pkgdefs EXIT_* return value, exception object or None)
+                # return value: LI_RVTuple()
                 raise NotImplementedError
 
 
@@ -247,6 +318,20 @@
                 if self.lin_type not in pkg.client.linkedimage.p_types:
                         raise apx.LinkedImageException(lin_malformed=name)
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                # Unused argument; pylint: disable-msg=W0613
+                return str(obj)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                # Unused argument; pylint: disable-msg=W0613
+                return LinkedImageName(state)
+
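+        # Example (sketch): getstate()/fromstate() round-trip through the
+        # string form, so
+        #     LinkedImageName.fromstate(LinkedImageName.getstate(lin))
+        # returns an object that compares equal to the original 'lin'.
+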
         def __str__(self):
                 return "%s:%s" % (self.lin_type, self.lin_name)
 
@@ -276,7 +361,7 @@
                 return str(self) == str(other)
 
         def __ne__(self, other):
-                return not self.__eq__(self, other)
+                return not self.__eq__(other)
 
 class LinkedImage(object):
         """A LinkedImage object is used to manage the linked image aspects of
@@ -285,9 +370,6 @@
         properties and also provides routines that allow operations to be
         performed on child images."""
 
-        # Too many instance attributes; pylint: disable-msg=R0902
-        # Too many public methods; pylint: disable-msg=R0904
-
         # Properties that a parent image with push children should save locally.
         __parent_props = frozenset([
             PROP_PATH
@@ -328,8 +410,9 @@
                 self.__ppubs = None
                 self.__pimg = None
 
-                # variables reset by self.reset_recurse()
-                self.__lic_list = []
+                # variables reset by self.__recursion_init()
+                self.__lic_ignore = None
+                self.__lic_dict = {}
 
                 # variables reset by self._init_root()
                 self.__root = None
@@ -339,7 +422,6 @@
 
                 # initialize with no properties
                 self.__update_props()
-                self.reset_recurse()
 
                 # initialize linked image plugin objects
                 self.__plugins = dict()
@@ -408,7 +490,7 @@
                         lip.init_root(old_altroot)
 
                 # Tell linked image children about the updated paths
-                for lic in self.__lic_list:
+                for lic in self.__lic_dict.itervalues():
                         lic.child_init_root(old_altroot)
 
         def __update_props(self, props=None):
@@ -595,7 +677,8 @@
                 versions (which have ".<runid>" appended to them)."""
 
                 path = self.__path_prop
-                path_tmp = "%s.%d" % (self.__path_prop, self.__img.runid)
+                path_tmp = "%s.%d" % (self.__path_prop,
+                    global_settings.client_runid)
 
                 # read the linked image properties from disk
                 if tmp and path_exists(path_tmp):
@@ -631,7 +714,8 @@
                 linked image metadata files, or if we should access temporary
                 versions (which have ".<runid>" appended to them)."""
 
-                path = "%s.%d" % (self.__path_ppkgs, self.__img.runid)
+                path = "%s.%d" % (self.__path_ppkgs,
+                    global_settings.client_runid)
                 if tmp and path_exists(path):
                         return frozenset([
                             pkg.fmri.PkgFmri(str(s))
@@ -655,7 +739,8 @@
                 linked image metadata files, or if we should access temporary
                 versions (which have ".<runid>" appended to them)."""
 
-                path = "%s.%d" % (self.__path_ppubs, self.__img.runid)
+                path = "%s.%d" % (self.__path_ppubs,
+                    global_settings.client_runid)
                 if tmp and path_exists(path):
                         return load_data(path)
 
@@ -765,6 +850,8 @@
                                     attach_bad_prop=k))
                                 continue
 
+                if len(errs) == 1:
+                        raise errs[0]
                 if errs:
                         raise apx.LinkedImageException(bundle=errs)
 
@@ -780,7 +867,6 @@
                 try:
                         pimg = self.__img.alloc(
                             root=path,
-                            runid=self.__img.runid,
                             user_provided_dir=True,
                             cmdpath=self.__img.cmdpath)
                 except apx.ImageNotFoundException:
@@ -853,7 +939,7 @@
                         rv.append([str(p), p.sticky])
                 return rv
 
-        def check_pubs(self, op):
+        def pubcheck(self):
                 """If we're a child image's, verify that the parent image
                 publisher configuration is a subset of the child images
                 publisher configuration.  This means that all publishers
@@ -875,10 +961,6 @@
                 if self.__img.cfg.get_policy("use-system-repo"):
                         return
 
-                if op in [pkgdefs.API_OP_DETACH]:
-                        # we don't need to do a pubcheck for detach
-                        return
-
                 pubs = self.get_pubs()
                 ppubs = self.__ppubs
 
@@ -899,7 +981,7 @@
                         raise apx.PlanCreationException(
                             linked_pub_error=(pubs, ppubs))
 
-        def syncmd_from_parent(self, op=None):
+        def syncmd_from_parent(self, api_op=None):
                 """Update linked image constraint, publisher data, and
                 state from our parent image."""
 
@@ -911,7 +993,7 @@
                         # parent pushes data to us, nothing to do
                         return
 
-                # initalize the parent image
+                # initialize the parent image
                 if not self.__pimg:
                         path = self.__props[PROP_PARENT_PATH]
                         self.__pimg = self.__init_pimg(path)
@@ -942,7 +1024,7 @@
 
                 # if we're not planning an image attach operation then write
                 # the linked image metadata to disk.
-                if op != pkgdefs.API_OP_ATTACH:
+                if api_op != pkgdefs.API_OP_ATTACH:
                         self.syncmd()
 
         def syncmd(self):
@@ -954,7 +1036,8 @@
 
                 # cleanup any temporary files
                 for path in paths:
-                        path = "%s.%d" % (path, self.__img.runid)
+                        path = "%s.%d" % (path,
+                            global_settings.client_runid)
                         path_unlink(path, noent_ok=True)
 
                 if not self.ischild() and not self.isparent():
@@ -1004,10 +1087,10 @@
                 return len(self.__list_children(
                     ignore_errors=ignore_errors)) > 0
 
-        def isparent(self):
+        def isparent(self, li_ignore=None):
                 """Indicates whether the current image is a parent image."""
 
-                return self.__isparent()
+                return len(self.__list_children(li_ignore=li_ignore)) > 0
 
         def child_props(self, lin=None):
                 """Return a dictionary which represents the linked image
@@ -1056,6 +1139,17 @@
                         raise apx.LinkedImageException(child_unknown=lin)
                 return False
 
+        def verify_names(self, lin_list):
+                """Given a list of linked image name objects, make sure all
+                the children exist."""
+
+                assert isinstance(lin_list, list), \
+                    "type(lin_list) == %s, str(lin_list) == %s" % \
+                    (type(lin_list), str(lin_list))
+
+                for lin in lin_list:
+                        self.__verify_child_name(lin, raise_except=True)
+
         def parent_fmris(self):
                 """A set of the fmris installed in our parent image."""
 
@@ -1113,6 +1207,8 @@
                     apx.LinkedImageException(child_unknown=lin)
                     for lin in (set(li_ignore) - li_all)
                 ]
+                if len(errs) == 1:
+                        raise errs[0]
                 if errs:
                         raise apx.LinkedImageException(bundle=errs)
 
@@ -1308,7 +1404,8 @@
                 parent."""
 
                 if not self.ischild():
-                        return (pkgdefs.EXIT_OOPS, self.__apx_not_child(), None)
+                        e = self.__apx_not_child()
+                        return LI_RVTuple(pkgdefs.EXIT_OOPS, e, None)
 
                 try:
                         if li_parent_sync:
@@ -1317,14 +1414,14 @@
                                 self.syncmd_from_parent()
 
                 except apx.LinkedImageException, e:
-                        return (e.lix_exitrv, e, None)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
 
                 if not self.__insync():
                         e = apx.LinkedImageException(
                             child_diverged=self.child_name)
-                        return (pkgdefs.EXIT_DIVERGED, e, None)
-
-                return (pkgdefs.EXIT_OK, None, None)
+                        return LI_RVTuple(pkgdefs.EXIT_DIVERGED, e, None)
+
+                return LI_RVTuple(pkgdefs.EXIT_OK, None, None)
 
         @staticmethod
         def __rvdict2rv(rvdict, rv_map=None):
@@ -1332,13 +1429,7 @@
                 from an operations on multiple children and merges the results
                 into a single return code."""
 
-                assert not rvdict or type(rvdict) == dict
-                for k, (rv, err, p_dict) in rvdict.iteritems():
-                        assert type(k) == LinkedImageName
-                        assert type(rv) == int
-                        assert err is None or \
-                            isinstance(err, apx.LinkedImageException)
-                        assert p_dict is None or isinstance(p_dict, dict)
+                _li_rvdict_check(rvdict)
                 if type(rv_map) != type(None):
                         assert type(rv_map) == list
                         for (rv_set, rv) in rv_map:
@@ -1346,21 +1437,25 @@
                                 assert(type(rv) == int)
 
                 if not rvdict:
-                        return (pkgdefs.EXIT_OK, None, None)
+                        return LI_RVTuple(pkgdefs.EXIT_OK, None, None)
 
                 if not rv_map:
                         rv_map = [(set([pkgdefs.EXIT_OK]), pkgdefs.EXIT_OK)]
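+                        # i.e., by default only "every child returned
+                        # EXIT_OK" maps onto a single EXIT_OK; any other
+                        # combination falls through to the error/partial
+                        # handling below.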
 
                 p_dicts = [
-                    p_dict for (rv, e, p_dict) in rvdict.itervalues()
-                    if p_dict is not None
+                    rvtuple.rvt_p_dict
+                    for rvtuple in rvdict.itervalues()
+                    if rvtuple.rvt_p_dict is not None
                 ]
 
                 rv_mapped = set()
-                rv_seen = set([rv for (rv, e, p_dict) in rvdict.itervalues()])
+                rv_seen = set([
+                    rvtuple.rvt_rv
+                    for rvtuple in rvdict.itervalues()
+                ])
                 for (rv_map_set, rv_map_rv) in rv_map:
                         if (rv_seen == rv_map_set):
-                                return (rv_map_rv, None, p_dicts)
+                                return LI_RVTuple(rv_map_rv, None, p_dicts)
                         # keep track of all the return values that are mapped
                         rv_mapped |= rv_map_set
 
@@ -1369,20 +1464,22 @@
 
                 # if we had errors for unmapped return values, bundle them up
                 errs = [
-                        e
-                        for (rv, e, p_dict) in rvdict.itervalues()
-                        if e and rv not in rv_mapped
+                        rvtuple.rvt_e
+                        for rvtuple in rvdict.itervalues()
+                        if rvtuple.rvt_e and rvtuple.rvt_rv not in rv_mapped
                 ]
-                if errs:
+                if len(errs) == 1:
+                        err = errs[0]
+                elif errs:
                         err = apx.LinkedImageException(bundle=errs)
                 else:
                         err = None
 
                 if len(rv_seen) == 1:
                         # we have one consistent return value
-                        return (list(rv_seen)[0], err, p_dicts)
-
-                return (pkgdefs.EXIT_PARTIAL, err, p_dicts)
+                        return LI_RVTuple(list(rv_seen)[0], err, p_dicts)
+
+                return LI_RVTuple(pkgdefs.EXIT_PARTIAL, err, p_dicts)
 
         def audit_rvdict2rv(self, rvdict):
                 """Convenience function that takes a dictionary returned from
@@ -1530,8 +1627,6 @@
                 For descriptions of the parameters please see
                 api.py`gen_plan_*"""
 
-                # Too many arguments; pylint: disable-msg=R0913
-                # Too many return statements; pylint: disable-msg=R0911
                 assert type(lin) == LinkedImageName
                 assert type(path) == str
                 assert props == None or type(props) == dict, \
@@ -1543,12 +1638,12 @@
                 if not lip.support_attach and not force:
                         e = apx.LinkedImageException(
                             attach_child_notsup=lin.lin_type)
-                        return (e.lix_exitrv, e, None)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
 
                 # Path must be an absolute path.
                 if not os.path.isabs(path):
                         e = apx.LinkedImageException(child_path_notabs=path)
-                        return (e.lix_exitrv, e, None)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
 
                 # cleanup specified path
                 cwd = os.getcwd()
@@ -1556,7 +1651,7 @@
                         os.chdir(path)
                 except OSError, e:
                         e = apx.LinkedImageException(child_path_eaccess=path)
-                        return (e.lix_exitrv, e, None)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
                 path = os.getcwd()
                 os.chdir(cwd)
 
@@ -1580,7 +1675,7 @@
                         self.__validate_child_attach(lin, path, props,
                             allow_relink=allow_relink)
                 except apx.LinkedImageException, e:
-                        return (e.lix_exitrv, e, None)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
 
                 # make a copy of the options and start updating them
                 child_props = props.copy()
@@ -1599,95 +1694,162 @@
 
                 if noexecute and li_md_only:
                         # we've validated parameters, nothing else to do
-                        return (pkgdefs.EXIT_OK, None, None)
+                        return LI_RVTuple(pkgdefs.EXIT_OK, None, None)
 
                 # update the child
                 try:
                         lic = LinkedImageChild(self, lin)
                 except apx.LinkedImageException, e:
-                        return (e.lix_exitrv, e, None)
-
-                rv, e, p_dict = self.__sync_child(lic,
-                    accept=accept, li_attach_sync=True, li_md_only=li_md_only,
-                    li_pkg_updates=li_pkg_updates, noexecute=noexecute,
-                    progtrack=progtrack,
-                    refresh_catalogs=refresh_catalogs, reject_list=reject_list,
-                    show_licenses=show_licenses, update_index=update_index)
-
-                assert isinstance(e, (type(None), apx.LinkedImageException))
-
-                if rv not in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP]:
-                        return (rv, e, p_dict)
-
-                if noexecute:
-                        # if noexecute then we're done
-                        return (pkgdefs.EXIT_OK, None, p_dict)
-
-                # save child image properties
-                rv, e = lip.sync_children_todisk()
-                assert isinstance(e, (type(None), apx.LinkedImageException))
-                if e:
-                        return (pkgdefs.EXIT_OOPS, e, p_dict)
+                        return LI_RVTuple(e.lix_exitrv, e, None)
+
+                rvdict = {}
+                list(self.__children_op(
+                    _pkg_op=pkgdefs.PKG_OP_SYNC,
+                    _lic_list=[lic],
+                    _rvdict=rvdict,
+                    _progtrack=progtrack,
+                    _failfast=False,
+                    _expect_plan=True,
+                    accept=accept,
+                    li_attach_sync=True,
+                    li_md_only=li_md_only,
+                    li_pkg_updates=li_pkg_updates,
+                    noexecute=noexecute,
+                    refresh_catalogs=refresh_catalogs,
+                    reject_list=reject_list,
+                    show_licenses=show_licenses,
+                    update_index=update_index))
+
+                rvtuple = rvdict[lin]
+
+                if noexecute or rvtuple.rvt_rv not in [
+                    pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP ]:
+                        return rvtuple
+
+                # commit child image property updates
+                rvtuple2 = lip.sync_children_todisk()
+                _li_rvtuple_check(rvtuple2)
+                if rvtuple2.rvt_e:
+                        return rvtuple2
 
                 # save parent image properties
                 self.syncmd()
 
-                return (pkgdefs.EXIT_OK, None, p_dict)
-
-        def audit_children(self, lin_list, **kwargs):
+                # The recursive child operation may have returned NOP, but
+                # since we always update our own image metadata, we always
+                # return OK.
+                if rvtuple.rvt_rv == pkgdefs.EXIT_NOP:
+                        return LI_RVTuple(pkgdefs.EXIT_OK, None, None)
+                return rvtuple
+
+        def audit_children(self, lin_list):
                 """Audit one or more children of the current image to see if
                 they are in sync with this image."""
 
-                return self.__children_op(lin_list,
-                    self.__audit_child, **kwargs)
-
-        def sync_children(self, lin_list, **kwargs):
+                if lin_list == []:
+                        lin_list = None
+
+                lic_dict, rvdict = self.__children_init(lin_list=lin_list,
+                    failfast=False)
+
+                list(self.__children_op(
+                    _pkg_op=pkgdefs.PKG_OP_AUDIT_LINKED,
+                    _lic_list=lic_dict.values(),
+                    _rvdict=rvdict,
+                    _progtrack=progress.QuietProgressTracker(),
+                    _failfast=False))
+                return rvdict
+
+        def sync_children(self, lin_list, accept=False, li_attach_sync=False,
+            li_md_only=False, li_pkg_updates=True, progtrack=None,
+            noexecute=False, refresh_catalogs=True, reject_list=misc.EmptyI,
+            show_licenses=False, update_index=True):
                 """Sync one or more children of the current image."""
 
-                return self.__children_op(lin_list,
-                    self.__sync_child, **kwargs)
-
-        def detach_children(self, lin_list, **kwargs):
+                if progtrack is None:
+                        progtrack = progress.QuietProgressTracker()
+
+                if lin_list == []:
+                        lin_list = None
+
+                lic_dict = self.__children_init(lin_list=lin_list)
+
+                rvdict = {}
+                list(self.__children_op(
+                    _pkg_op=pkgdefs.PKG_OP_SYNC,
+                    _lic_list=lic_dict.values(),
+                    _rvdict=rvdict,
+                    _progtrack=progtrack,
+                    _failfast=False,
+                    _expect_plan=True,
+                    accept=accept,
+                    li_attach_sync=li_attach_sync,
+                    li_md_only=li_md_only,
+                    li_pkg_updates=li_pkg_updates,
+                    noexecute=noexecute,
+                    refresh_catalogs=refresh_catalogs,
+                    reject_list=reject_list,
+                    show_licenses=show_licenses,
+                    update_index=update_index))
+                return rvdict
+
+        def detach_children(self, lin_list, force=False, noexecute=False):
                 """Detach one or more children from the current image. This
                 operation results in the removal of any constraint package
                 from the child images."""
 
-                # get parameter meant for __detach_child()
-                force = noexecute = False
-                if "force" in kwargs:
-                        force = kwargs["force"]
-                if "noexecute" in kwargs:
-                        noexecute = kwargs["noexecute"]
-
-                # expand lin_list before calling __detach_child()
-                if not lin_list:
-                        lin_list = [i[0] for i in self.__list_children()]
-
-                rvdict = self.__children_op(lin_list,
-                    self.__detach_child, **kwargs)
-
-                for lin in lin_list:
+                if lin_list == []:
+                        lin_list = None
+
+                lic_dict, rvdict = self.__children_init(lin_list=lin_list,
+                    failfast=False)
+
+                # check if we support detach for these children.  we walk a
+                # list of the keys (rather than a dictionary iterator)
+                # because we might modify lic_dict as we go.
+                for lin in lic_dict.keys():
+                        lip = self.__plugins[lin.lin_type]
+                        if lip.support_detach or force:
+                                continue
+
+                        # we can't detach this type of image.
+                        e = apx.LinkedImageException(
+                                detach_child_notsup=lin.lin_type)
+                        rvdict[lin] = LI_RVTuple(e.lix_exitrv, e, None)
+                        _li_rvtuple_check(rvdict[lin])
+                        del lic_dict[lin]
+
+                # do the detach
+                list(self.__children_op(
+                    _pkg_op=pkgdefs.PKG_OP_DETACH,
+                    _lic_list=lic_dict.values(),
+                    _rvdict=rvdict,
+                    _progtrack=progress.QuietProgressTracker(),
+                    _failfast=False,
+                    noexecute=noexecute))
+
+                # for each child that was successfully detached, discard our
+                # metadata for that child.
+                for lin, rvtuple in rvdict.iteritems():
+
                         # if the detach failed leave metadata in parent
-                        # Unused variable 'rv'; pylint: disable-msg=W0612
-                        rv, e, p_dict = rvdict[lin]
-                        # pylint: enable-msg=W0612
-                        assert e == None or \
-                            (isinstance(e, apx.LinkedImageException))
-                        if e and not force:
+                        if rvtuple.rvt_e and not force:
                                 continue
 
                         # detach the child in memory
                         lip = self.__plugins[lin.lin_type]
                         lip.detach_child_inmemory(lin)
 
-                        if not noexecute:
-                                # sync out the fact that we detached the child
-                                rv2, e2 = lip.sync_children_todisk()
-                                assert e2 == None or \
-                                    (isinstance(e2, apx.LinkedImageException))
-                                if not e:
-                                        # don't overwrite previous errors
-                                        rvdict[lin] = (rv2, e2, p_dict)
+                        if noexecute:
+                                continue
+
+                        # commit child image property updates
+                        rvtuple2 = lip.sync_children_todisk()
+                        _li_rvtuple_check(rvtuple2)
+
+                        # don't overwrite previous errors
+                        if rvtuple2.rvt_e and rvtuple.rvt_e is None:
+                                rvdict[lin] = rvtuple2
 
                 if not (self.ischild() or self.isparent()):
                         # we're not linked anymore, so delete all our linked
@@ -1697,94 +1859,356 @@
 
                 return rvdict
 
-        def __children_op(self, lin_list, op, **kwargs):
-                """Perform a linked image operation on multiple children."""
-
-                assert type(lin_list) == list
-                assert type(kwargs) == dict
-                assert "lin" not in kwargs
-                assert "lic" not in kwargs
-
-                if not lin_list:
-                        lin_list = [i[0] for i in self.__list_children()]
-
-                rvdict = dict()
+        @staticmethod
+        def __children_op(_pkg_op, _lic_list, _rvdict, _progtrack, _failfast,
+            _expect_plan=False, _ignore_syncmd_nop=True, _pd=None,
+            **kwargs):
+                """An iterator function which performs a linked image
+                operation on multiple children in parallel.
+
+                '_pkg_op' is the pkg.1 operation that we're going to perform.
+
+                '_lic_list' is a list of linked image child objects to perform
+                the operation on.
+
+                '_rvdict' is a dictionary, indexed by linked image name, which
+                contains rvtuples of the result of the operation for each
+                child.
+
+                '_progtrack' is a ProgressTracker object.
+
+                '_failfast' is a boolean.  If True and we encounter a failure
+                operating on a child then we raise an exception immediately.
+                If False then we'll attempt to perform the operation on all
+                children and rvdict will contain a LI_RVTuple result for all
+                children.
+
+                '_expect_plan' is a boolean that indicates if we expect this
+                operation to generate an image plan.
+
+                '_ignore_syncmd_nop' is a boolean that indicates if we should
+                always recurse into a child even if the linked image metadata
+                isn't changing.
+
+                '_pd' is a PlanDescription object."""
+
+                if _lic_list:
+                        _progtrack.li_recurse_start()
+
+                if _pkg_op in [ pkgdefs.PKG_OP_AUDIT_LINKED,
+                    pkgdefs.PKG_OP_PUBCHECK ]:
+                        # these operations are cheap, use full parallelism
+                        concurrency = -1
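+                        # (the dispatch loop below treats any concurrency
+                        # value <= 0 as "no limit")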
+                else:
+                        concurrency = global_settings.client_concurrency
+
+                # setup operation for each child
+                lic_setup = []
+                for lic in _lic_list:
+                        try:
+                                lic.child_op_setup(_pkg_op, _progtrack,
+                                    _ignore_syncmd_nop, _pd, **kwargs)
+                                lic_setup.append(lic)
+                        except apx.LinkedImageException, e:
+                                _rvdict[lic.child_name] = \
+                                    LI_RVTuple(e.lix_exitrv, e, None)
+
+                # if _failfast is true, then throw an exception if we failed
+                # to setup any of the children.  if _failfast is false we'll
+                # continue to perform the operation on any children that
+                # successfully initialized and we'll report setup errors along
+                # with the final results for all children.
+                if _failfast and _li_rvdict_exceptions(_rvdict):
+                        # before we raise an exception we need to cleanup any
+                        # children that we setup.
+                        for lic in lic_setup:
+                                lic.child_op_abort()
+                        # raise an exception
+                        _li_rvdict_raise_exceptions(_rvdict)
+
+                def __child_op_finish(lic, lic_list, _rvdict, _progtrack,
+                    _failfast, _expect_plan):
+                        """An iterator function invoked when a child has
+                        finished an operation.
+
+                        'lic' is the child that has finished execution.
+
+                        'lic_list' a list of children to remove 'lic' from.
+
+                        See __children_op() for an explanation of the other
+                        parameters."""
+
+                        assert lic.child_op_is_done()
+
+                        lic_list.remove(lic)
+
+                        (rvtuple, stdout, stderr) = lic.child_op_rv(
+                            _progtrack, _expect_plan)
+                        _li_rvtuple_check(rvtuple)
+                        _rvdict[lic.child_name] = rvtuple
+
+                        # check if we should raise an exception
+                        if _failfast and _li_rvdict_exceptions(_rvdict):
+
+                                # we're going to raise an exception.  abort
+                                # the remaining children.
+                                for lic in lic_list:
+                                        lic.child_op_abort()
+
+                                # raise an exception
+                                _li_rvdict_raise_exceptions(_rvdict)
+
+                        if rvtuple.rvt_rv in [ pkgdefs.EXIT_OK,
+                            pkgdefs.EXIT_NOP ]:
+
+                                # only display child output if there was no
+                                # error (otherwise the exception includes the
+                                # output and we'd display it twice).
+                                _progtrack.li_recurse_output(lic.child_name,
+                                    stdout, stderr)
+
+                        # check if we should yield a plan.
+                        if _expect_plan and rvtuple.rvt_rv == pkgdefs.EXIT_OK:
+                                yield rvtuple.rvt_p_dict
+
+                # check if we did everything we needed to do during child
+                # setup.  (this can happen if, while doing an implicit
+                # syncmd during setup, we discover the linked image metadata
+                # isn't changing.)  we iterate over a copy of lic_setup to
+                # allow __child_op_finish() to remove elements from lic_setup
+                # while we're walking through it.
+                for lic in copy.copy(lic_setup):
+                        if not lic.child_op_is_done():
+                                continue
+                        for p_dict in __child_op_finish(lic, lic_setup,
+                            _rvdict, _progtrack, _failfast, _expect_plan):
+                                yield p_dict
+
+                # keep track of currently running children
+                lic_running = []
+
+                # keep going as long as there are children to process
+                progtrack_update = False
+                while len(lic_setup) or len(lic_running):
+
+                        while lic_setup and (
+                            concurrency > len(lic_running) or
+                            concurrency <= 0):
+                                # start processing on a child
+                                progtrack_update = True
+                                lic = lic_setup.pop()
+                                lic_running.append(lic)
+                                lic.child_op_start()
+
+                        if progtrack_update:
+                                # display progress on children
+                                progtrack_update = False
+                                done = len(_lic_list) - len(lic_setup) - \
+                                    len(lic_running)
+                                pending = len(_lic_list) - len(lic_running) - \
+                                    done
+                                _progtrack.li_recurse(lic_running,
+                                    done, pending)
+
+                        rlistrv = select.select(lic_running, [], [])[0]
+                        for lic in rlistrv:
+                                _progtrack.li_recurse_progress(lic.child_name)
+                                if not lic.child_op_is_done():
+                                        continue
+                                # a child finished processing
+                                progtrack_update = True
+                                for p_dict in __child_op_finish(lic,
+                                    lic_running, _rvdict, _progtrack,
+                                    _failfast, _expect_plan):
+                                        yield p_dict
+
+                _li_rvdict_check(_rvdict)
+                if _lic_list:
+                        _progtrack.li_recurse_end()
+
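The select()-based loop above is the heart of the parallel recursion into
children: every LinkedImageChild exposes a fileno() (see the fileno() method
further down), so a single select() call can watch all in-flight children
while the concurrency setting throttles how many run at once.  A minimal
sketch of the same fan-out pattern; the Child class here is a hypothetical
stand-in for LinkedImageChild/PkgRemote:

    import os
    import select

    class Child(object):
            """Hypothetical child: signals completion over a pipe."""
            def __init__(self, name):
                    self.name = name
                    self.rfd, self.wfd = os.pipe()
            def start(self):
                    # the real code forks a pkg(1) client here
                    os.write(self.wfd, "done")
            def fileno(self):
                    # this is what lets select() watch the child
                    return self.rfd
            def finish(self):
                    os.read(self.rfd, 4)

    running = [Child("zone1"), Child("zone2")]
    for c in running:
            c.start()
    while running:
            # block until at least one child has something to report
            for c in select.select(running, [], [])[0]:
                    c.finish()
                    running.remove(c)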
+        def __children_init(self, lin_list=None, li_ignore=None, failfast=True):
+                """Initialize LinkedImageChild objects for children specified
+                in 'lin_list'.  If 'lin_list' is not specified, then
+                initialize objects for all children (excluding any being
+                ignored via 'li_ignore')."""
+
+                # you can't specify children to operate on and children to be
+                # ignored at the same time
+                assert lin_list is None or li_ignore is None
+
+                # if no children were listed, build a list of children
+                if lin_list is None:
+                        lin_list = [
+                            i[0]
+                            for i in self.__list_children(li_ignore)
+                        ]
+                else:
+                        self.verify_names(lin_list)
+
+                rvdict = {}
+                lic_dict = {}
                 for lin in lin_list:
                         try:
                                 lic = LinkedImageChild(self, lin)
-
-                                # perform the requested operation
-                                rvdict[lin] = op(lic, **kwargs)
-
-                                # Unused variable; pylint: disable-msg=W0612
-                                rv, e, p_dict = rvdict[lin]
-                                # pylint: enable-msg=W0612
-                                assert e == None or \
-                                    (isinstance(e, apx.LinkedImageException))
-
+                                lic_dict[lin] = lic
                         except apx.LinkedImageException, e:
-                                rvdict[lin] = (e.lix_exitrv, e, None)
-
-                return rvdict
-
-        @staticmethod
-        def __audit_child(lic):
-                """Recurse into a child image and audit it."""
-                return lic.child_audit()
-
-        @staticmethod
-        def __sync_child(lic, **kwargs):
-                """Recurse into a child image and sync it."""
-                return lic.child_sync(**kwargs)
-
-        def __detach_child(self, lic, force=False, noexecute=False,
-            progtrack=None):
-                """Recurse into a child image and detach it."""
-
-                lin = lic.child_name
-                lip = self.__plugins[lin.lin_type]
-                if not force and not lip.support_detach:
-                        # we can't detach this type of image.
-                        e = apx.LinkedImageException(
-                            detach_child_notsup=lin.lin_type)
-                        return (pkgdefs.EXIT_OOPS, e, None)
-
-                # remove linked data from the child
-                return lic.child_detach(noexecute=noexecute,
-                    progtrack=progtrack)
-
-        def reset_recurse(self):
-                """Reset all child recursion state."""
-
-                self.__lic_list = []
-
-        def init_recurse(self, op, li_ignore, accept,
-            refresh_catalogs, update_index, args):
-                """When planning changes on a parent image, prepare to
-                recurse into all child images and operate on them as well."""
-
-                # Too many arguments; pylint: disable-msg=R0913
-
-                if op == pkgdefs.API_OP_DETACH:
-                        # we don't need to recurse for these operations
-                        self.__lic_list = []
-                        return
+                                rvdict[lin] = LI_RVTuple(e.lix_exitrv, e, None)
+
+                if failfast:
+                        _li_rvdict_raise_exceptions(rvdict)
+                        return lic_dict
+
+                return (lic_dict, rvdict)
+
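Note the dual return convention of __children_init(): with failfast=True any
initialization failure raises immediately and the caller gets back just the
child dictionary, while with failfast=False the caller gets the dictionary
plus the per-child error results.  A hedged usage sketch:

    # failfast: raises apx.LinkedImageException if any child is bad
    lic_dict = self.__children_init(li_ignore=li_ignore)

    # failfast=False: collect errors and report them with the results
    lic_dict, rvdict = self.__children_init(lin_list=lin_list,
        failfast=False)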
+        def __recursion_init(self, li_ignore):
+                """Initialize child objects used during recursive packaging
+                operations."""
+
+                self.__lic_ignore = li_ignore
+                self.__lic_dict = self.__children_init(li_ignore=li_ignore)
+
+        def api_recurse_init(self, li_ignore=None, repos=None):
+                """Initialize planning state.  If we're a child image we save
+                our current state (which may reflect a planned state that we
+                have not committed to disk) into the plan.  We also initialize
+                all our children to prepare to recurse into them."""
 
                 if PROP_RECURSE in self.__props and \
                     not self.__props[PROP_RECURSE]:
-                        # don't bother to recurse into children
-                        self.__lic_list = []
+                        # we don't want to recurse
+                        self.__recursion_init(li_ignore=[])
+                        return
+
+                # Initialize children
+                self.__recursion_init(li_ignore)
+
+                if not self.__lic_dict:
+                        # we don't need to recurse
                         return
 
-                self.__lic_list = []
-                # Unused variable 'path'; pylint: disable-msg=W0612
-                for (lin, path) in self.__list_children(li_ignore):
-                # pylint: enable-msg=W0612
-                        self.__lic_list.append(LinkedImageChild(self, lin))
-
-                if not self.__lic_list:
-                        # no child images to recurse into
-                        return
+                # if we have any children we don't support operations using
+                # temporary repositories.
+                if repos:
+                        raise apx.PlanCreationException(no_tmp_origins=True)
+
+        def api_recurse_pubcheck(self):
+                """Do a recursive publisher check."""
+
+                # get a list of children to recurse into.
+                lic_list = self.__lic_dict.values()
+
+                # do a publisher check on all of them
+                rvdict = {}
+                list(self.__children_op(
+                    _pkg_op=pkgdefs.PKG_OP_PUBCHECK,
+                    _lic_list=lic_list,
+                    _rvdict=rvdict,
+                    _progtrack=progress.QuietProgressTracker(),
+                    _failfast=False))
+
+                # raise an exception if one or more children failed the
+                # publisher check.
+                _li_rvdict_raise_exceptions(rvdict)
+
+        def __api_recurse(self, stage, progtrack):
+                """This is an iterator function.  It recurses into linked
+                image children to perform the specified operation.
+                """
+
+                # get a pointer to the current image plan
+                pd = self.__img.imageplan.pd
+
+                # get a list of children to recurse into.
+                lic_list = self.__lic_dict.values()
+
+                # sanity check stage
+                assert stage in [pkgdefs.API_STAGE_PLAN,
+                    pkgdefs.API_STAGE_PREPARE, pkgdefs.API_STAGE_EXECUTE]
+
+                # if we're ignoring all children then we can't be recursing
+                assert pd.children_ignored != [] or lic_list == []
+
+                # sanity check the plan description state
+                if stage == pkgdefs.API_STAGE_PLAN:
+                        # the state should be uninitialized
+                        assert pd.children_planned == []
+                        assert pd.children_nop == []
+                else:
+                        # if we ignored all children, we better not have
+                        # recursed into any children.
+                        assert pd.children_ignored != [] or \
+                            pd.children_planned == pd.children_nop == []
+
+                        # there shouldn't be any overlap between sets of
+                        # children in the plan
+                        assert not (set(pd.children_planned) &
+                            set(pd.children_nop))
+                        if pd.children_ignored:
+                                assert not (set(pd.children_ignored) &
+                                    set(pd.children_planned))
+                                assert not (set(pd.children_ignored) &
+                                    set(pd.children_nop))
+
+                        # make sure set of child handles matches the set of
+                        # previously planned children.
+                        assert set(self.__lic_dict) == set(pd.children_planned)
+
+                # if we're in the planning stage, we should pass the current
+                # image plan on to the child and also expect an image plan
+                # from the child.
+                expect_plan = False
+                if stage == pkgdefs.API_STAGE_PLAN:
+                        expect_plan = True
+
+                # get target op and arguments
+                pkg_op = pd.child_op
+
+                # assume that for most operations we want to recurse into the
+                # child image even if the linked image metadata isn't
+                # changing.  (this would be required for recursive operations,
+                # update operations, etc.)
+                _ignore_syncmd_nop = True
+                if pkg_op == pkgdefs.API_OP_SYNC:
+                        # the exception is if we're doing an implicit sync.
+                        # to improve performance we assume the child is
+                        # already in sync, so if its linked image metadata
+                        # isn't changing then the child won't need any
+                        # updates, and there will be no need to recurse
+                        # into it.
+                        _ignore_syncmd_nop = False
+
+                rvdict = {}
+                for p_dict in self.__children_op(
+                    _pkg_op=pkg_op,
+                    _lic_list=lic_list,
+                    _rvdict=rvdict,
+                    _progtrack=progtrack,
+                    _failfast=True,
+                    _expect_plan=expect_plan,
+                    _ignore_syncmd_nop=_ignore_syncmd_nop,
+                    _pd=pd,
+                    stage=stage,
+                    **pd.child_kwargs):
+                        yield p_dict
+
+                assert not _li_rvdict_exceptions(rvdict)
+
+                for lin in rvdict:
+                        # check for children that don't need any updates
+                        if rvdict[lin].rvt_rv == pkgdefs.EXIT_NOP:
+                                assert lin not in pd.children_nop
+                                pd.children_nop.append(lin)
+                                del self.__lic_dict[lin]
+
+                        # record the children that are done planning
+                        if stage == pkgdefs.API_STAGE_PLAN and \
+                            rvdict[lin].rvt_rv == pkgdefs.EXIT_OK:
+                                assert lin not in pd.children_planned
+                                pd.children_planned.append(lin)
+
+        @staticmethod
+        def __recursion_op(api_op, api_kwargs):
+                """Determine what pkg command to use when recursing into child
+                images."""
 
                 #
                 # given the api operation being performed on the current
@@ -1799,34 +2223,118 @@
                 # of specific packages, etc, then when we recurse we'll do a
                 # sync in the child.
                 #
-                if op == pkgdefs.API_OP_UPDATE and not args["pkgs_update"]:
+                if api_op == pkgdefs.API_OP_UPDATE and not \
+                    api_kwargs["pkgs_update"]:
                         pkg_op = pkgdefs.PKG_OP_UPDATE
                 else:
                         pkg_op = pkgdefs.PKG_OP_SYNC
-
-                for lic in self.__lic_list:
-                        lic.child_init_recurse(pkg_op, accept,
-                            refresh_catalogs, update_index,
-                            args)
-
-        def do_recurse(self, stage, ip=None):
-                """When planning changes within a parent image, recurse into
-                all child images and operate on them as well."""
-
-                assert stage in pkgdefs.api_stage_values
-                assert stage != pkgdefs.API_STAGE_DEFAULT
-
-                res = []
-                for lic in self.__lic_list:
-                        res.append(lic.child_do_recurse(stage=stage, ip=ip))
-                return res
+                return pkg_op
+
+        @staticmethod
+        def __recursion_args(pd, refresh_catalogs, update_index, api_kwargs):
+                """Determine what pkg command arguments to use when recursing
+                into child images."""
+
+                kwargs = {}
+                kwargs["noexecute"] = api_kwargs["noexecute"]
+                kwargs["refresh_catalogs"] = refresh_catalogs
+                kwargs["show_licenses"] = False
+                kwargs["update_index"] = update_index
+
+                #
+                # when we recurse we always accept all new licenses (for now).
+                #
+                # ultimately (when we start yielding back plan descriptions
+                # for children) in addition to accepting licenses on the plan
+                # for the current image the api client will also have to
+                # explicitly accept licenses for all child images.  but until
+                # that happens we'll just assume that the parent image license
+                # space is a superset of the child image license space (and
+                # since the api consumer must accept licenses in the parent
+                # before we'll do anything, we'll assume licenses in the child
+                # are accepted as well).
+                #
+                kwargs["accept"] = True
+
+                if "li_pkg_updates" in api_kwargs:
+                        # option specific to: attach, set-property-linked, sync
+                        kwargs["li_pkg_updates"] = api_kwargs["li_pkg_updates"]
+
+                if pd.child_op == pkgdefs.PKG_OP_UPDATE:
+                        # skip ipkg up to date check for child images
+                        kwargs["force"] = True
+
+                return kwargs
+
+        def api_recurse_plan(self, api_kwargs, refresh_catalogs,
+            update_index, progtrack):
+                """Plan child image updates."""
+
+                pd = self.__img.imageplan.pd
+                api_op = pd.plan_type
+
+                # update the plan arguments
+                pd.child_op = self.__recursion_op(api_op, api_kwargs)
+                pd.child_kwargs = self.__recursion_args(pd,
+                    refresh_catalogs, update_index, api_kwargs)
+                pd.children_ignored = self.__lic_ignore
+
+                # recurse into children
+                for p_dict in self.__api_recurse(pkgdefs.API_STAGE_PLAN,
+                    progtrack):
+                        yield p_dict
+
+        def api_recurse_prepare(self, progtrack):
+                """Prepare child image updates."""
+                list(self.__api_recurse(pkgdefs.API_STAGE_PREPARE, progtrack))
+
+        def api_recurse_execute(self, progtrack):
+                """Execute child image updates."""
+                list(self.__api_recurse(pkgdefs.API_STAGE_EXECUTE, progtrack))
+
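Together these entry points give the client api a staged driver: the plan
stage yields parsable child plans, while prepare and execute simply drain
the recursion iterator for its side effects.  A sketch of the expected
calling order, where li is assumed to be the parent image's LinkedImage
object:

    child_plans = list(li.api_recurse_plan(api_kwargs,
        refresh_catalogs=True, update_index=True, progtrack=progtrack))
    li.api_recurse_prepare(progtrack)
    li.api_recurse_execute(progtrack)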
+        def init_plan(self, pd):
+                """Initialize our state in the PlanDescription."""
+
+                # if we're a child, save our parent package state into the
+                # plan description
+                pd.li_props = self.__props
+                pd.li_ppkgs = self.__ppkgs
+                pd.li_ppubs = self.__ppubs
+
+        def setup_plan(self, pd):
+                """Reload a previously created plan."""
+
+                # load linked image state from the plan
+                self.__update_props(pd.li_props)
+                self.__ppubs = pd.li_ppubs
+                self.__ppkgs = pd.li_ppkgs
+
+                # now initialize our recursion state; this involves allocating
+                # handles to operate on children.  we don't need handles for
+                # children that were either ignored during planning, or which
+                # returned EXIT_NOP after planning (since these children don't
+                # need any updates).
+                li_ignore = copy.copy(pd.children_ignored)
+
+                # merge the children that returned nop into li_ignore (since
+                # we don't need to recurse into them).  if li_ignore is [],
+                # then we ignored all children during planning
+                if li_ignore != [] and pd.children_nop:
+                        if li_ignore is None:
+                                # no children were ignored during planning
+                                li_ignore = []
+                        li_ignore += pd.children_nop
+
+                # Initialize children
+                self.__recursion_init(li_ignore=li_ignore)
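The li_ignore value is effectively tri-state: None means no children were
ignored during planning, [] means all children were ignored, and a
non-empty list names specific children to ignore.  The merge above folds
EXIT_NOP children into that list so setup_plan() only allocates handles for
children that still need work.  A hedged restatement as a standalone
function (merge_nops is a hypothetical name):

    def merge_nops(li_ignore, children_nop):
            """Fold children that planned to EXIT_NOP into the
            ignore list so no handles get allocated for them."""
            if li_ignore == []:
                    # all children were ignored during planning
                    return li_ignore
            if not children_nop:
                    return li_ignore
            if li_ignore is None:
                    # no children were ignored during planning
                    li_ignore = []
            return li_ignore + children_nop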
 
         def recurse_nothingtodo(self):
                 """Return True if there is no planned work to do on child
                 image."""
 
-                for lic in self.__lic_list:
-                        if not lic.child_nothingtodo():
+                for lic in self.__lic_dict.itervalues():
+                        if lic.child_name not in \
+                            self.__img.imageplan.pd.children_nop:
                                 return False
                 return True
 
@@ -1858,9 +2366,7 @@
                 attrs["fmri"] = pkg.actions.depend.DEPEND_SELF
                 attrs["variant.opensolaris.zone"] = "nonglobal"
 
-                # Used * or ** magic; pylint: disable-msg=W0142
                 pda = pkg.actions.depend.DependencyAction(**attrs)
-                # pylint: enable-msg=W0142
 
                 if not pda.include_this(excludes):
                         # we're not operating on a nonglobal zone image so we
@@ -1969,8 +2475,6 @@
         auditing a child image, or recursing into a child image to keep it in
         sync with planned changes in the parent image."""
 
-        # Too many instance attributes; pylint: disable-msg=R0902
-
         def __init__(self, li, lin):
                 assert isinstance(li, LinkedImage), \
                     "isinstance(%s, LinkedImage)" % type(li)
@@ -2004,12 +2508,8 @@
                 self.__plugin = \
                     pkg.client.linkedimage.p_classes_child[lin.lin_type](self)
 
-                # variables reset by self.child_reset_recurse()
-                self.__r_op = None
-                self.__r_args = None
-                self.__r_progtrack = None
-                self.__r_rv_nop = False
-                self.child_reset_recurse()
+                self.__pkg_remote = pkg.client.pkgremote.PkgRemote()
+                self.__child_op_rvtuple = None
 
         @property
         def child_name(self):
@@ -2027,11 +2527,12 @@
                 this child."""
                 return self.__img
 
-        def __push_data(self, root, path, data, tmp, test):
+        @staticmethod
+        def __push_data(root, path, data, tmp, test):
                 """Write data to a child image."""
 
                 # first save our data to a temporary file
-                path_tmp = "%s.%s" % (path, self.__img.runid)
+                path_tmp = "%s.%s" % (path, global_settings.client_runid)
                 save_data(path_tmp, data, root=root)
 
                 # check if we're updating the data
@@ -2070,7 +2571,7 @@
 
                 return True
 
-        def __push_ppkgs(self, tmp=False, test=False, ip=None):
+        def __push_ppkgs(self, tmp=False, test=False, pd=None):
                 """Sync linked image parent constraint data to a child image.
 
                 'tmp' determines if we should read/write to the official
@@ -2081,10 +2582,10 @@
                 cati = self.__img.get_catalog(self.__img.IMG_CATALOG_INSTALLED)
                 ppkgs = set(cati.fmris())
 
-                if ip != None and ip.plan_desc:
+                if pd is not None:
                         # if there's an image plan then we need to update the
                         # installed packages based on that plan.
-                        for src, dst in ip.plan_desc:
+                        for src, dst in pd.plan_desc:
                                 if src == dst:
                                         continue
                                 if src:
@@ -2131,216 +2632,90 @@
                 return self.__push_data(self.child_path, self.__path_ppubs,
                     ppubs, tmp, test)
 
-        def __syncmd(self, tmp=False, test=False, ip=None):
+        def __syncmd(self, tmp=False, test=False, pd=None):
                 """Sync linked image data to a child image.
 
                 'tmp' determines if we should read/write to the official
                 linked image metadata files, or if we should access temporary
                 versions (which have ".<runid>" appended to them."""
 
-                if ip:
+                if pd:
                         tmp = True
 
-                ppkgs_updated = self.__push_ppkgs(tmp, test, ip=ip)
+                ppkgs_updated = self.__push_ppkgs(tmp, test, pd=pd)
                 props_updated = self.__push_props(tmp, test)
                 pubs_updated = self.__push_ppubs(tmp, test)
 
                 return (props_updated or ppkgs_updated or pubs_updated)
 
-        @staticmethod
-        def __flush_output():
-                """We flush stdout and stderr before and after operating on
-                child images to avoid any out-of-order output problems that
-                could be caused by caching of output."""
-
-                try:
-                        sys.stdout.flush()
-                except IOError:
-                        pass
-                except OSError, e:
-                        # W0212 Access to a protected member
-                        # pylint: disable-msg=W0212
-                        raise apx._convert_error(e)
+        def __child_op_setup_syncmd(self, ignore_syncmd_nop=True,
+            tmp=False, test=False, pd=None, stage=pkgdefs.API_STAGE_DEFAULT):
+                """Prepare to perform an operation on a child image by syncing
+                the latest linked image metadata to that image.  As part of
+                this operation, if we discover that the metadata hasn't
+                changed, we may report back that there is nothing to do
+                (EXIT_NOP).
+
+                'ignore_syncmd_nop' a boolean that indicates if we should
+                always recurse into a child even if the linked image metadata
+                isn't changing.
+
+                'tmp' a boolean that indicates if we should save the child
+                image metadata into temporary files (instead of overwriting
+                the persistent metadata files).
+
+                'test' a boolean that indicates we shouldn't save any child
+                image metadata; instead we should just test to see if the
+                metadata is changing.
+
+                'pd' an optional plan description object.  this plan
+                description describes changes that will be made to the parent
+                image.  if it is supplied then we derive the metadata that we
+                write into the child from the planned parent image state
+                (instead of the current parent image state).
+
+                'stage' indicates which stage of execution we should be
+                performing on a child image."""
+
+                # we don't actually update metadata during other stages of
+                # operation
+                if stage not in [
+                    pkgdefs.API_STAGE_DEFAULT, pkgdefs.API_STAGE_PLAN]:
+                        return True
 
                 try:
-                        sys.stderr.flush()
-                except IOError:
-                        pass
-                except OSError, e:
-                        # W0212 Access to a protected member
-                        # pylint: disable-msg=W0212
-                        raise apx._convert_error(e)
-
-        def __pkg_cmd(self, pkg_op, pkg_args, stage=None, progtrack=None):
-                """Perform a pkg(1) operation on a child image."""
-
-                if stage == None:
-                        stage = pkgdefs.API_STAGE_DEFAULT
-                assert stage in pkgdefs.api_stage_values
-
-                #
-                # Build up a command line to execute.  Note that we take care
-                # to try to run the exact same pkg command that we were
-                # executed with.  We do this because pkg commonly tries to
-                # access the image that the command is being run from.
-                #
-                pkg_bin = "pkg"
-                cmdpath = self.__img.cmdpath
-                if cmdpath and os.path.basename(cmdpath) == "pkg":
-                        try:
-                                # check if the currently running pkg command
-                                # exists and is accessible.
-                                os.stat(cmdpath)
-                                pkg_bin = cmdpath
-                        except OSError:
-                                pass
-
-                pkg_cmd = [
-                    pkg_bin,
-                    "-R", str(self.child_path),
-                    "--runid=%s" % self.__img.runid,
-                ]
-
-                # propagate certain debug options
-                for k in [
-                    "broken-conflicting-action-handling",
-                    "disp_linked_cmds",
-                    "plan"]:
-                        if DebugValues[k]:
-                                pkg_cmd.append("-D")
-                                pkg_cmd.append("%s=1" % k)
-
-                # add the subcommand argument
-                pkg_cmd.append(pkg_op)
-
-                # propagate stage option
-                if stage != pkgdefs.API_STAGE_DEFAULT:
-                        pkg_cmd.append("--stage=%s" % stage)
-
-                # add the subcommand argument options
-                pkg_cmd.extend(pkg_args)
-
-                if progtrack:
-                        progtrack.li_recurse_start(self.child_name)
-
-                # flush all output before recursing into child
-                self.__flush_output()
-
-                disp_linked_cmds = DebugValues.get_value("disp_linked_cmds")
-                if not disp_linked_cmds and \
-                    "PKG_DISP_LINKED_CMDS" in os.environ:
-                        disp_linked_cmds = True
-                pv_in_args = False
-                for a in pkg_args:
-                        if a.startswith("--parsable="):
-                                pv_in_args = True
-                # If we're using --parsable, don't emit the child cmd
-                # information as info because it will confuse the JSON parser.
-                if disp_linked_cmds and not pv_in_args:
-                        logger.info("child cmd: %s" % " ".join(pkg_cmd))
-                else:
-                        logger.debug("child cmd: %s" % " ".join(pkg_cmd))
-
-                #
-                # Start the operation on the child.  let the child have direct
-                # access to stdout but capture stderr.
-                #
-                ferrout = tempfile.TemporaryFile()
-                # If we're using --parsable, then we need to capture stdout so
-                # that we can parse the plan of the child image and include it
-                # in our plan.
-                outloc = None
-                if pv_in_args:
-                        outloc = tempfile.TemporaryFile()
-                try:
-                        p = pkg.pkgsubprocess.Popen(pkg_cmd, stderr=ferrout,
-                            stdout=outloc)
-                        p.wait()
-                except OSError, e:
-                        # W0212 Access to a protected member
-                        # pylint: disable-msg=W0212
-                        raise apx._convert_error(e)
-
-                # flush output generated by the child
-                self.__flush_output()
-
-                # get error output generated by the child
-                ferrout.seek(0)
-                errout = "".join(ferrout.readlines())
-
-                if progtrack:
-                        progtrack.li_recurse_end(self.child_name)
-
-                p_dict = None
-                # A parsable plan is only displayed if the operation was
-                # successful and the stage was default or plan.
-                if pv_in_args and stage in (pkgdefs.API_STAGE_PLAN,
-                    pkgdefs.API_STAGE_DEFAULT) and p.returncode == 0:
-                        outloc.seek(0)
-                        output = outloc.read()
-                        try:
-                                p_dict = json.loads(output)
-                        except ValueError, e:
-                                # JSON raises a subclass of ValueError when it
-                                # can't parse a string.
-                                raise apx.UnparsableJSON(output, e)
-                        p_dict["image-name"] = str(self.child_name)
-
-                return (p.returncode, errout, p_dict)
-
-        def child_detach(self, noexecute=False, progtrack=None):
-                """Detach a child image."""
-
-                # When issuing a detach from a prent we must always use the
-                # force flag. (Normally a child will refuse to detach from a
-                # parent unless it attached to the parent, which is never the
-                # case here.)
-                pkg_args = ["-f"]
-                pkg_args.extend(["-v"] * progtrack.verbose)
-                if progtrack.quiet:
-                        pkg_args.append("-q")
-                if noexecute:
-                        pkg_args.append("-n")
-
-                rv, errout, p_dict = self.__pkg_cmd(pkgdefs.PKG_OP_DETACH,
-                    pkg_args)
-
-                # if the detach command ran, return its status.
-                if rv in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOPARENT]:
-                        return (pkgdefs.EXIT_OK, None, p_dict)
-
-                e = apx.LinkedImageException(lin=self.child_name, exitrv=rv,
-                    pkg_op_failed=(pkgdefs.PKG_OP_DETACH, rv, errout))
-                return (rv, e, p_dict)
-
-        def child_audit(self):
-                """Audit a child image to see if it's in sync with its
-                constraints."""
-
-                # first sync our metadata
-                self.__syncmd()
-
-                # recurse into the child image
-                pkg_args = ["-q"]
-
-                rv, errout, p_dict = self.__pkg_cmd(pkgdefs.PKG_OP_AUDIT_LINKED,
-                    pkg_args)
-
-                # if the audit command ran, return its status.
-                if rv in [pkgdefs.EXIT_OK, pkgdefs.EXIT_DIVERGED]:
-                        return (rv, None, p_dict)
-
-                # something went unexpectedly wrong.
-                e = apx.LinkedImageException(lin=self.child_name, exitrv=rv,
-                    pkg_op_failed=(pkgdefs.PKG_OP_AUDIT_LINKED, rv, errout))
-                return (rv, e, p_dict)
-
-        def child_sync(self, accept=False, li_attach_sync=False,
-            li_md_only=False, li_pkg_updates=True, progtrack=None,
-            noexecute=False, refresh_catalogs=True, reject_list=misc.EmptyI,
-            show_licenses=False, update_index=True):
-                """Try to bring a child image into sync with its
-                constraints.
+                        updated = self.__syncmd(tmp=tmp, test=test, pd=pd)
+                except apx.LinkedImageException, e:
+                        self.__child_op_rvtuple = \
+                            LI_RVTuple(e.lix_exitrv, e, None)
+                        return False
+
+                if ignore_syncmd_nop:
+                        # we successfully updated the metadata
+                        return True
+
+                # if the metadata changed then report success
+                if updated:
+                        return True
+
+                # the metadata didn't change, so this operation is a NOP
+                self.__child_op_rvtuple = \
+                    LI_RVTuple(pkgdefs.EXIT_NOP, None, None)
+                return False
+
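The setup helpers that follow all share one protocol with this routine:
return True when the caller should go ahead and recurse into the child, or
record the final result in __child_op_rvtuple and return False to stop.  A
hedged restatement of the NOP decision as a pure function (setup_outcome is
a hypothetical name; LI_RVTuple and pkgdefs are the module's own):

    def setup_outcome(updated, ignore_syncmd_nop):
            """Map a __syncmd() result onto (proceed, rvtuple)."""
            if ignore_syncmd_nop or updated:
                    # recurse: either we always do, or the metadata changed
                    return (True, None)
            # the metadata didn't change, so the operation is a NOP
            return (False, LI_RVTuple(pkgdefs.EXIT_NOP, None, None))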
+        def __child_setup_sync(self, _progtrack, _ignore_syncmd_nop, _pd,
+            accept=False,
+            li_attach_sync=False,
+            li_md_only=False,
+            li_pkg_updates=True,
+            noexecute=False,
+            refresh_catalogs=True,
+            reject_list=misc.EmptyI,
+            show_licenses=False,
+            stage=pkgdefs.API_STAGE_DEFAULT,
+            update_index=True):
+                """Prepare to sync a child image.  This involves updating the
+                linked image metadata in the child and then possibly recursing
+                into the child to actually update packages.
 
                 'li_attach_sync' indicates if this sync is part of an attach
                 operation.
@@ -2348,68 +2723,269 @@
                 For descriptions of parameters please see the descriptions in
                 api.py`gen_plan_*"""
 
-                # Too many arguments; pylint: disable-msg=R0913
-
                 if li_md_only:
+                        #
                         # we're not going to recurse into the child image,
                         # we're just going to update its metadata.
-                        try:
-                                updated = self.__syncmd(test=noexecute)
-                        except apx.LinkedImageException, e:
-                                return (e.lix_exitrv, e, None)
-
-                        if updated:
-                                return (pkgdefs.EXIT_OK, None, None)
-                        else:
-                                return (pkgdefs.EXIT_NOP, None, None)
+                        #
+                        # we don't support updating packages in the parent
+                        # during attach metadata only sync.
+                        #
+                        assert not _pd
+                        if not self.__child_op_setup_syncmd(
+                            ignore_syncmd_nop=False,
+                            test=noexecute, stage=stage):
+                                # the update failed
+                                return
+                        self.__child_op_rvtuple = \
+                            LI_RVTuple(pkgdefs.EXIT_OK, None, None)
+                        return
+
+                #
+                # first sync the metadata
+                #
+                # if we're doing this sync as part of an attach, then
+                # temporarily sync the metadata since we don't know yet if the
+                # attach will succeed.  if the attach doesn't succeed this
+                # means we don't have to delete any metadata.  if the attach
+                # succeeds the child will make the temporary metadata
+                # permanent as part of the commit.
+                #
+                # we don't support updating packages in the parent
+                # during attach.
+                #
+                assert not li_attach_sync or _pd is None
+                if not self.__child_op_setup_syncmd(
+                    ignore_syncmd_nop=_ignore_syncmd_nop,
+                    tmp=li_attach_sync, stage=stage, pd=_pd):
+                        # the update failed or the metadata didn't change
+                        return
+
+                self.__pkg_remote.setup(self.child_path,
+                    pkgdefs.PKG_OP_SYNC,
+                    accept=accept,
+                    backup_be=None,
+                    backup_be_name=None,
+                    be_activate=True,
+                    be_name=None,
+                    li_ignore=None,
+                    li_md_only=li_md_only,
+                    li_parent_sync=True,
+                    li_pkg_updates=li_pkg_updates,
+                    li_target_all=False,
+                    li_target_list=[],
+                    new_be=None,
+                    noexecute=noexecute,
+                    origins=[],
+                    parsable_version=_progtrack.parsable_version,
+                    quiet=_progtrack.quiet,
+                    refresh_catalogs=refresh_catalogs,
+                    reject_pats=reject_list,
+                    show_licenses=show_licenses,
+                    stage=stage,
+                    update_index=update_index,
+                    verbose=_progtrack.verbose)
+
+        def __child_setup_update(self, _progtrack, _ignore_syncmd_nop, _pd,
+            accept=False,
+            force=False,
+            noexecute=False,
+            refresh_catalogs=True,
+            reject_list=misc.EmptyI,
+            show_licenses=False,
+            stage=pkgdefs.API_STAGE_DEFAULT,
+            update_index=True):
+                """Prepare to update a child image."""
+
+                # first sync the metadata
+                if not self.__child_op_setup_syncmd(
+                    ignore_syncmd_nop=_ignore_syncmd_nop, pd=_pd, stage=stage):
+                        # the update failed or the metadata didn't change
+                        return
+
+                self.__pkg_remote.setup(self.child_path,
+                    pkgdefs.PKG_OP_UPDATE,
+                    accept=accept,
+                    backup_be=None,
+                    backup_be_name=None,
+                    be_activate=True,
+                    be_name=None,
+                    force=force,
+                    li_ignore=None,
+                    li_parent_sync=True,
+                    new_be=None,
+                    noexecute=noexecute,
+                    origins=[],
+                    parsable_version=_progtrack.parsable_version,
+                    quiet=_progtrack.quiet,
+                    refresh_catalogs=refresh_catalogs,
+                    reject_pats=reject_list,
+                    show_licenses=show_licenses,
+                    stage=stage,
+                    update_index=update_index,
+                    verbose=_progtrack.verbose)
+
+        def __child_setup_detach(self, _progtrack, noexecute=False):
+                """Prepare to detach a child image."""
+
+                self.__pkg_remote.setup(self.child_path,
+                    pkgdefs.PKG_OP_DETACH,
+                    force=True,
+                    li_target_all=False,
+                    li_target_list=[],
+                    noexecute=noexecute,
+                    quiet=_progtrack.quiet,
+                    verbose=_progtrack.verbose)
+
+        def __child_setup_pubcheck(self):
+                """Prepare to check if a child's publishers are in sync."""
+
+                # first sync the metadata
+                if not self.__child_op_setup_syncmd():
+                        # the update failed
+                        return
+
+                # setup recursion into the child image
+                self.__pkg_remote.setup(self.child_path,
+                    pkgdefs.PKG_OP_PUBCHECK)
+
+        def __child_setup_audit(self):
+                """Prepare to audit a child image to see if it's in sync
+                with its constraints."""
 
                 # first sync the metadata
+                if not self.__child_op_setup_syncmd():
+                        # the update failed
+                        return
+
+                # setup recursion into the child image
+                self.__pkg_remote.setup(self.child_path,
+                    pkgdefs.PKG_OP_AUDIT_LINKED,
+                    li_parent_sync=True,
+                    li_target_all=False,
+                    li_target_list=[],
+                    omit_headers=True,
+                    quiet=True)
+
+        def child_op_abort(self):
+                """Public interface to abort an operation on a child image."""
+
+                self.__pkg_remote.abort()
+                self.__child_op_rvtuple = None
+
+        def child_op_setup(self, _pkg_op, _progtrack, _ignore_syncmd_nop, _pd,
+            **kwargs):
+                """Public interface to set up an operation that we'd like to
+                perform on a child image."""
+
+                assert self.__child_op_rvtuple is None
+
+                if _pkg_op == pkgdefs.PKG_OP_AUDIT_LINKED:
+                        self.__child_setup_audit(**kwargs)
+                elif _pkg_op == pkgdefs.PKG_OP_DETACH:
+                        self.__child_setup_detach(_progtrack, **kwargs)
+                elif _pkg_op == pkgdefs.PKG_OP_PUBCHECK:
+                        self.__child_setup_pubcheck(**kwargs)
+                elif _pkg_op == pkgdefs.PKG_OP_SYNC:
+                        self.__child_setup_sync(_progtrack,
+                            _ignore_syncmd_nop, _pd, **kwargs)
+                elif _pkg_op == pkgdefs.PKG_OP_UPDATE:
+                        self.__child_setup_update(_progtrack,
+                            _ignore_syncmd_nop, _pd, **kwargs)
+                else:
+                        raise RuntimeError(
+                            "Unsupported package client op: %s" % _pkg_op)
+
+        def child_op_start(self):
+                """Public interface to start an operation on a child image."""
+
+                # if we already have a return value then this operation is
+                # done and there's nothing to start
+                if self.__child_op_rvtuple is not None:
+                        return True
+
+                self.__pkg_remote.start()
+
+        def child_op_is_done(self):
+                """Public interface to query if an operation on a child image
+                is done."""
+
+                # if we have a return value this operation is done
+                if self.__child_op_rvtuple is not None:
+                        return True
+
+                # make sure there is some data from the child
+                return self.__pkg_remote.is_done()
+
+        def child_op_rv(self, progtrack, expect_plan):
+                """Public interface to get the result of an operation on a
+                child image.
+
+                'progtrack' progress tracker associated with this operation.
+
+                'expect_plan' boolean indicating if the child is performing a
+                planning operation.  this is needed because if we're running
+                in parsable output mode then the child will emit a parsable
+                JSON version of the plan on stdout, and we'll verify it by
+                running it through the JSON parser.
+                """
+
+                # if we have a return value this operation is done
+                if self.__child_op_rvtuple is not None:
+                        rvtuple = self.__child_op_rvtuple
+                        self.__child_op_rvtuple = None
+                        return (rvtuple, None, None)
+
+                # make sure we're not going to block
+                assert self.__pkg_remote.is_done()
+
+                (pkg_op, rv, e, stdout, stderr) = self.__pkg_remote.result()
+                if e is not None:
+                        rv = pkgdefs.EXIT_OOPS
+
+                # if we got an exception, or a return value other than OK or
+                # NOP, then return an exception.
+                if e is not None or \
+                    rv not in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP]:
+                        e = apx.LinkedImageException(
+                            lin=self.child_name, exitrv=rv,
+                            pkg_op_failed=(pkg_op, rv, stdout + stderr, e))
+                        rvtuple = LI_RVTuple(rv, e, None)
+                        return (rvtuple, stdout, stderr)
+
+                # check for NOP.
+                if rv == pkgdefs.EXIT_NOP:
+                        assert e is None
+                        rvtuple = LI_RVTuple(rv, None, None)
+                        return (rvtuple, stdout, stderr)
+
+                if progtrack.parsable_version is None or \
+                    not expect_plan:
+                        rvtuple = LI_RVTuple(rv, None, None)
+                        return (rvtuple, stdout, stderr)
+
+                # If a plan was created and we're in parsable output mode then
+                # parse the plan that should have been displayed to stdout.
+                p_dict = None
                 try:
-                        # if we're doing this sync as part of an attach, then
-                        # temporarily sync the metadata since we don't know
-                        # yet if the attach will succeed.  if the attach
-                        # doesn't succeed this means we don't have to delete
-                        # any metadata.  if the attach succeeds the child will
-                        # make the temporary metadata permanent as part of the
-                        # commit.
-                        self.__syncmd(tmp=li_attach_sync)
-                except apx.LinkedImageException, e:
-                        return (e.lix_exitrv, e, None)
-
-                pkg_args = []
-                pkg_args.extend(["-v"] * progtrack.verbose)
-                if progtrack.quiet:
-                        pkg_args.append("-q")
-                if noexecute:
-                        pkg_args.append("-n")
-                if accept:
-                        pkg_args.append("--accept")
-                if show_licenses:
-                        pkg_args.append("--licenses")
-                if not refresh_catalogs:
-                        pkg_args.append("--no-refresh")
-                if not update_index:
-                        pkg_args.append("--no-index")
-                if not li_pkg_updates:
-                        pkg_args.append("--no-pkg-updates")
-                if progtrack.parsable_version is not None:
-                        assert progtrack.quiet
-                        pkg_args.append("--parsable=%s" %
-                            progtrack.parsable_version)
-                for pat in reject_list:
-                        pkg_args.extend(["--reject", str(pat)])
-
-                rv, errout, p_dict = self.__pkg_cmd(pkgdefs.PKG_OP_SYNC,
-                    pkg_args, progtrack=progtrack)
-
-                # if the audit command ran, return its status.
-                if rv in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP]:
-                        return (rv, None, p_dict)
-
-                # something went unexpectedly wrong.
-                e = apx.LinkedImageException(lin=self.child_name, exitrv=rv,
-                    pkg_op_failed=(pkgdefs.PKG_OP_SYNC, rv, errout))
-                return (rv, e, p_dict)
+                        p_dict = json.loads(stdout)
+                except ValueError, e:
+                        # JSON raises a subclass of ValueError when it
+                        # can't parse a string.
+
+                        e = apx.LinkedImageException(
+                            lin=self.child_name,
+                            unparsable_output=(pkg_op, stdout + stderr, e))
+                        rvtuple = LI_RVTuple(rv, e, None)
+                        return (rvtuple, stdout, stderr)
+
+                p_dict["image-name"] = str(self.child_name)
+                rvtuple = LI_RVTuple(rv, None, p_dict)
+                return (rvtuple, stdout, stderr)
+
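When the child runs in parsable output mode its entire plan arrives as JSON
on stdout; the code above parses it and tags it with the child's name so
the parent can fold it into its own parsable plan.  A self-contained sketch
of that conversion (parse_child_plan is a hypothetical name, and the real
code wraps parse failures in apx.LinkedImageException instead of a
RuntimeError):

    import json

    def parse_child_plan(stdout, image_name):
            """Parse a child's parsable plan output."""
            try:
                    p_dict = json.loads(stdout)
            except ValueError, e:
                    # json raises a ValueError subclass on bad input
                    raise RuntimeError("unparsable output: %s (%s)" %
                        (stdout, e))
            p_dict["image-name"] = image_name
            return p_dict

    plan = parse_child_plan('{"be-name": null}', "zone1")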
+        def fileno(self):
+                """Return the progress pipe associated with the PkgRemote
+                instance that is operating on a child image."""
+                return self.__pkg_remote.fileno()
 
         def child_init_root(self, old_altroot):
                 """Our image path is being updated, so figure out our new
@@ -2444,110 +3020,6 @@
                 # (at which point we have no need to access the parent
                 # anymore).
 
-        def child_nothingtodo(self):
-                """Check if there are any changes planned for a child
-                image."""
-                return self.__r_rv_nop
-
-        def child_reset_recurse(self):
-                """Reset child recursion state for child."""
-
-                self.__r_op = None
-                self.__r_args = None
-                self.__r_progtrack = None
-                self.__r_rv_nop = False
-
-        def child_init_recurse(self, pkg_op, accept, refresh_catalogs,
-            update_index, args):
-                """When planning changes on a parent image, prepare to
-                recurse into a child image."""
-
-                assert pkg_op in [pkgdefs.PKG_OP_SYNC, pkgdefs.PKG_OP_UPDATE]
-
-                progtrack = args["progtrack"]
-                noexecute = args["noexecute"]
-
-                pkg_args = []
-
-                pkg_args.extend(["-v"] * progtrack.verbose)
-                if progtrack.quiet:
-                        pkg_args.append("-q")
-                if noexecute:
-                        pkg_args.append("-n")
-                if progtrack.parsable_version is not None:
-                        pkg_args.append("--parsable=%s" %
-                            progtrack.parsable_version)
-
-                # W0511 XXX / FIXME Comments; pylint: disable-msg=W0511
-                # XXX: also need to support --licenses.
-                # pylint: enable-msg=W0511
-                if accept:
-                        pkg_args.append("--accept")
-                if not refresh_catalogs:
-                        pkg_args.append("--no-refresh")
-                if not update_index:
-                        pkg_args.append("--no-index")
-
-                # options specific to: attach, set-property-linked, sync
-                if "li_pkg_updates" in args and not args["li_pkg_updates"]:
-                        pkg_args.append("--no-pkg-updates")
-
-                if pkg_op == pkgdefs.PKG_OP_UPDATE:
-                        # skip ipkg up to date check for child images
-                        pkg_args.append("-f")
-
-                self.__r_op = pkg_op
-                self.__r_args = pkg_args
-                self.__r_progtrack = progtrack
-
-        def child_do_recurse(self, stage, ip=None):
-                """When planning changes within a parent image, recurse into
-                a child image."""
-
-                assert stage in pkgdefs.api_stage_values
-                assert stage != pkgdefs.API_STAGE_DEFAULT
-                assert stage != pkgdefs.API_STAGE_PLAN or ip != None
-
-                assert self.__r_op != None
-                assert self.__r_args != None
-
-                if stage == pkgdefs.API_STAGE_PUBCHECK:
-                        self.__syncmd()
-
-                if stage == pkgdefs.API_STAGE_PLAN:
-                        # sync our metadata
-                        if not self.__syncmd(ip=ip):
-                                # no metadata changes in the child image.
-                                self.__r_rv_nop = True
-
-                if self.__r_rv_nop:
-                        if stage == pkgdefs.API_STAGE_EXECUTE:
-                                self.child_reset_recurse()
-                        # the child image told us it has no changes planned.
-                        return pkgdefs.EXIT_NOP, None
-
-                rv, errout, p_dict = self.__pkg_cmd(self.__r_op, self.__r_args,
-                    stage=stage, progtrack=self.__r_progtrack)
-
-                if rv in [pkgdefs.EXIT_OK, pkgdefs.EXIT_NOP]:
-                        # common case (we hope)
-                        pass
-                else:
-                        e = apx.LinkedImageException(
-                            lin=self.child_name, exitrv=rv,
-                            pkg_op_failed=(self.__r_op, rv, errout))
-                        self.child_reset_recurse()
-                        raise e
-
-                if stage == pkgdefs.API_STAGE_PLAN and rv == pkgdefs.EXIT_NOP:
-                        self.__r_rv_nop = True
-
-                if stage == pkgdefs.API_STAGE_EXECUTE:
-                        # we're done with this operation
-                        self.child_reset_recurse()
-
-                return rv, p_dict
-
 
 # ---------------------------------------------------------------------------
 # Utility Functions
@@ -2660,7 +3132,6 @@
         message.  Note that runtime errors should never happen and usually
         indicate bugs (or possibly corrupted linked image metadata), so they
         are not localized (just like asserts are not localized)."""
-        # Too many arguments; pylint: disable-msg=R0913
 
         assert not (li and lic)
         assert not ((lin or path) and li)
--- a/src/modules/client/linkedimage/system.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/linkedimage/system.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 """
@@ -134,7 +134,7 @@
 
                 self.__img.cfg.write()
 
-                return (pkgdefs.EXIT_OK, None)
+                return li.LI_RVTuple(pkgdefs.EXIT_OK, None, None)
 
 
 class LinkedImageSystemChildPlugin(li.LinkedImageChildPlugin):
--- a/src/modules/client/linkedimage/zone.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/linkedimage/zone.py	Mon Jul 11 13:49:50 2011 -0700
@@ -356,7 +356,7 @@
                 """See parent class for docstring."""
 
                 # nothing to do
-                return (pkgdefs.EXIT_OK, None)
+                return li.LI_RVTuple(pkgdefs.EXIT_OK, None, None)
 
 
 class LinkedImageZoneChildPlugin(li.LinkedImageChildPlugin):
--- a/src/modules/client/pkgdefs.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/pkgdefs.py	Mon Jul 11 13:49:50 2011 -0700
@@ -52,6 +52,7 @@
 PKG_OP_LIST            = "list"
 PKG_OP_LIST_LINKED     = "list-linked"
 PKG_OP_PROP_LINKED     = "property-linked"
+PKG_OP_PUBCHECK        = "pubcheck-linked"
 PKG_OP_REVERT          = "revert"
 PKG_OP_SET_MEDIATOR    = "set-mediator"
 PKG_OP_SET_PROP_LINKED = "set-property-linked"
@@ -68,6 +69,7 @@
     PKG_OP_LIST,
     PKG_OP_LIST_LINKED,
     PKG_OP_PROP_LINKED,
+    PKG_OP_PUBCHECK,
     PKG_OP_REVERT,
     PKG_OP_SET_MEDIATOR,
     PKG_OP_SET_PROP_LINKED,
@@ -81,6 +83,7 @@
 API_OP_CHANGE_VARIANT = "change-variant"
 API_OP_DETACH         = "detach-linked"
 API_OP_INSTALL        = "install"
+API_OP_REPAIR         = "repair"
 API_OP_REVERT         = "revert"
 API_OP_SET_MEDIATOR   = "set-mediator"
 API_OP_SYNC           = "sync-linked"
@@ -92,6 +95,7 @@
     API_OP_CHANGE_VARIANT,
     API_OP_DETACH,
     API_OP_INSTALL,
+    API_OP_REPAIR,
     API_OP_REVERT,
     API_OP_SET_MEDIATOR,
     API_OP_SYNC,
@@ -100,12 +104,10 @@
 ])
 
 API_STAGE_DEFAULT  = "default"
-API_STAGE_PUBCHECK = "pubcheck"
 API_STAGE_PLAN     = "plan"
 API_STAGE_PREPARE  = "prepare"
 API_STAGE_EXECUTE  = "execute"
 api_stage_values  = frozenset([
-    API_STAGE_PUBCHECK,
     API_STAGE_DEFAULT,
     API_STAGE_PLAN,
     API_STAGE_PREPARE,
--- a/src/modules/client/pkgplan.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/pkgplan.py	Mon Jul 11 13:49:50 2011 -0700
@@ -26,17 +26,19 @@
 
 import copy
 import itertools
-
-from pkg.client import global_settings
-logger = global_settings.logger
+import os
 
 import pkg.actions
 import pkg.actions.directory as directory
 import pkg.client.api_errors as apx
+import pkg.fmri
 import pkg.manifest as manifest
+import pkg.misc
+
+from pkg.client import global_settings
 from pkg.misc import expanddirs, get_pkg_otw_size, EmptyI
 
-import os.path
+logger = global_settings.logger
 
 class PkgPlan(object):
         """A package plan takes two package FMRIs and an Image, and produces the
@@ -48,8 +50,8 @@
 
         __slots__ = [
             "__destination_mfst",
-            "__executed",
-            "__license_status",
+            "_executed",
+            "_license_status",
             "__origin_mfst",
             "__repair_actions",
             "__xferfiles",
@@ -63,6 +65,52 @@
             "pkg_summary",
         ]
 
+        #
+        # we don't serialize __xferfiles or __xfersize since those should be
+        # recalculated after a plan is loaded (since the contents of the
+        # download cache may have changed).
+        #
+        # we don't serialize __origin_mfst, __destination_mfst, or
+        # __repair_actions since we only support serializing pkgplans which
+        # have had their actions evaluated and merged, and when action
+        # evaluation is complete these fields are cleared.
+        #
+        # we don't serialize our image object pointer.  that has to be reset
+        # after this object is reloaded.
+        #
+        __state__noserialize = frozenset([
+                "__destination_mfst",
+                "__origin_mfst",
+                "__repair_actions",
+                "__xferfiles",
+                "__xfersize",
+                "image",
+        ])
+
+        # make sure all __state__noserialize values are valid
+        assert (__state__noserialize - set(__slots__)) == set()
+
+        # figure out which state we are saving.
+        __state__serialize = set(__slots__) - __state__noserialize
+
+        # describe our state and the types of all objects
+        __state__desc = {
+            "_autofix_pkgs": [ pkg.fmri.PkgFmri ],
+            "_license_status": {
+                str: {
+                    "src": pkg.actions.generic.NSG,
+                    "dest": pkg.actions.generic.NSG,
+                },
+            },
+            "actions": pkg.manifest.ManifestDifference,
+            "destination_fmri": pkg.fmri.PkgFmri,
+            "origin_fmri": pkg.fmri.PkgFmri,
+        }
+
+        __state__commonize = frozenset([
+            pkg.fmri.PkgFmri,
+        ])
+
         def __init__(self, image=None):
                 self.destination_fmri = None
                 self.__destination_mfst = manifest.NullFactoredManifest
@@ -74,12 +122,68 @@
                 self.image = image
                 self.pkg_summary = None
 
-                self.__executed = False
-                self.__license_status = {}
+                self._executed = False
+                self._license_status = {}
                 self.__repair_actions = {}
                 self.__xferfiles = -1
                 self.__xfersize = -1
                 self._autofix_pkgs = []
+                self._hash = None
+
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+
+                # validate unserialized state
+                # (see comments above __state__noserialize)
+                assert obj.__origin_mfst == manifest.NullFactoredManifest
+                assert obj.__destination_mfst == manifest.NullFactoredManifest
+                assert obj.__repair_actions == {}
+
+                # we use __slots__, so create a state dictionary
+                state = {}
+                for k in obj.__state__serialize:
+                        state[k] = getattr(obj, k)
+
+                return pkg.misc.json_encode(PkgPlan.__name__, state,
+                    PkgPlan.__state__desc,
+                    commonize=PkgPlan.__state__commonize, je_state=je_state)
+
+        @staticmethod
+        def setstate(obj, state, jd_state=None):
+                """Update the state of this object using previously serialized
+                state obtained via getstate()."""
+
+                # get the name of the object we're dealing with
+                name = type(obj).__name__
+
+                # decode serialized state into python objects
+                state = pkg.misc.json_decode(name, state,
+                    PkgPlan.__state__desc,
+                    commonize=PkgPlan.__state__commonize,
+                    jd_state=jd_state)
+
+                # we use __slots__, so directly update attributes
+                for k in state:
+                        setattr(obj, k, state[k])
+
+                # update unserialized state
+                # (see comments above __state__noserialize)
+                obj.__origin_mfst = manifest.NullFactoredManifest
+                obj.__destination_mfst = manifest.NullFactoredManifest
+                obj.__repair_actions = {}
+                obj.__xferfiles = -1
+                obj.__xfersize = -1
+                obj.image = None
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                rv = PkgPlan()
+                PkgPlan.setstate(rv, state, jd_state)
+                return rv
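+
+        # Round-trip sketch (illustrative only; assumes 'pp' is a fully
+        # evaluated PkgPlan whose actions have been merged):
+        #
+        #   state = PkgPlan.getstate(pp)    # JSON-encodable state
+        #   pp2 = PkgPlan.fromstate(state)
+        #   pp2.image = pp.image            # image pointer must be reattached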
 
         def __str__(self):
                 s = "%s -> %s\n" % (self.origin_fmri, self.destination_fmri)
@@ -95,77 +199,13 @@
 
                 'dest' must be the destination action for a license."""
 
-                self.__license_status[dest.attrs["license"]] = {
+                self._license_status[dest.attrs["license"]] = {
                     "src": src,
                     "dest": dest,
                     "accepted": False,
                     "displayed": False,
                 }
 
-        def setstate(self, state):
-                """Update the state of this object using the contents of
-                the supplied dictionary."""
-
-                import pkg.fmri
-
-                # convert fmri strings into objects
-                for i in ["src", "dst"]:
-                        if state[i] is not None:
-                                state[i] = pkg.fmri.PkgFmri(state[i])
-
-                # convert lists into tuples/sets
-                # convert action object list into string list
-                for i in ["added", "changed", "removed"]:
-                        for j in range(len(state[i])):
-                                src, dst = state[i][j]
-                                if src is not None:
-                                        src = pkg.actions.fromstr(src)
-                                if dst is not None:
-                                        dst = pkg.actions.fromstr(dst)
-                                state[i][j] = (src, dst)
-
-                self.origin_fmri = state["src"]
-                self.destination_fmri = state["dst"]
-                self.pkg_summary = state["summary"]
-                self.actions = manifest.ManifestDifference(
-                    state["added"], state["changed"], state["removed"])
-
-                # update the license actions associated with this package
-                for src, dest in itertools.chain(self.gen_update_actions(),
-                    self.gen_install_actions()):
-                        if dest.name == "license":
-                                self.__add_license(src, dest)
-
-        def getstate(self):
-                """Returns a dictionary containing the state of this object
-                so that it can be easily stored using JSON."""
-
-                state = {}
-                state["src"] = self.origin_fmri
-                state["dst"] = self.destination_fmri
-                state["summary"] = self.pkg_summary
-                state["added"] = copy.copy(self.actions.added)
-                state["changed"] = copy.copy(self.actions.changed)
-                state["removed"] = copy.copy(self.actions.removed)
-
-                # convert fmri objects into strings
-                for i in ["src", "dst"]:
-                        if isinstance(state[i], pkg.fmri.PkgFmri):
-                                state[i] = str(state[i])
-
-                # convert tuples/sets into lists
-                # convert actions objects into strings
-                for i in ["added", "changed", "removed"]:
-                        for j in range(len(state[i])):
-                                src, dst = state[i][j]
-                                if src is not None:
-                                        src = str(src)
-                                if dst is not None:
-                                        dst = str(dst)
-                                state[i][j] = [src, dst]
-
-                return state
-
         def propose(self, of, om, df, dm):
                 """Propose origin and dest fmri, manifest"""
                 self.origin_fmri = of
@@ -284,7 +324,7 @@
 
                         for a in absent_dirs:
                                 self.actions.removed.append(
-                                    [directory.DirectoryAction(path=a), None])
+                                    (directory.DirectoryAction(path=a), None))
 
                 # Stash information needed by legacy actions.
                 self.pkg_summary = \
@@ -298,7 +338,7 @@
                     EmptyI))
 
                 # No longer needed.
-                self.__repair_actions = None
+                self.__repair_actions = {}
 
                 for src, dest in itertools.chain(self.gen_update_actions(),
                     self.gen_install_actions()):
@@ -325,7 +365,7 @@
                 entry).  Where 'entry' is a dict containing the license status
                 information."""
 
-                for lic, entry in self.__license_status.iteritems():
+                for lic, entry in self._license_status.iteritems():
                         yield lic, entry
 
         def set_license_status(self, plicense, accepted=None, displayed=None):
@@ -346,7 +386,7 @@
                         False   sets displayed status to False
                         True    sets displayed status to True"""
 
-                entry = self.__license_status[plicense]
+                entry = self._license_status[plicense]
                 if accepted is not None:
                         entry["accepted"] = accepted
                 if displayed is not None:
@@ -436,6 +476,16 @@
                 mfile.wait_files()
                 progtrack.download_end_pkg()
 
+        def cacheload(self):
+                """Load previously downloaded data for actions that need it."""
+
+                fmri = self.destination_fmri
+                for src, dest in itertools.chain(*self.actions):
+                        if not dest or not dest.needsdata(src, self):
+                                continue
+                        dest.data = self.image.transport.action_cached(fmri,
+                            dest)
+
         def gen_install_actions(self):
                 for src, dest in self.actions.added:
                         yield src, dest
@@ -450,7 +500,7 @@
 
         def execute_install(self, src, dest):
                 """ perform action for installation of package"""
-                self.__executed = True
+                self._executed = True
                 try:
                         dest.install(self, src)
                 except (pkg.actions.ActionError, EnvironmentError):
@@ -466,7 +516,7 @@
 
         def execute_update(self, src, dest):
                 """ handle action updates"""
-                self.__executed = True
+                self._executed = True
                 try:
                         dest.install(self, src)
                 except (pkg.actions.ActionError, EnvironmentError):
@@ -482,7 +532,7 @@
 
         def execute_removal(self, src, dest):
                 """ handle action removals"""
-                self.__executed = True
+                self._executed = True
                 try:
                         src.remove(self)
                 except (pkg.actions.ActionError, EnvironmentError):
@@ -515,11 +565,11 @@
                 plan execution.  Salvaged items are tracked in the imageplan.
                 """
 
-                assert self.__executed
+                assert self._executed
                 spath = self.image.salvage(path)
                 # get just the file path that was salvaged 
                 fpath = path[len(self.image.get_root()) + 1:]
-                self.image.imageplan.salvaged.append((fpath, spath))
+                self.image.imageplan.pd._salvaged.append((fpath, spath))
 
         def salvage_from(self, local_path, full_destination):
                 """move unpackaged contents to specified destination"""
@@ -527,9 +577,9 @@
                 if local_path.startswith(os.path.sep):
                         local_path = local_path[1:]
 
-                for fpath, spath in self.image.imageplan.salvaged[:]:
+                for fpath, spath in self.image.imageplan.pd._salvaged[:]:
                         if fpath.startswith(local_path):
-                                self.image.imageplan.salvaged.remove((fpath, spath))
+                                self.image.imageplan.pd._salvaged.remove((fpath, spath))
                                 break
                 else:
                         return
@@ -541,11 +591,11 @@
                 return self.__destination_mfst
 
         def clear_dest_manifest(self):
-                self.__destination_mfst = None
+                self.__destination_mfst = manifest.NullFactoredManifest
 
         @property
         def origin_manifest(self):
                 return self.__origin_mfst
 
         def clear_origin_manifest(self):
-                self.__origin_mfst = None
+                self.__origin_mfst = manifest.NullFactoredManifest
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/modules/client/pkgremote.py	Mon Jul 11 13:49:50 2011 -0700
@@ -0,0 +1,508 @@
+#!/usr/bin/python
+#
+# CDDL HEADER START
+#
+# The contents of this file are subject to the terms of the
+# Common Development and Distribution License (the "License").
+# You may not use this file except in compliance with the License.
+#
+# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [yyyy] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
+#
+
+"""
+Interfaces used within the package client to operate on a remote image.
+Primarily used by linked images when recursing into child images.
+"""
+
+# standard python classes
+import os
+import select
+import tempfile
+import traceback
+
+# pkg classes
+import pkg.client.api_errors as apx
+import pkg.client.pkgdefs as pkgdefs
+import pkg.misc
+import pkg.nrlock
+import pkg.pipeutils
+import pkg.pkgsubprocess
+
+from pkg.client import global_settings
+from pkg.client.debugvalues import DebugValues
+
+# debugging aids
+pkgremote_debug = (
+    DebugValues.get_value("pkgremote_debug") is not None or
+    os.environ.get("PKG_PKGREMOTE_DEBUG", None) is not None)
+
+class PkgRemote(object):
+        """This class is used to perform packaging operation on an image.  It
+        utilizes the "remote" subcommand within the pkg.1 client to manipulate
+        images.  Communication between this class and the "pkg remote" process
+        is done via RPC.  This class essentially implements an RPC client and
+        the "pkg remote" process is an RPC server."""
+
+        # variables to keep track of our RPC client call state.
+        __IDLE     = "call-idle"
+        __SETUP    = "call-setup"
+        __STARTED  = "call-started"
+
+        def __init__(self):
+                # initialize RPC server process state
+                self.__rpc_server_proc = None
+                self.__rpc_server_fstdout = None
+                self.__rpc_server_fstderr = None
+                self.__rpc_server_prog_pipe_fobj = None
+
+                # initialize RPC client process state
+                self.__rpc_client = None
+                self.__rpc_client_prog_pipe_fobj = None
+
+                # initialize RPC client call state
+                self.__state = self.__IDLE
+                self.__pkg_op = None
+                self.__kwargs = None
+                self.__async_rpc_caller = None
+                self.__async_rpc_waiter = None
+                self.__result = None
+
+                # sanity check the idle state by re-initializing it
+                self.__set_state_idle()
+
+        def __debug_msg(self, msg, t1=False, t2=False):
+                """Log debugging messages."""
+
+                if not pkgremote_debug:
+                        return
+
+                if t1:
+                        prefix = "PkgRemote(%s) client thread 1: " % id(self)
+                elif t2:
+                        prefix = "PkgRemote(%s) client thread 2: " % id(self)
+                else:
+                        prefix = "PkgRemote(%s) client: " % id(self)
+
+                global_settings.logger.info("%s%s" % (prefix, msg))
+
+        def __rpc_server_fork(self, img_path,
+            server_cmd_pipe, server_prog_pipe_fobj):
+                """Fork off a "pkg remote" server process.
+
+                'img_path' is the path to the image to manipulate.
+
+                'server_cmd_pipe' is the server side of the command pipe which
+                the server will use to receive RPC requests.
+
+                'server_prog_pipe_fobj' is the server side of the progress
+                pipe to which the server will write to indicate progress."""
+
+                pkg_cmd = pkg.misc.api_pkgcmd() + [
+                    "-R", img_path,
+                    "--runid=%s" % global_settings.client_runid,
+                    "remote",
+                    "--ctlfd=%s" % server_cmd_pipe,
+                    "--progfd=%s" % server_prog_pipe_fobj.fileno(),
+                ]
+
+                self.__debug_msg("RPC server cmd: %s" % (" ".join(pkg_cmd)))
+
+                # create temporary files to log standard output and error from
+                # the RPC server.
+                fstdout = tempfile.TemporaryFile()
+                fstderr = tempfile.TemporaryFile()
+
+                try:
+                        p = pkg.pkgsubprocess.Popen(pkg_cmd,
+                            stdout=fstdout, stderr=fstderr)
+                except OSError, e:
+                        # Access to protected member; pylint: disable-msg=W0212
+                        raise apx._convert_error(e)
+
+                # initialization successful, update RPC server state
+                self.__rpc_server_proc = p
+                self.__rpc_server_fstdout = fstdout
+                self.__rpc_server_fstderr = fstderr
+                self.__rpc_server_prog_pipe_fobj = server_prog_pipe_fobj
+
+        def __rpc_server_setup(self, img_path):
+                """Start a new RPC Server process.
+
+                'img_path' is the path to the image to manipulate."""
+
+                # create a pipe for communication between the client and server
+                client_cmd_pipe, server_cmd_pipe = os.pipe()
+
+                # create a pipe that the server can use to indicate
+                # progress to the client.  wrap the pipe fds in python file
+                # objects so that they get closed automatically when those
+                # objects are dereferenced.
+                client_prog_pipe, server_prog_pipe = os.pipe()
+                client_prog_pipe_fobj = os.fdopen(client_prog_pipe, "rw")
+                server_prog_pipe_fobj = os.fdopen(server_prog_pipe, "rw")
+
+                # initialize the client side of the RPC server
+                rpc_client = pkg.pipeutils.PipedServerProxy(client_cmd_pipe)
+
+                # fork off the server
+                self.__rpc_server_fork(img_path,
+                    server_cmd_pipe, server_prog_pipe_fobj)
+
+                # close our reference to server end of the pipe.  (the server
+                # should have already closed its reference to the client end
+                # of the pipe.)
+                os.close(server_cmd_pipe)
+
+                # initialization successful, update RPC client state
+                self.__rpc_client = rpc_client
+                self.__rpc_client_prog_pipe_fobj = client_prog_pipe_fobj
+
+        def __rpc_server_fini(self):
+                """Close connection to a RPC Server process."""
+
+                # destroying the RPC client object closes our connection to
+                # the server, which should cause the server to exit.
+                self.__rpc_client = None
+
+                # if we have a server, kill it and wait for it to exit
+                if self.__rpc_server_proc:
+                        self.__rpc_server_proc.terminate()
+                        self.__rpc_server_proc.wait()
+
+                # clear server state (which closes the rpc pipe file
+                # descriptors)
+                self.__rpc_server_proc = None
+                self.__rpc_server_fstdout = None
+                self.__rpc_server_fstderr = None
+
+                # wait for any client RPC threads to exit
+                if self.__async_rpc_caller:
+                        self.__async_rpc_caller.join()
+                if self.__async_rpc_waiter:
+                        self.__async_rpc_waiter.join()
+
+                # close the progress pipe
+                self.__rpc_server_prog_pipe_fobj = None
+                self.__rpc_client_prog_pipe_fobj = None
+
+        def fileno(self):
+                """Return the progress pipe for the server process.  We use
+                this to monitor progress in the RPC server"""
+
+                return self.__rpc_client_prog_pipe_fobj.fileno()
+
+        def __rpc_client_prog_pipe_drain(self):
+                """Drain the client progress pipe."""
+
+                progfd = self.__rpc_client_prog_pipe_fobj.fileno()
+                while select.select([progfd], [], [], 0)[0]:
+                        os.read(progfd, 10240)
+
+        def __state_verify(self, state=None):
+                """Sanity check our internal call state.
+
+                'state' is an optional parameter that indicates which state
+                we should be in now.  (without this parameter we just verify
+                that the current state, whatever it is, is
+                self-consistent.)"""
+
+                if state is not None:
+                        assert self.__state == state, \
+                            "%s == %s" % (self.__state, state)
+                else:
+                        state = self.__state
+
+                if state == self.__IDLE:
+                        assert self.__pkg_op is None, \
+                            "%s is None" % self.__pkg_op
+                        assert self.__kwargs is None, \
+                            "%s is None" % self.__kwargs
+                        assert self.__async_rpc_caller is None, \
+                            "%s is None" % self.__async_rpc_caller
+                        assert self.__async_rpc_waiter is None, \
+                            "%s is None" % self.__async_rpc_waiter
+                        assert self.__result is None, \
+                            "%s is None" % self.__result
+
+                elif state == self.__SETUP:
+                        assert self.__pkg_op is not None, \
+                            "%s is not None" % self.__pkg_op
+                        assert self.__kwargs is not None, \
+                            "%s is not None" % self.__kwargs
+                        assert self.__async_rpc_caller is None, \
+                            "%s is None" % self.__async_rpc_caller
+                        assert self.__async_rpc_waiter is None, \
+                            "%s is None" % self.__async_rpc_waiter
+                        assert self.__result is None, \
+                            "%s is None" % self.__result
+
+                elif state == self.__STARTED:
+                        assert self.__pkg_op is not None, \
+                            "%s is not None" % self.__pkg_op
+                        assert self.__kwargs is not None, \
+                            "%s is not None" % self.__kwargs
+                        assert self.__async_rpc_caller is not None, \
+                            "%s is not None" % self.__async_rpc_caller
+                        assert self.__async_rpc_waiter is not None, \
+                            "%s is not None" % self.__async_rpc_waiter
+                        assert self.__result is None, \
+                            "%s is None" % self.__result
+
+        def __set_state_idle(self):
+                """Enter the __IDLE state.  This clears all RPC call
+                state."""
+
+                # verify the current state
+                self.__state_verify()
+
+                # setup the new state
+                self.__state = self.__IDLE
+                self.__pkg_op = None
+                self.__kwargs = None
+                self.__async_rpc_caller = None
+                self.__async_rpc_waiter = None
+                self.__result = None
+                self.__debug_msg("set call state: %s" % (self.__state))
+
+                # verify the new state
+                self.__state_verify()
+
+        def __set_state_setup(self, pkg_op, kwargs):
+                """Enter the __SETUP state.  This indicates that we're
+                ready to make a call into the RPC server.
+
+                'pkg_op' is the packaging operation we're going to do via RPC.
+
+                'kwargs' is the argument dict for the RPC operation."""
+
+                # verify the current state
+                self.__state_verify(state=self.__IDLE)
+
+                # setup the new state
+                self.__state = self.__SETUP
+                self.__pkg_op = pkg_op
+                self.__kwargs = kwargs
+                self.__debug_msg("set call state: %s, pkg op: %s" %
+                    (self.__state, pkg_op))
+
+                # verify the new state
+                self.__state_verify()
+
+        def __set_state_started(self, async_rpc_caller, async_rpc_waiter):
+                """Enter the __SETUP state.  This indicates that we've
+                started a call to the RPC server and we're now waiting for
+                that call to return."""
+
+                # verify the current state
+                self.__state_verify(state=self.__SETUP)
+
+                # setup the new state
+                self.__state = self.__STARTED
+                self.__async_rpc_caller = async_rpc_caller
+                self.__async_rpc_waiter = async_rpc_waiter
+                self.__debug_msg("set call state: %s" % (self.__state))
+
+                # verify the new state
+                self.__state_verify()
+
+        def __rpc_async_caller(self, fstdout, fstderr, rpc_client,
+            pkg_op, **kwargs):
+                """RPC thread callback.  This routine is invoked in its own
+                thread (so the caller doesn't have to block) and it makes a
+                blocking call to the RPC server.
+
+                'kwargs' is the argument dict for the RPC operation."""
+
+                self.__debug_msg("starting pkg op: %s; args: %s" %
+                    (pkg_op, kwargs), t1=True)
+
+                # make the RPC call
+                rv = e = None
+                rpc_method = getattr(rpc_client, pkg_op)
+                try:
+                        # Catch "Exception"; pylint: disable-msg=W0703
+                        rv = rpc_method(**kwargs)
+                except Exception, e:
+                        self.__debug_msg("caught exception\n%s" %
+                            (traceback.format_exc()), t1=True)
+                else:
+                        self.__debug_msg("returned: %s" % rv, t1=True)
+
+                # get output generated by the RPC server.  the server
+                # truncates its output file after each operation, so we always
+                # read output from the beginning of the file.
+                fstdout.seek(0)
+                stdout = "".join(fstdout.readlines())
+                fstderr.seek(0)
+                stderr = "".join(fstderr.readlines())
+
+                self.__debug_msg("exiting", t1=True)
+                return (rv, e, stdout, stderr)
+
+        def __rpc_async_waiter(self, async_call, prog_pipe):
+                """RPC waiter thread.  This thread waits on the RPC thread
+                and signals its completion by writing a byte to the progress
+                pipe.
+
+                The RPC call thread can't do this for itself because that
+                results in a race (the RPC thread could block after writing
+                this byte but before actually exiting, and then the client
+                would read the byte, see that the RPC thread is not done, and
+                block while trying to read another byte which would never show
+                up).  This thread solves this problem without using any shared
+                state."""
+
+                self.__debug_msg("starting", t2=True)
+                async_call.join()
+                try:
+                        os.write(prog_pipe.fileno(), ".")
+                except (IOError, OSError):
+                        pass
+                self.__debug_msg("exiting", t2=True)
+
+        def __rpc_client_setup(self, pkg_op, **kwargs):
+                """Prepare to perform a RPC operation.
+
+                'pkg_op' is the packaging operation we're going to do via RPC.
+
+                'kwargs' is the argument dict for the RPC operation."""
+
+                self.__set_state_setup(pkg_op, kwargs)
+
+                # drain the progress pipe
+                self.__rpc_client_prog_pipe_drain()
+
+        def setup(self, img_path, pkg_op, **kwargs):
+                """Public interface to setup a remote packaging operation.
+
+                'img_path' is the path to the image to manipulate.
+
+                'pkg_op' is the packaging operation we're going to do via RPC.
+
+                'kwargs' is the argument dict for the RPC operation."""
+
+                self.__debug_msg("setup()")
+                self.__rpc_server_setup(img_path)
+                self.__rpc_client_setup(pkg_op, **kwargs)
+
+        def start(self):
+                """Public interface to start a remote packaging operation."""
+
+                self.__debug_msg("start()")
+                self.__state_verify(self.__SETUP)
+
+                async_rpc_caller = pkg.misc.AsyncCall()
+                async_rpc_caller.start(
+                     self.__rpc_async_caller,
+                     self.__rpc_server_fstdout,
+                     self.__rpc_server_fstderr,
+                     self.__rpc_client,
+                     self.__pkg_op,
+                     **self.__kwargs)
+
+                async_rpc_waiter = pkg.misc.AsyncCall()
+                async_rpc_waiter.start(
+                    self.__rpc_async_waiter,
+                    async_rpc_caller,
+                    self.__rpc_server_prog_pipe_fobj)
+
+                self.__set_state_started(async_rpc_caller, async_rpc_waiter)
+
+        def is_done(self):
+                """Public interface to query if a remote packaging operation
+                is done."""
+
+                self.__debug_msg("is_done()")
+                assert self.__state in [self.__SETUP, self.__STARTED]
+
+                # drain the progress pipe.
+                self.__rpc_client_prog_pipe_drain()
+
+                if self.__state == self.__SETUP:
+                        rv = False
+                else:
+                        # see if the client is done
+                        rv = self.__async_rpc_caller.is_done()
+
+                return rv
+
+        def result(self):
+                """Public interface to get the result of a remote packaging
+                operation.  If the operation is not yet completed, this
+                interface will block until it finishes.  The return value is a
+                tuple which contains:
+
+                'rv' is the return value of the RPC operation
+
+                'e' is any exception generated by the RPC operation
+
+                'stdout' is the standard output generated by the RPC server
+                during the RPC operation.
+
+                'stderr' is the standard error output generated by the RPC
+                server during the RPC operation."""
+
+                self.__debug_msg("result()")
+                self.__state_verify(self.__STARTED)
+
+                rvtuple = e = None
+                try:
+                        rvtuple = self.__async_rpc_caller.result()
+                except pkg.misc.AsyncCallException, e:
+                        pass
+
+                # assume we didn't get any results
+                rv = pkgdefs.EXIT_OOPS
+                stdout = stderr = ""
+
+                # unpack our results if we got any
+                if e is None:
+                        # unpack our results.
+                        # our results can contain an embedded exception.
+                        rv, e, stdout, stderr = rvtuple
+
+                # make sure the return value is an int
+                if type(rv) != int:
+                        rv = pkgdefs.EXIT_OOPS
+
+                # if we got any errors, make sure we return OOPS
+                if e is not None:
+                        rv = pkgdefs.EXIT_OOPS
+
+                # shutdown the RPC server
+                self.__rpc_server_fini()
+
+                # grab the pkg op before __set_state_idle() clears it, then
+                # pack up our results and enter the done state
+                pkg_op = self.__pkg_op
+                self.__set_state_idle()
+
+                return (pkg_op, rv, e, stdout, stderr)
+
+        def abort(self):
+                """Public interface to abort an in-progress RPC operation."""
+
+                assert self.__state in [self.__SETUP, self.__STARTED]
+
+                self.__debug_msg("call abort requested")
+
+                # shutdown the RPC server
+                self.__rpc_server_fini()
+
+                # enter the idle state
+                self.__set_state_idle()
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/modules/client/plandesc.py	Mon Jul 11 13:49:50 2011 -0700
@@ -0,0 +1,659 @@
+#!/usr/bin/python
+#
+# CDDL HEADER START
+#
+# The contents of this file are subject to the terms of the
+# Common Development and Distribution License (the "License").
+# You may not use this file except in compliance with the License.
+#
+# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [yyyy] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
+#
+
+"""
+PlanDescription and _ActionPlan classes
+
+These classes are part of the public API, and any changes here may require
+bumping CURRENT_API_VERSION in pkg.api
+
+The PlanDescription class is a public interface which contains all the data
+associated with an image-modifying operation.
+
+The _ActionPlan class is a private interface used to keep track of actions
+modified within an image during an image-modifying operation.
+"""
+
+import collections
+import itertools
+import operator
+import simplejson as json
+
+import pkg.actions
+import pkg.client.actuator
+import pkg.client.api_errors as apx
+import pkg.client.linkedimage as li
+import pkg.client.pkgplan
+import pkg.facet
+import pkg.fmri
+import pkg.misc
+import pkg.version
+
+from pkg.api_common import (PackageInfo, LicenseInfo)
+
+UNEVALUATED       = 0 # nothing done yet
+EVALUATED_PKGS    = 1 # established fmri changes
+MERGED_OK         = 2 # created single merged plan
+EVALUATED_OK      = 3 # ready to execute
+PREEXECUTED_OK    = 4 # finished w/ preexecute
+PREEXECUTED_ERROR = 5 # whoops
+EXECUTED_OK       = 6 # finished execution
+EXECUTED_ERROR    = 7 # failed
+
+class _ActionPlan(collections.namedtuple("_ActionPlan", "p src dst")):
+        """A named tuple used to keep track of all the actions that will be
+        executed during an image-modifying procecure."""
+        # Class has no __init__ method; pylint: disable-msg=W0232
+        # Use __slots__ on an old style class; pylint: disable-msg=E1001
+
+        __slots__ = []
+
+        __state__desc = tuple([
+            pkg.client.pkgplan.PkgPlan,
+            pkg.actions.generic.NSG,
+            pkg.actions.generic.NSG,
+        ])
+
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that that can be easily stored using JSON, pickle, etc."""
+                return pkg.misc.json_encode(_ActionPlan.__name__, tuple(obj),
+                    _ActionPlan.__state__desc, je_state=je_state)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                # Access to protected member; pylint: disable-msg=W0212
+
+                # get the name of the object we're dealing with
+                name = _ActionPlan.__name__
+
+                # decode serialized state into python objects
+                state = pkg.misc.json_decode(name, state,
+                    _ActionPlan.__state__desc, jd_state=jd_state)
+
+                return _ActionPlan(*state)
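+
+        # Illustrative use (hypothetical names):
+        #
+        #   ap = _ActionPlan(some_pkgplan, old_action, new_action)
+        #   state = _ActionPlan.getstate(ap)   # JSON-encodable
+        #   ap2 = _ActionPlan.fromstate(state)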
+
+
+class PlanDescription(object):
+        """A class which describes the changes the plan will make."""
+
+        __state__desc = {
+            "_actuators": pkg.client.actuator.Actuator,
+            "_cfg_mediators": {
+                str: {
+                    "version": pkg.version.Version,
+                    "implementation-version": pkg.version.Version,
+                }
+            },
+            "_changed_facets": pkg.facet.Facets,
+            "_fmri_changes": [ ( pkg.fmri.PkgFmri, pkg.fmri.PkgFmri ) ],
+            "_new_avoid_obs": ( set(), set() ),
+            "_new_facets": pkg.facet.Facets,
+            "_new_mediators": collections.defaultdict(set, {
+                str: {
+                    "version": pkg.version.Version,
+                    "implementation-version": pkg.version.Version,
+                }
+            }),
+            "_removed_facets": set(),
+            "_rm_aliases": { str: set() },
+            "added_groups": { str: pkg.fmri.PkgFmri },
+            "added_users": { str: pkg.fmri.PkgFmri },
+            "children_ignored": [ li.LinkedImageName ],
+            "children_nop": [ li.LinkedImageName ],
+            "children_planned": [ li.LinkedImageName ],
+            "install_actions": [ _ActionPlan ],
+            "li_ppkgs": frozenset([ pkg.fmri.PkgFmri ]),
+            "li_props": { li.PROP_NAME: li.LinkedImageName },
+            "pkg_plans": [ pkg.client.pkgplan.PkgPlan ],
+            "removal_actions": [ _ActionPlan ],
+            "removed_groups": { str: pkg.fmri.PkgFmri },
+            "removed_users": { str: pkg.fmri.PkgFmri },
+            "update_actions": [ _ActionPlan ],
+        }
+
+        __state__commonize = frozenset([
+            pkg.actions.generic.NSG,
+            pkg.client.pkgplan.PkgPlan,
+            pkg.fmri.PkgFmri,
+        ])
+
+        def __init__(self, op=None):
+                self.state = UNEVALUATED
+                self._op = op
+
+                #
+                # Properties set when state >= EVALUATED_PKGS
+                #
+                self._image_lm = None
+                self._cfg_mediators = {}
+                self._varcets_change = False
+                self._new_variants = None
+                self._changed_facets = pkg.facet.Facets()
+                self._removed_facets = set()
+                self._new_facets = None
+                self._new_mediators = collections.defaultdict(set)
+                self._mediators_change = False
+                self._new_avoid_obs = (set(), set())
+                self._fmri_changes = [] # install  (None, fmri)
+                                        # remove   (oldfmri, None)
+                                        # update   (oldfmri, newfmri|oldfmri)
+                self._solver_summary = []
+                self._solver_errors = None
+                self.li_attach = False
+                self.li_ppkgs = frozenset()
+                self.li_ppubs = None
+                self.li_props = {}
+
+                #
+                # Properties set when state >= EVALUATED_OK
+                #
+                # raw actions
+                self.pkg_plans = []
+                # merged actions
+                self.removal_actions = []
+                self.update_actions = []
+                self.install_actions = []
+                # smf and other actuators (driver actions get added during
+                # execution stage).
+                self._actuators = pkg.client.actuator.Actuator()
+                # Used to track users and groups that are part of operation.
+                self.added_groups = {}
+                self.added_users = {}
+                self.removed_groups = {}
+                self.removed_users = {}
+                # Data retrieval statistics for preexecute()
+                self._dl_npkgs = 0
+                self._dl_nfiles = 0
+                self._dl_nbytes = 0
+                # plan properties
+                self._cbytes_added = 0 # size of compressed files
+                self._bytes_added = 0  # size of files added
+                self._need_boot_archive = None
+                # child properties
+                self.child_op = None
+                self.child_kwargs = {}
+                self.children_ignored = None
+                self.children_planned = []
+                self.children_nop = []
+                # driver aliases to remove
+                self._rm_aliases = {}
+
+                #
+                # Properties set when state >= EXECUTED_OK
+                #
+                self._salvaged = []
+
+                #
+                # Set by imageplan.set_be_options()
+                #
+                self._backup_be = None
+                self._backup_be_name = None
+                self._new_be = None
+                self._be_name = None
+                self._be_activate = False
+
+                # Accessed via imageplan.update_index
+                self._update_index = True
+
+                # stats about the current image
+                self._cbytes_avail = 0  # avail space for downloads
+                self._bytes_avail = 0   # avail space for fs
+
+        @staticmethod
+        def getstate(obj, je_state=None, reset_volatiles=False):
+                """Returns the serialized state of this object in a format
+                that that can be easily stored using JSON, pickle, etc."""
+                # Access to protected member; pylint: disable-msg=W0212
+
+                if reset_volatiles:
+                        # backup and clear volatiles
+                        _bytes_avail = obj._bytes_avail
+                        _cbytes_avail = obj._cbytes_avail
+                        obj._bytes_avail = obj._cbytes_avail = 0
+
+                name = PlanDescription.__name__
+                state = pkg.misc.json_encode(name, obj.__dict__,
+                    PlanDescription.__state__desc,
+                    commonize=PlanDescription.__state__commonize,
+                    je_state=je_state)
+
+                # add a state version encoding identifier
+                state[name] = 0
+
+                if reset_volatiles:
+                        # restore volatiles
+                        obj._bytes_avail = _bytes_avail
+                        obj._cbytes_avail = _cbytes_avail
+
+                return state
+
+        @staticmethod
+        def setstate(obj, state, jd_state=None):
+                """Update the state of this object using previously serialized
+                state obtained via getstate()."""
+                # Access to protected member; pylint: disable-msg=W0212
+
+                # get the name of the object we're dealing with
+                name = PlanDescription.__name__
+
+                # version check and delete the encoding identifier
+                assert state[name] == 0
+                del state[name]
+
+                # decode serialized state into python objects
+                state = pkg.misc.json_decode(name, state,
+                    PlanDescription.__state__desc,
+                    commonize=PlanDescription.__state__commonize,
+                    jd_state=jd_state)
+
+                # bulk update
+                obj.__dict__.update(state)
+
+                # clear volatiles
+                obj._cbytes_avail = 0
+                obj._bytes_avail = 0
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                rv = PlanDescription()
+                PlanDescription.setstate(rv, state, jd_state)
+                return rv
+
+        def _save(self, fobj, reset_volatiles=False):
+                """Save a json encoded representation of this plan
+                description objects into the specified file object."""
+
+                state = PlanDescription.getstate(self,
+                    reset_volatiles=reset_volatiles)
+                try:
+                        fobj.truncate()
+                        json.dump(state, fobj, encoding="utf-8")
+                        fobj.flush()
+                except OSError, e:
+                        # Access to protected member; pylint: disable-msg=W0212
+                        raise apx._convert_error(e)
+
+                del state
+
+        def _load(self, fobj):
+                """Load a json encoded representation of a plan description
+                from the specified file object."""
+
+                assert self.state == UNEVALUATED
+
+                try:
+                        fobj.seek(0)
+                        state = json.load(fobj, encoding="utf-8")
+                except OSError, e:
+                        # Access to protected member; pylint: disable-msg=W0212
+                        raise apx._convert_error(e)
+
+                PlanDescription.setstate(self, state)
+                del state
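+
+        # Persistence sketch (illustrative; 'plan' is an evaluated
+        # PlanDescription and 'fobj' is a seekable file opened for update):
+        #
+        #   plan._save(fobj)          # json dump of getstate()
+        #   pd = PlanDescription()    # must still be UNEVALUATED
+        #   pd._load(fobj)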
+
+        def _executed_ok(self):
+                """A private interface used after a plan is successfully
+                invoked to free up memory."""
+
+                # reduce memory consumption
+                self._fmri_changes = []
+                self._actuators = pkg.client.actuator.Actuator()
+                self.added_groups = {}
+                self.added_users = {}
+                self.removed_groups = {}
+                self.removed_users = {}
+
+        @property
+        def executed(self):
+                """A boolean indicating if we attempted to execute this
+                plan."""
+                return self.state in [EXECUTED_OK, EXECUTED_ERROR]
+
+        @property
+        def services(self):
+                """Returns a list of string tuples describing affected services
+                (action, SMF FMRI)."""
+                return sorted(
+                    ((str(a), str(smf_fmri))
+                    for a, smf_fmri in self._actuators.get_services_list()),
+                        key=operator.itemgetter(0, 1)
+                )
+
+        @property
+        def mediators(self):
+                """Returns a list of three-tuples containing information about
+                the mediators.  The first element in the tuple is the name of
+                the mediator.  The second element is a tuple containing the
+                original version and source and the new version and source of
+                the mediator.  The third element is a tuple containing the
+                original implementation and source and new implementation and
+                source."""
+
+                ret = []
+
+                if not self._mediators_change or \
+                    (not self._cfg_mediators and not self._new_mediators):
+                        return ret
+
+                def get_mediation(mediators, m):
+                        # Missing docstring; pylint: disable-msg=C0111
+                        mimpl = mver = mimpl_source = \
+                            mver_source = None
+                        if m in mediators:
+                                mimpl = mediators[m].get(
+                                    "implementation")
+                                mimpl_ver = mediators[m].get(
+                                    "implementation-version")
+                                if mimpl_ver:
+                                        mimpl_ver = \
+                                            mimpl_ver.get_short_version()
+                                if mimpl and mimpl_ver:
+                                        mimpl += "(@%s)" % mimpl_ver
+                                mimpl_source = mediators[m].get(
+                                    "implementation-source")
+
+                                mver = mediators[m].get("version")
+                                if mver:
+                                        mver = mver.get_short_version()
+                                mver_source = mediators[m].get(
+                                    "version-source")
+                        return mimpl, mver, mimpl_source, mver_source
+
+                for m in sorted(set(self._new_mediators.keys() +
+                    self._cfg_mediators.keys())):
+                        orig_impl, orig_ver, orig_impl_source, \
+                            orig_ver_source = get_mediation(
+                                self._cfg_mediators, m)
+                        new_impl, new_ver, new_impl_source, new_ver_source = \
+                            get_mediation(self._new_mediators, m)
+
+                        if orig_ver == new_ver and \
+                            orig_ver_source == new_ver_source and \
+                            orig_impl == new_impl and \
+                            orig_impl_source == new_impl_source:
+                                # Mediation not changed.
+                                continue
+
+                        out = (m,
+                            ((orig_ver, orig_ver_source),
+                            (new_ver, new_ver_source)),
+                            ((orig_impl, orig_impl_source),
+                            (new_impl, new_impl_source)))
+
+                        ret.append(out)
+
+                return ret
+
+        def get_mediators(self):
+                """Returns list of strings describing mediator changes."""
+
+                ret = []
+                for m, ver, impl in sorted(self.mediators):
+                        ((orig_ver, orig_ver_source),
+                            (new_ver, new_ver_source)) = ver
+                        ((orig_impl, orig_impl_source),
+                            (new_impl, new_impl_source)) = impl
+                        out = "mediator %s:\n" % m
+                        if orig_ver and new_ver:
+                                out += "           version: %s (%s default) " \
+                                    "-> %s (%s default)\n" % (orig_ver,
+                                    orig_ver_source, new_ver, new_ver_source)
+                        elif orig_ver:
+                                out += "           version: %s (%s default) " \
+                                    "-> None\n" % (orig_ver, orig_ver_source)
+                        elif new_ver:
+                                out += "           version: None -> " \
+                                    "%s (%s default)\n" % (new_ver,
+                                    new_ver_source)
+
+                        if orig_impl and new_impl:
+                                out += "    implementation: %s (%s default) " \
+                                    "-> %s (%s default)\n" % (orig_impl,
+                                    orig_impl_source, new_impl, new_impl_source)
+                        elif orig_impl:
+                                out += "    implementation: %s (%s default) " \
+                                    "-> None\n" % (orig_impl, orig_impl_source)
+                        elif new_impl:
+                                out += "    implementation: None -> " \
+                                    "%s (%s default)\n" % (new_impl,
+                                    new_impl_source)
+                        ret.append(out)
+                return ret
+
+        @property
+        def plan_desc(self):
+                """Get the proposed fmri changes."""
+                return self._fmri_changes
+
+        @property
+        def salvaged(self):
+                """A list of tuples of items that were salvaged during plan
+                execution.  Each tuple is of the form (original_path,
+                salvage_path), where 'original_path' is the path of the item
+                before it was salvaged and 'salvage_path' is where the item was
+                moved to.  This property is only valid after plan execution
+                has completed."""
+                assert self.executed
+                return self._salvaged
+
+        @property
+        def varcets(self):
+                """Returns a tuple of two lists containing the facet and variant
+                changes in this plan."""
+                vs = []
+                if self._new_variants:
+                        vs = self._new_variants.items()
+                fs = []
+                fs.extend(self._changed_facets.items())
+                fs.extend([(f, None) for f in self._removed_facets])
+                return (vs, fs)
+
+        def get_varcets(self):
+                """Returns a formatted list of strings representing the
+                variant/facet changes in this plan"""
+                vs, fs = self.varcets
+                ret = []
+                ret.extend(["variant %s: %s" % a for a in vs])
+                ret.extend(["  facet %s: %s" % a for a in fs])
+                return ret
+
+        def get_changes(self):
+                """A generation function that yields tuples of PackageInfo
+                objects of the form (src_pi, dest_pi).
+
+                If 'src_pi' is None, then 'dest_pi' is the package being
+                installed.
+
+                If 'src_pi' is not None, and 'dest_pi' is None, 'src_pi'
+                is the package being removed.
+
+                If 'src_pi' is not None, and 'dest_pi' is not None,
+                then 'src_pi' is the original version of the package,
+                and 'dest_pi' is the new version of the package it is
+                being upgraded to."""
+
+                for pp in sorted(self.pkg_plans,
+                    key=operator.attrgetter("origin_fmri", "destination_fmri")):
+                        yield (PackageInfo.build_from_fmri(pp.origin_fmri),
+                            PackageInfo.build_from_fmri(pp.destination_fmri))
+
+        def get_actions(self):
+                """A generator function that yields action change descriptions
+                in the order they will be performed."""
+
+                # Unused variable 'pplan'; pylint: disable-msg=W0612
+                for pplan, o_act, d_act in itertools.chain(
+                    self.removal_actions,
+                    self.update_actions,
+                    self.install_actions):
+                # pylint: enable-msg=W0612
+                        yield "%s -> %s" % (o_act, d_act)
+
+        def get_licenses(self, pfmri=None):
+                """A generator function that yields information about the
+                licenses related to the current plan in tuples of the form
+                (dest_fmri, src, dest, accepted, displayed) for the given
+                package FMRI or all packages in the plan.  This is only
+                available for licenses that are being installed or updated.
+
+                'dest_fmri' is the FMRI of the package being installed.
+
+                'src' is a LicenseInfo object if the license of the related
+                package is being updated; otherwise it is None.
+
+                'dest' is the LicenseInfo object for the license that is being
+                installed.
+
+                'accepted' is a boolean value indicating that the license has
+                been marked as accepted for the current plan.
+
+                'displayed' is a boolean value indicating that the license has
+                been marked as displayed for the current plan."""
+
+                for pp in self.pkg_plans:
+                        dfmri = pp.destination_fmri
+                        if pfmri and dfmri != pfmri:
+                                continue
+
+                        # Unused variable; pylint: disable-msg=W0612
+                        for lid, entry in pp.get_licenses():
+                                src = entry["src"]
+                                src_li = None
+                                if src:
+                                        src_li = LicenseInfo(pp.origin_fmri,
+                                            src, img=pp.image)
+
+                                dest = entry["dest"]
+                                dest_li = None
+                                if dest:
+                                        dest_li = LicenseInfo(
+                                            pp.destination_fmri, dest,
+                                            img=pp.image)
+
+                                yield (pp.destination_fmri, src_li, dest_li,
+                                    entry["accepted"], entry["displayed"])
+
+                        if pfmri:
+                                break
+
+        def get_solver_errors(self):
+                """Returns a list of strings for all FMRIs evaluated by the
+                solver explaining why they were rejected.  (All packages
+                found in solver's trim database.)  Only available if
+                DebugValues["plan"] was set when the plan was created.
+                """
+
+                assert self.state >= EVALUATED_PKGS, \
+                        "%s >= %s" % (self.state, EVALUATED_PKGS)
+
+                # in case this operation doesn't use the solver
+                if self._solver_errors is None:
+                        return []
+
+                return self._solver_errors
+
+        @property
+        def plan_type(self):
+                """Return the type of plan that was created (ex:
+                API_OP_UPDATE)."""
+                return self._op
+
+        @property
+        def update_index(self):
+                """Boolean indicating if indexes will be updated as part of an
+                image-modifying operation."""
+                return self._update_index
+
+        @property
+        def backup_be(self):
+                """Either None, True, or False.  If None then executing this
+                plan may create a backup BE.  If False, then executing this
+                plan will not create a backup BE.  If True, then executing
+                this plan will create a backup BE."""
+                return self._backup_be
+
+        @property
+        def be_name(self):
+                """The name of a new BE that will be created if this plan is
+                executed."""
+                return self._be_name
+
+        @property
+        def backup_be_name(self):
+                """The name of a new backup BE that will be created if this
+                plan is executed."""
+                return self._backup_be_name
+
+        @property
+        def activate_be(self):
+                """A boolean value indicating whether any new boot environment
+                will be set active on next boot."""
+                return self._be_activate
+
+        @property
+        def reboot_needed(self):
+                """A boolean value indicating that execution of the plan will
+                require a restart of the system to take effect if the target
+                image is an existing boot environment."""
+                return self._actuators.reboot_needed()
+
+        @property
+        def new_be(self):
+                """A boolean value indicating that execution of the plan will
+                take place in a clone of the current live environment"""
+                return self._new_be
+
+        @property
+        def update_boot_archive(self):
+                """A boolean value indicating whether or not the boot archive
+                will be rebuilt"""
+                return self._need_boot_archive
+
+        @property
+        def bytes_added(self):
+                """Estimated number of bytes added"""
+                return self._bytes_added
+
+        @property
+        def cbytes_added(self):
+                """Estimated number of download cache bytes added"""
+                return self._cbytes_added
+
+        @property
+        def bytes_avail(self):
+                """Estimated number of bytes available in image /"""
+                return self._bytes_avail
+
+        @property
+        def cbytes_avail(self):
+                """Estimated number of bytes available in download cache"""
+                return self._cbytes_avail
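
The properties above make PlanDescription a read-only view of a plan that
survives serialization.  As a minimal usage sketch (not part of this
changeset; 'api_inst' is assumed to be a pkg.client.api.ImageInterface and
error handling is omitted), a client drains the plan generator and then
inspects the description:

    # hypothetical client-side consumption of the new properties
    for pd in api_inst.gen_plan_update():
            continue
    plan = api_inst.describe()
    if plan.new_be:
            print "will clone the live BE as: %s" % plan.be_name
    if plan.reboot_needed:
            print "a reboot will be required"
    print "estimated bytes added: %d" % plan.bytes_added
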
--- a/src/modules/client/progress.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/progress.py	Mon Jul 11 13:49:50 2011 -0700
@@ -25,8 +25,9 @@
 #
 
 import errno
+import itertools
+import os
 import sys
-import os
 import time
 
 from pkg.client import global_settings
@@ -57,11 +58,13 @@
             External consumers should base their subclasses on the
             NullProgressTracker class. """
 
-        def __init__(self, parsable_version=None, quiet=False, verbose=0):
+        def __init__(self, parsable_version=None, quiet=False, verbose=0,
+            progfd=None):
 
                 self.parsable_version = parsable_version
                 self.quiet = quiet
                 self.verbose = verbose
+                self.progfd = progfd
 
                 self.reset()
 
@@ -127,6 +130,27 @@
 
                 self.last_printed = 0 # when did we last emit status?
 
+        def _progfd_progress(self):
+                """In a child image, invoking this will tell the parent that
+                progress is being made.  This function should generally be
+                invoked when:
+                    - we reach the end of an operation which doesn't give
+                      detailed progress updates (ex: catalog cache updates)
+                    - when we're updating progress on an operation (ex: during
+                      a catalog refresh when we finish refreshing a
+                      publisher.)
+                """
+
+                if self.progfd is None:
+                        return
+                try:
+                        # write a byte out the progress pipe
+                        os.write(self.progfd, ".")
+                except IOError, e:
+                        if e.errno == errno.EPIPE:
+                                raise PipeError, e
+                        raise
+
         def catalog_start(self, catalog):
                 self.cat_cur_catalog = catalog
                 self.cat_output_start()
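
The progfd mechanism above is the child half of a one-byte progress
protocol: each call to _progfd_progress() writes a '.' down the pipe.  A
hedged sketch of what the parent side might look like (the real reader
lives in the pkgremote/linked-image plumbing, which this diff does not
show; the function and its parameters are hypothetical):

    import os
    import select

    def watch_child_progress(progfd, tracker, lin):
            # poll the pipe; every byte the child wrote becomes one
            # li_recurse_progress() callback (i.e. one spinner tick)
            while True:
                    readable = select.select([progfd], [], [], 1.0)[0]
                    if not readable:
                            continue
                    data = os.read(progfd, 1024)
                    if not data:
                            break        # EOF: child closed its end
                    for _ in data:
                            tracker.li_recurse_progress(lin)
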
@@ -139,12 +163,14 @@
 
         def cache_catalogs_done(self):
                 self.cache_cats_output_done()
+                self._progfd_progress()
 
         def load_catalog_cache_start(self):
                 self.load_cat_cache_output_start()
 
         def load_catalog_cache_done(self):
                 self.load_cat_cache_output_done()
+                self._progfd_progress()
 
         def refresh_start(self, pub_cnt):
                 self.refresh_pub_cnt = pub_cnt
@@ -155,6 +181,7 @@
                 self.refresh_cur_pub = pub
                 self.refresh_cur_pub_cnt += 1
                 self.refresh_output_progress()
+                self._progfd_progress()
 
         def refresh_done(self):
                 self.refresh_output_done()
@@ -167,6 +194,7 @@
                 if fmri:
                         self.eval_cur_fmri = fmri
                 self.eval_output_progress()
+                self._progfd_progress()
 
         def evaluate_done(self, install_npkgs=-1, \
             update_npkgs=-1, remove_npkgs=-1):
@@ -246,6 +274,7 @@
                 elif self.republish_started:
                         if self.dl_goal_nbytes != 0:
                                 self.republish_output()
+                self._progfd_progress()
 
         def download_done(self):
                 """ Call when all downloading is finished """
@@ -288,6 +317,7 @@
                 self.act_cur_nactions += 1
                 if self.act_goal_nactions > 0:
                         self.act_output()
+                self._progfd_progress()
 
         def actions_done(self):
                 if self.act_goal_nactions > 0:
@@ -303,6 +333,7 @@
                 self.ind_cur_nitems += 1
                 if self.ind_goal_nitems > 0:
                         self.ind_output()
+                self._progfd_progress()
 
         def index_done(self):
                 if self.ind_goal_nitems > 0:
@@ -321,6 +352,7 @@
                 self.item_cur_nitems += 1
                 if self.item_goal_nitems > 0:
                         self.item_output()
+                self._progfd_progress()
 
         def item_done(self):
                 if self.item_goal_nitems > 0:
@@ -426,6 +458,7 @@
                 return
 
         def refresh_output_progress(self):
+                self._progfd_progress()
                 return
 
         def refresh_output_done(self):
@@ -443,17 +476,30 @@
                 raise NotImplementedError("eval_output_done() not implemented "
                     "in superclass")
 
-        def li_recurse_start(self, lin):
-                """Called when we recurse into a child linked image."""
+        def li_recurse_start(self):
+                """Called when we're preparing to recurse into linked images."""
+                raise NotImplementedError("li_recurse_start()"
+                    " not implemented in superclass")
 
-                raise NotImplementedError("li_recurse_start() not implemented "
-                    "in superclass")
+        def li_recurse_end(self):
+                """Called when we're finished recursing into linked images."""
+                raise NotImplementedError("li_recurse_end()"
+                    " not implemented in superclass")
 
-        def li_recurse_end(self, lin):
-                """Called when we return from a child linked image."""
+        def li_recurse(self, lic_running, done, pending):
+                """Called when recursing into new linked images."""
+                raise NotImplementedError("li_recurse()"
+                    " not implemented in superclass")
 
-                raise NotImplementedError("li_recurse_end() not implemented "
-                    "in superclass")
+        def li_recurse_output(self, lin, stdout, stderr):
+                """Called when displaying output from linked images."""
+                raise NotImplementedError("li_recurse_output()"
+                    " not implemented in superclass")
+
+        def li_recurse_progress(self, lin):
+                """Called during recursion when a child makes progress."""
+                raise NotImplementedError("li_recurse_progress()"
+                    " not implemented in superclass")
 
         def ver_output(self):
                 raise NotImplementedError("ver_output() not implemented in "
@@ -542,9 +588,10 @@
         """ This progress tracker outputs nothing, but is semantically
             intended to be "quiet"  See also NullProgressTracker below. """
 
-        def __init__(self, parsable_version=None):
+        def __init__(self, parsable_version=None, progfd=None):
                 ProgressTracker.__init__(self,
-                    parsable_version=parsable_version, quiet=True)
+                    parsable_version=parsable_version, quiet=True,
+                    progfd=progfd)
 
         def cat_output_start(self):
                 return
@@ -573,10 +620,19 @@
         def eval_output_done(self):
                 return
 
-        def li_recurse_start(self, lin):
+        def li_recurse_start(self):
+                return
+
+        def li_recurse_end(self):
                 return
 
-        def li_recurse_end(self, lin):
+        def li_recurse(self, lic_running, done, pending):
+                return
+
+        def li_recurse_output(self, lin, stdout, stderr):
+                return
+
+        def li_recurse_progress(self, lin):
                 return
 
         def ver_output(self):
@@ -652,10 +708,11 @@
             and so is appropriate for sending through a pipe.  This code
             is intended to be platform neutral. """
 
-        def __init__(self, parsable_version=None, quiet=False, verbose=0):
+        def __init__(self, parsable_version=None, quiet=False, verbose=0,
+            progfd=None):
                 ProgressTracker.__init__(self,
                     parsable_version=parsable_version, quiet=quiet,
-                    verbose=verbose)
+                    verbose=verbose, progfd=progfd)
                 self.last_printed_pkg = None
                 self.msg_prefix = ""
 
@@ -691,29 +748,56 @@
         def eval_output_done(self):
                 return
 
-        def li_recurse_start(self, lin):
-                msg = _("Recursing into linked image: %s") % lin
-                msg = "%s%s" % (self.msg_prefix, msg)
-
+        def __msg(self, msg, prefix=True, newline=True):
+                if prefix:
+                        msg = "%s%s" % (self.msg_prefix, msg)
                 try:
-                        print "%s\n" % msg
+                        print "%s" % msg,
+                        if newline:
+                                print
                         sys.stdout.flush()
                 except IOError, e:
                         if e.errno == errno.EPIPE:
                                 raise PipeError, e
                         raise
 
-        def li_recurse_end(self, lin):
-                msg = _("Returning from linked image: %s") % lin
-                msg = "%s%s" % (self.msg_prefix, msg)
+        def li_recurse_start(self):
+                msg = _("Preparing to process linked images.")
+                self.__msg(msg)
+
+        def li_recurse_end(self):
+                msg = _("Finished processing linked images.")
+                self.__msg(msg)
+
+        def li_recurse(self, lic_running, done, pending):
+                assert len(lic_running) > 0
+
+                total = len(lic_running) + pending + done
+                running = " ".join([str(lic.child_name) for lic in lic_running])
+                msg = _("Linked Images: %d/%d done; %d working: %s") % \
+                    (done, total, len(lic_running), running)
+                self.__msg(msg)
 
-                try:
-                        print "%s\n" % msg
-                        sys.stdout.flush()
-                except IOError, e:
-                        if e.errno == errno.EPIPE:
-                                raise PipeError, e
-                        raise
+        def li_recurse_output(self, lin, stdout, stderr):
+                # nothing to display
+                if not stdout and not stderr:
+                        return
+
+                # don't display anything
+                if self.parsable_version is not None:
+                        return
+
+                msg = _("\nStart output from linked image: %s") % lin
+                self.__msg(msg)
+                if stdout:
+                        self.__msg(stdout, prefix=False, newline=False)
+                if stderr:
+                        self.__msg(stderr, prefix=False)
+                msg = _("End output from linked image: %s") % lin
+                self.__msg(msg)
+
+        def li_recurse_progress(self, lin):
+                return
 
         def ver_output(self):
                 return
@@ -834,10 +918,11 @@
         #
         TERM_DELAY = 0.10
 
-        def __init__(self, parsable_version=None, quiet=False, verbose=0):
+        def __init__(self, parsable_version=None, quiet=False, verbose=0,
+            progfd=None):
                 ProgressTracker.__init__(self,
                     parsable_version=parsable_version, quiet=quiet,
-                    verbose=verbose)
+                    verbose=verbose, progfd=progfd)
 
                 self.act_started = False
                 self.ind_started = False
@@ -982,32 +1067,98 @@
                 self.__generic_done()
                 self.last_print_time = 0
 
-        def li_recurse_start(self, lin):
-                self.__generic_done()
+        def __msg(self, msg, prefix=True, newline=True):
+                if prefix:
+                        msg = "%s%s" % (self.msg_prefix, msg)
 
-                msg = _("Recursing into linked image: %s") % lin
-                msg = "%s%s" % (self.msg_prefix, msg)
+                suffix = ""
+                if self.needs_cr and len(msg) < self.curstrlen:
+                        # wipe out any extra old text
+                        suffix = " " * (self.curstrlen - len(msg))
+                        msg = "%s%s" % (msg, suffix)
 
                 try:
-                        print "%s" % msg, self.cr
                         self.curstrlen = len(msg)
+                        if self.needs_cr:
+                                print self.cr,
+                        print msg,
+                        self.needs_cr = True
+                        if newline:
+                                print
+                                self.needs_cr = False
                         sys.stdout.flush()
                 except IOError, e:
                         if e.errno == errno.EPIPE:
                                 raise PipeError, e
                         raise
 
-        def li_recurse_end(self, lin):
-                msg = _("Returning from linked image: %s") % lin
-                msg = "%s%s" % (self.msg_prefix, msg)
+        def li_recurse_start(self):
+                msg = _("Preparing to process linked images.")
+                self.__msg(msg)
+
+        def li_recurse_end(self):
+                msg = _("Finished processing linked images.")
+                self.__msg(msg)
+
+        def __li_recurse_progress(self):
+                # display child progress output message and spinners
+                spinners = "".join([
+                        self.spinner_chars[i]
+                        for i in self.__lin_spinners
+                ])
+                msg = _("Child progress %s") % spinners
+                self.__msg(msg, newline=False)
+
+        def li_recurse(self, lic_running, done, pending):
+                assert len(lic_running) > 0
+
+                # initialize spinners for each child
+                self.__lin_list = sorted([
+                    lic.child_name for lic in lic_running
+                ])
+                self.__lin_spinners = list(
+                    itertools.repeat(0, len(self.__lin_list)))
 
-                try:
-                        print "%s" % msg, self.cr
-                        sys.stdout.flush()
-                except IOError, e:
-                        if e.errno == errno.EPIPE:
-                                raise PipeError, e
-                        raise
+                total = len(lic_running) + pending + done
+                running = [str(lin) for lin in self.__lin_list]
+                msg = _(
+                    "Linked Images: %d/%d done; %d working: %s") % \
+                    (done, total, len(running), " ".join(running))
+                msg = "%s%s" % (self.msg_prefix, msg)
+                self.__msg(msg)
+
+                # display child progress message
+                self.__li_recurse_progress()
+
+        def li_recurse_output(self, lin, stdout, stderr):
+                # nothing to display
+                if not stdout and not stderr:
+                        return
+
+                # don't display anything
+                if self.parsable_version is not None:
+                        return
+
+                msg = _("\nStart output from linked image: %s") % lin
+                self.__msg(msg)
+                if stdout:
+                        self.__msg(stdout, prefix=False, newline=False)
+                        self.needs_cr = False
+                if stderr:
+                        self.__msg(stderr, prefix=False)
+                msg = _("End output from linked image: %s") % lin
+                self.__msg(msg)
+
+        def li_recurse_progress(self, lin):
+                # find the index of the child that made progress
+                i = self.__lin_list.index(lin)
+
+                # update that child's spinner
+                self.__lin_spinners[i] = \
+                    (self.__lin_spinners[i] + 1) % len(self.spinner_chars)
+
+                # display child progress message
+                self.__li_recurse_progress()
 
         def ver_output(self):
                 try:
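
The spinner state in the fancy tracker is just one integer index per
running child; li_recurse_progress() advances only the index of the child
that reported progress and the whole row is redrawn.  A standalone sketch
of the same idea (the spinner_chars value is an assumption; the tracker
defines its own set):

    spinner_chars = "/-\\|"        # assumed character set

    lin_list = ["zone1", "zone2", "zone3"]
    lin_spinners = [0] * len(lin_list)

    def child_progress(lin):
            i = lin_list.index(lin)
            lin_spinners[i] = (lin_spinners[i] + 1) % len(spinner_chars)
            print "Child progress %s" % "".join(
                spinner_chars[j] for j in lin_spinners)

    child_progress("zone2")        # prints: Child progress /-/
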
--- a/src/modules/client/transport/transport.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/client/transport/transport.py	Mon Jul 11 13:49:50 2011 -0700
@@ -2306,7 +2306,7 @@
 
                 return mfile
 
-        def _action_cached(self, action, pub, in_hash=None):
+        def _action_cached(self, action, pub, in_hash=None, verify=True):
                 """If a file with the name action.hash is cached,
                 and if it has the same content hash as action.chash,
                 then return the path to the file.  If the file can't
@@ -2321,10 +2321,12 @@
                         hashval = in_hash
                 for cache in self.cfg.get_caches(pub=pub, readonly=True):
                         cache_path = cache.lookup(hashval)
+                        if not cache_path:
+                                continue
                         try:
-                                if cache_path:
+                                if verify:
                                         self._verify_content(action, cache_path)
-                                        return cache_path
+                                return cache_path
                         except tx.InvalidContentException:
                                 # If the content in the cache doesn't match the
                                 # hash of the action, verify will have already
@@ -2333,6 +2335,28 @@
                 return None
 
         @staticmethod
+        def _make_opener(cache_path):
+                def opener():
+                        f = open(cache_path, "rb")
+                        return f
+                return opener
+
+        def action_cached(self, fmri, action):
+                try:
+                        pub = self.cfg.get_publisher(fmri.publisher)
+                except apx.UnknownPublisher:
+                        # Allow publishers that don't exist in configuration
+                        # to be used so that if data exists in the cache for
+                        # them, the operation will still succeed.  This only
+                        # needs to be done here as multi_file_ni is only used
+                        # for publication tools.
+                        pub = publisher.Publisher(fmri.publisher)
+
+                # cache content has already been verified
+                return self._make_opener(self._action_cached(action, pub,
+                    verify=False))
+
+        @staticmethod
         def _verify_content(action, filepath):
                 """If action contains an attribute that has the compressed
                 hash, read the file specified in filepath and verify
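
_make_opener() returns a closure so that each action's data attribute is
bound to its own cache path at the moment the opener is created.  A sketch
of the pattern and the pitfall it avoids (paths are hypothetical):

    def make_opener(cache_path):
            # each call creates a new scope; 'cache_path' is captured
            # per-opener rather than shared across loop iterations
            def opener():
                    return open(cache_path, "rb")
            return opener

    openers = [make_opener(p) for p in ("/cache/a", "/cache/b")]
    # openers[0] opens /cache/a and openers[1] opens /cache/b; a bare
    # inner function referencing a loop variable would instead see
    # only the final value of that variable
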
@@ -2889,7 +2913,6 @@
                 cpath = self._transport._action_cached(action,
                     self.get_publisher())
                 if cpath:
-                        action.data = self._make_opener(cpath)
                         if self._progtrack:
                                 filesz = int(misc.get_pkg_otw_size(action))
                                 file_cnt = 1
@@ -2916,23 +2939,15 @@
 
                 self._hash.setdefault(hashval, []).append(item)
 
-        @staticmethod
-        def _make_opener(cache_path):
-                def opener():
-                        f = open(cache_path, "rb")
-                        return f
-                return opener
-
         def file_done(self, hashval, current_path):
                 """Tell MFile that the transfer completed successfully."""
 
-                self._make_openers(hashval, current_path)
+                self._update_dlstats(hashval, current_path)
                 self.del_hash(hashval)
 
-        def _make_openers(self, hashval, cache_path):
+        def _update_dlstats(self, hashval, cache_path):
                 """Find each action associated with the hash value hashval.
-                Create an opener that points to the cache file for the
-                action's data method."""
+                Update the download statistics for this file."""
 
                 totalsz = 0
                 nfiles = 0
@@ -2942,7 +2957,6 @@
                         nfiles += 1
                         bn = os.path.basename(cache_path)
                         if action.name != "signature" or action.hash == bn:
-                                action.data = self._make_opener(cache_path)
                                 totalsz += misc.get_pkg_otw_size(action)
                         else:
                                 totalsz += action.get_chain_csize(bn)
--- a/src/modules/facet.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/facet.py	Mon Jul 11 13:49:50 2011 -0700
@@ -33,11 +33,11 @@
 import types
 
 class Facets(dict):
-        # store information on facets; subclass dict 
+        # store information on facets; subclass dict
         # and maintain ordered list of keys sorted
         # by length.
 
-        # subclass __getitem_ so that queries w/ 
+        # subclass __getitem_ so that queries w/
         # actual facets find match
 
         def __init__(self, init=EmptyI):
@@ -47,21 +47,33 @@
                 for i in init:
                         self[i] = init[i]
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return dict(obj)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Update the state of this object using previously serialized
+                state obtained via getstate()."""
+                return Facets(init=state)
+
         def __repr__(self):
                 s =  "<"
                 s += ", ".join(["%s:%s" % (k, dict.__getitem__(self, k)) for k in self.__keylist])
                 s += ">"
 
                 return s
-                
-        def __setitem__(self, item, value):                
+
+        def __setitem__(self, item, value):
                 if not item.startswith("facet."):
                         raise KeyError, 'key must start with "facet".'
 
                 if not (value == True or value == False):
                         raise ValueError, "value must be boolean"
 
-                if item not in self: 
+                if item not in self:
                         self.__keylist.append(item)
                         self.__keylist.sort(cmp=lambda x, y: len(y) - len(x))
                 dict.__setitem__(self, item, value)
@@ -104,7 +116,7 @@
                 default = kwargs.get("default", None)
                 if args:
                         default = args[0]
-                return dict.pop(self, item, default) 
+                return dict.pop(self, item, default)
 
         def popitem(self):
                 popped = dict.popitem(self)
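
The new getstate()/fromstate() pair reduces a Facets object to a plain
dict, which is what lets it ride along in a JSON-serialized plan.  A small
round-trip sketch, assuming the class above (simplejson is what this
codebase already imports as json):

    import simplejson as json

    f = Facets()
    f["facet.doc"] = True
    f["facet.devel.debug"] = False

    state = json.dumps(Facets.getstate(f))      # plain JSON object
    f2 = Facets.fromstate(json.loads(state))
    assert dict(f) == dict(f2)
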
--- a/src/modules/fmri.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/fmri.py	Mon Jul 11 13:49:50 2011 -0700
@@ -164,6 +164,18 @@
 
                 self._hash = None
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return str(obj)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                return PkgFmri(state)
+
         def copy(self):
                 return PkgFmri(str(self))
 
--- a/src/modules/gui/misc_non_gui.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/gui/misc_non_gui.py	Mon Jul 11 13:49:50 2011 -0700
@@ -19,7 +19,7 @@
 #
 # CDDL HEADER END
 #
-# Copyright (c) 2008, 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2008, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import os
@@ -41,7 +41,7 @@
 
 # The current version of the Client API the PM, UM and
 # WebInstall GUIs have been tested against and are known to work with.
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 LOG_DIR = "/var/tmp"
 LOG_ERROR_EXT = "_error.log"
 LOG_INFO_EXT = "_info.log"
--- a/src/modules/lint/engine.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/lint/engine.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2010, 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2010, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import pkg.client.api
@@ -39,7 +39,7 @@
 import sys
 
 PKG_CLIENT_NAME = "pkglint"
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 pkg.client.global_settings.client_name = PKG_CLIENT_NAME
 
 class LintEngineException(Exception):
--- a/src/modules/manifest.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/manifest.py	Mon Jul 11 13:49:50 2011 -0700
@@ -42,7 +42,44 @@
 from pkg.misc import EmptyDict, EmptyI, expanddirs, PKG_FILE_MODE, PKG_DIR_MODE
 from pkg.actions.attribute import AttributeAction
 
-ManifestDifference = namedtuple("ManifestDifference", "added changed removed")
+class ManifestDifference(
+    namedtuple("ManifestDifference", "added changed removed")):
+
+        __slots__ = []
+
+        __state__desc = tuple([
+            [ ( actions.generic.NSG, actions.generic.NSG ) ],
+            [ ( actions.generic.NSG, actions.generic.NSG ) ],
+            [ ( actions.generic.NSG, actions.generic.NSG ) ],
+        ])
+
+        __state__commonize = frozenset([
+            actions.generic.NSG,
+        ])
+
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return misc.json_encode(ManifestDifference.__name__,
+                    tuple(obj),
+                    ManifestDifference.__state__desc,
+                    commonize=ManifestDifference.__state__commonize,
+                    je_state=je_state)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+
+                # decode serialized state into python objects
+                state = misc.json_decode(ManifestDifference.__name__,
+                    state,
+                    ManifestDifference.__state__desc,
+                    commonize=ManifestDifference.__state__commonize,
+                    jd_state=jd_state)
+
+                return ManifestDifference(*state)
 
 class Manifest(object):
         """A Manifest is the representation of the actions composing a specific
--- a/src/modules/misc.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/misc.py	Mon Jul 11 13:49:50 2011 -0700
@@ -22,28 +22,40 @@
 
 # Copyright (c) 2007, 2012, Oracle and/or its affiliates. All rights reserved.
 
+"""
+Misc utility functions used by the packaging system.
+"""
+
 import OpenSSL.crypto as osc
 import cStringIO
 import calendar
+import collections
 import datetime
 import errno
 import getopt
 import hashlib
+import itertools
 import locale
 import os
 import platform
 import re
+import resource
 import shutil
 import simplejson as json
 import socket
-from stat import *
 import struct
 import sys
+import threading
 import time
+import traceback
 import urllib
 import urlparse
 import zlib
 
+from stat import S_IFMT, S_IMODE, S_IRGRP, S_IROTH, S_IRUSR, S_IRWXU, \
+    S_ISBLK, S_ISCHR, S_ISDIR, S_ISFIFO, S_ISLNK, S_ISREG, S_ISSOCK, \
+    S_IWUSR, S_IXGRP, S_IXOTH
+
 import pkg.client.api_errors as api_errors
 import pkg.portable as portable
 
@@ -60,8 +72,10 @@
 SIGNATURE_POLICY = "signature-policy"
 
 # Bug URI Constants (deprecated)
+# Line too long; pylint: disable-msg=C0301
 BUG_URI_CLI = "https://defect.opensolaris.org/bz/enter_bug.cgi?product=pkg&component=cli"
 BUG_URI_GUI = "https://defect.opensolaris.org/bz/enter_bug.cgi?product=pkg&component=gui"
+# pylint: enable-msg=C0301
 
 # Traceback message.
 def get_traceback_message():
@@ -84,12 +98,12 @@
 
 def time_to_timestamp(t):
         """convert seconds since epoch to %Y%m%dT%H%M%SZ format"""
-        # XXX optimize?
+        # XXX optimize?; pylint: disable-msg=W0511
         return time.strftime("%Y%m%dT%H%M%SZ", time.gmtime(t))
 
 def timestamp_to_time(ts):
         """convert %Y%m%dT%H%M%SZ format to seconds since epoch"""
-        # XXX optimize?
+        # XXX optimize?; pylint: disable-msg=W0511
         return calendar.timegm(time.strptime(ts, "%Y%m%dT%H%M%SZ"))
 
 def timestamp_to_datetime(ts):
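
The two helpers above are exact inverses over whole seconds; the timestamp
is the compact form used in package FMRIs.  For example, the date of this
changeset round-trips cleanly:

    t = timestamp_to_time("20110711T204950Z")
    assert time_to_timestamp(t) == "20110711T204950Z"
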
@@ -201,6 +215,8 @@
         return out
 
 def url_affix_trailing_slash(u):
+        """if 'u' donesn't have a trailing '/', append one."""
+
         if u[-1] != '/':
                 u = u + '/'
 
@@ -211,6 +227,7 @@
     portable.util.get_os_release(), platform.version())
 
 def user_agent_str(img, client_name):
+        """Return a string that can use to identify the client."""
 
         if not img or img.type is None:
                 imgtype = IMG_NONE
@@ -259,8 +276,7 @@
                 return False
 
         if o[0] == "file":
-                scheme, netloc, path, params, query, fragment = \
-                    urlparse.urlparse(url, "file", allow_fragments=0)
+                path = urlparse.urlparse(url, "file", allow_fragments=0)[2]
                 path = urllib.url2pathname(path)
                 if not os.path.abspath(path):
                         return False
@@ -268,7 +284,7 @@
                 return True
 
         # Next verify that the network location is valid
-        host, port = urllib.splitport(o[1])
+        host = urllib.splitport(o[1])[0]
 
         if not host or _invalid_host_chars.match(host):
                 return False
@@ -281,11 +297,10 @@
 def gunzip_from_stream(gz, outfile):
         """Decompress a gzipped input stream into an output stream.
 
-        The argument 'gz' is an input stream of a gzipped file (XXX make it do
-        either a gzipped file or raw zlib compressed data), and 'outfile' is is
-        an output stream.  gunzip_from_stream() decompresses data from 'gz' and
-        writes it to 'outfile', and returns the hexadecimal SHA-1 sum of that
-        data.
+        The argument 'gz' is an input stream of a gzipped file and 'outfile'
+        is an output stream.  gunzip_from_stream() decompresses data from
+        'gz' and writes it to 'outfile', and returns the hexadecimal SHA-1 sum
+        of that data.
         """
 
         FHCRC = 2
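
A usage sketch for gunzip_from_stream(); the input path is hypothetical,
and the returned value is the SHA-1 hex digest of the decompressed data:

    import cStringIO

    out = cStringIO.StringIO()
    gz = open("/var/tmp/example.gz", "rb")      # hypothetical input
    sha1_hex = gunzip_from_stream(gz, out)
    data = out.getvalue()
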
@@ -347,6 +362,7 @@
         """ Pipe exception. """
 
         def __init__(self, args=None):
+                Exception.__init__(self)
                 self._args = args
 
 def msg(*text):
@@ -398,9 +414,9 @@
         their use is delayed by the program."""
         return message
 
-def bytes_to_str(bytes, format=None):
+def bytes_to_str(n, fmt=None):
         """Returns a human-formatted string representing the number of bytes
-        in the largest unit possible.  If provided, 'format' should be a string
+        in the largest unit possible.  If provided, 'fmt' should be a string
         which can be formatted with a dictionary containing a float 'num' and
         string 'unit'."""
 
@@ -415,23 +431,24 @@
         ]
 
         for uom, limit in units:
-                if uom != _("EB") and bytes >= limit:
+                if uom != _("EB") and n >= limit:
                         # Try the next largest unit of measure unless this is
                         # the largest or if the byte size is within the current
                         # unit of measure's range.
                         continue
                 else:
-                        if not format:
-                                format = "%(num).2f %(unit)s"
-                        return format % {
-                            "num": round(bytes / float(limit / 2**10), 2),
+                        if not fmt:
+                                fmt = "%(num).2f %(unit)s"
+                        return fmt % {
+                            "num": round(n / float(limit / 2**10), 2),
                             "unit": uom
                         }
 
 def get_rel_path(request, uri, pub=None):
-        # Calculate the depth of the current request path relative to our base
-        # uri. path_info always ends with a '/' -- so ignore it when
-        # calculating depth.
+        """Calculate the depth of the current request path relative to our
+        base uri. path_info always ends with a '/' -- so ignore it when
+        calculating depth."""
+
         rpath = request.path_info
         if pub:
                 rpath = rpath.replace("/%s/" % pub, "/")
@@ -552,28 +569,195 @@
         cfile.close()
         return csize, chash
 
+class ProcFS(object):
+        """This class is used as an interface to procfs."""
+
+        _ctype_formats = {
+            # This dictionary maps basic c types into python format characters
+            # that can be used with struct.unpack().  The format of this
+            # dictionary is:
+            #    <ctype>: (<repeat count>, <format char>)
+
+            # basic c types (repeat count should always be 1)
+            # char[] is used to encode character arrays
+            "char":        (1,  "c"),
+            "char[]":      (1,  "s"),
+            "int":         (1,  "i"),
+            "long":        (1,  "l"),
+            "uintptr_t":   (1,  "I"),
+            "ushort_t":    (1,  "H"),
+
+            # other simple types (repeat count should always be 1)
+            "ctid_t":      (1,  "i"), # ctid_t -> id_t -> int
+            "dev_t":       (1,  "L"), # dev_t -> ulong_t
+            "gid_t":       (1,  "I"), # gid_t -> uid_t -> uint_t
+            "pid_t":       (1,  "i"), # pid_t -> int
+            "poolid_t":    (1,  "i"), # poolid_t -> id_t -> int
+            "projid_t":    (1,  "i"), # projid_t -> id_t -> int
+            "size_t":      (1,  "L"), # size_t -> ulong_t
+            "taskid_t":    (1,  "i"), # taskid_t -> id_t -> int
+            "time_t":      (1,  "l"), # time_t -> long
+            "uid_t":       (1,  "I"), # uid_t -> uint_t
+            "zoneid_t":    (1,  "i"), # zoneid_t -> id_t -> int
+            "id_t":        (1,  "i"), # id_t -> int
+
+            # structures must be represented as character arrays
+            "timestruc_t": (8,  "s"), # sizeof (timestruc_t) = 8
+        }
+
+        _timestruct_desc = [
+            # this list describes a timestruc_t structure
+            # the entry format is (<ctype>, <repeat count>, <name>)
+            ("time_t", 1, "tv_sec"),
+            ("long",   1, "tv_nsec"),
+        ]
+
+        _psinfo_desc = [
+            # this list describes a psinfo_t structure
+            # the entry format is: (<ctype>, <repeat count>, <name>)
+            ("int",         1,  "pr_flag"),
+            ("int",         1,  "pr_nlwp"),
+            ("pid_t",       1,  "pr_pid"),
+            ("pid_t",       1,  "pr_ppid"),
+            ("pid_t",       1,  "pr_pgid"),
+            ("pid_t",       1,  "pr_sid"),
+            ("uid_t",       1,  "pr_uid"),
+            ("uid_t",       1,  "pr_euid"),
+            ("gid_t",       1,  "pr_gid"),
+            ("gid_t",       1,  "pr_egid"),
+            ("uintptr_t",   1,  "pr_addr"),
+            ("size_t",      1,  "pr_size"),
+            ("size_t",      1,  "pr_rssize"),
+            ("size_t",      1,  "pr_pad1"),
+            ("dev_t",       1,  "pr_ttydev"),
+            ("ushort_t",    1,  "pr_pctcpu"),
+            ("ushort_t",    1,  "pr_pctmem"),
+            ("timestruc_t", 1,  "pr_start"),
+            ("timestruc_t", 1,  "pr_time"),
+            ("timestruc_t", 1,  "pr_ctime"),
+            ("char[]",      16, "pr_fname"),
+            ("char[]",      80, "pr_psargs"),
+            ("int",         1,  "pr_wstat"),
+            ("int",         1,  "pr_argc"),
+            ("uintptr_t",   1,  "pr_argv"),
+            ("uintptr_t",   1,  "pr_envp"),
+            ("char",        1,  "pr_dmodel"),
+            ("char[]",      3,  "pr_pad2"),
+            ("taskid_t",    1,  "pr_taskid"),
+            ("projid_t",    1,  "pr_projid"),
+            ("int",         1,  "pr_nzomb"),
+            ("poolid_t",    1,  "pr_poolid"),
+            ("zoneid_t",    1,  "pr_zoneid"),
+            ("id_t",        1,  "pr_contract"),
+            ("int",         1,  "pr_filler"),
+        ]
+
+        _struct_descriptions = {
+            # this list contains all the known structure description lists
+            # the entry format is: <structure name>: \
+            #    [ <description>, <format string>, <namedtuple> ]
+            #
+            # Note that <format string> and <namedtuple> should be assigned
+            # None in this table, and then they will get pre-populated
+            # automatically when this class is instantiated
+            #
+            "psinfo_t":    [_psinfo_desc, None, None],
+            "timestruc_t": [_timestruct_desc, None, None],
+        }
+
+        # fill in <format string> and <namedtuple> in _struct_descriptions
+        for struct_name, v in _struct_descriptions.iteritems():
+                desc = v[0]
+
+                # update _struct_descriptions with a format string
+                v[1] = ""
+                for ctype, count1, name in desc:
+                        count2, fmt_char = _ctype_formats[ctype]
+                        v[1] = v[1] + str(count1 * count2) + fmt_char
+
+                # update _struct_descriptions with a named tuple
+                v[2] = collections.namedtuple(struct_name,
+                    [ i[2] for i in desc ])
+
+        @staticmethod
+        def _struct_unpack(data, name):
+                """Unpack 'data' using struct.unpack().  'name' is the name of
+                the data we're unpacking and is used to lookup a description
+                of the data (which in turn is used to build a format string to
+                decode the data)."""
+
+                # lookup the description of the data to unpack
+                desc, fmt, nt = ProcFS._struct_descriptions[name]
+
+                # unpack the data into a list
+                rv = list(struct.unpack(fmt, data))
+
+                # check for any nested data that needs unpacking
+                for index, v in enumerate(desc):
+                        ctype = v[0]
+                        if ctype not in ProcFS._struct_descriptions:
+                                continue
+                        rv[index] = ProcFS._struct_unpack(rv[index], ctype)
+
+                # return the data in a named tuple
+                return nt(*rv)
+
+        @staticmethod
+        def psinfo():
+                """Read the psinfo file and return its contents."""
+
+                # This works only on Solaris, in 32-bit mode.  It may not work
+                # on older or newer versions than 5.11.  Ideally, we would use
+                # libproc, or check sbrk(0), but this is expedient.  In most
+                # cases failure will raise an exception and we'll fail safe
+                # (there's a small chance the data will decode, but
+                # incorrectly).
+                psinfo_size = 232
+                try:
+                        psinfo_data = file("/proc/self/psinfo").read(
+                            psinfo_size)
+                # Catch "Exception"; pylint: disable-msg=W0703
+                except Exception:
+                        return None
+
+                # make sure we got the expected amount of data, otherwise
+                # unpacking it will fail.
+                if len(psinfo_data) != psinfo_size:
+                        return None
+
+                return ProcFS._struct_unpack(psinfo_data, "psinfo_t")
+
+
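
The descriptor tables above are compiled into struct.unpack() format
strings by concatenating '<repeat count><format char>' per field; nested
structures (encoded as character arrays) are unpacked recursively.  A
standalone sketch of the compilation step, using the timestruc_t
descriptor:

    import struct

    # mirrors ProcFS._ctype_formats / _timestruct_desc above
    ctype_formats = {"time_t": (1, "l"), "long": (1, "l")}
    timestruct_desc = [("time_t", 1, "tv_sec"), ("long", 1, "tv_nsec")]

    fmt = ""
    for ctype, count, name in timestruct_desc:
            count2, fmt_char = ctype_formats[ctype]
            fmt += str(count * count2) + fmt_char
    # fmt == "1l1l": two native longs (8 bytes in 32-bit mode)
    print struct.unpack(fmt, struct.pack(fmt, 1310420990, 0))
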
 def __getvmusage():
         """Return the amount of virtual memory in bytes currently in use."""
 
-        # This works only on Solaris, in 32-bit mode.  It may not work on older
-        # or newer versions than 5.11.  Ideally, we would use libproc, or check
-        # sbrk(0), but this is expedient.  In most cases (there's a small chance
-        # the file will decode, but incorrectly), failure will raise an
-        # exception, and we'll fail safe.
-        try:
-                # Read just the psinfo_t, not the tacked-on lwpsinfo_t
-                psinfo_arr = file("/proc/self/psinfo").read(232)
-                psinfo = struct.unpack("6i5I4LHH6L16s80siiIIc3x7i", psinfo_arr)
-                vsz = psinfo[11] * 1024
-        except Exception:
-                vsz = None
-
-        return vsz
+        psinfo = ProcFS.psinfo()
+        if psinfo is None:
+                return None
+        return psinfo.pr_size * 1024
+
+def _prstart():
+        """Return the process start time expressed as a floating point number
+        in seconds since the epoch, in UTC."""
+        psinfo = ProcFS.psinfo()
+        if psinfo is None:
+                return 0.0
+        return psinfo.pr_start.tv_sec + (float(psinfo.pr_start.tv_nsec) / 1e9)
 
 def out_of_memory():
         """Return an out of memory message, for use in a MemoryError handler."""
 
-        vsz = bytes_to_str(__getvmusage(), format="%(num).0f%(unit)s")
+        # figure out how much memory we're using (note that we could run out
+        # of memory while doing this, so check for that).
+        vsz = None
+        try:
+                vmusage = __getvmusage()
+                if vmusage is not None:
+                        vsz = bytes_to_str(vmusage, fmt="%(num).0f%(unit)s")
+        except (MemoryError, EnvironmentError), __e:
+                if isinstance(__e, EnvironmentError) and \
+                    __e.errno != errno.ENOMEM:
+                        raise
 
         if vsz is not None:
                 error = """\
@@ -592,10 +776,14 @@
         return _(error) % locals()
 
 
-# ImmutableDict and EmptyI for argument defaults
+# EmptyI for argument defaults
 EmptyI = tuple()
 
+# ImmutableDict for argument defaults
 class ImmutableDict(dict):
+        # Missing docstring; pylint: disable-msg=C0111
+        # Unused argument; pylint: disable-msg=W0613
+
         def __init__(self, default=EmptyI):
                 dict.__init__(self, default)
 
@@ -623,12 +811,15 @@
         def clear(self):
                 self.__oops()
 
-        def __oops(self):
+        @staticmethod
+        def __oops():
                 raise TypeError, "Item assignment to ImmutableDict"
 
 # A way to have a dictionary be a property
 
 class DictProperty(object):
+        # Missing docstring; pylint: disable-msg=C0111
+
         class __InternalProxy(object):
                 def __init__(self, obj, fget, fset, fdel, iteritems, keys,
                     values, iterator, fgetdefault, fsetdefault, update, pop):
@@ -719,6 +910,8 @@
                 self.__pop = pop
 
         def __get__(self, obj, objtype=None):
+                # Unused argument; pylint: disable-msg=W0613
+
                 if obj is None:
                         return self
                 return self.__InternalProxy(obj, self.__fget, self.__fset,
@@ -801,7 +994,7 @@
         """
 
         res = ""
-        for i, p in enumerate(s):
+        for p in s:
                 p = ord(p)
                 a = char_list[p % 16]
                 p = p/16
@@ -900,11 +1093,13 @@
         lock object present.  The object has a held value, that is used
         for _is_owned.  This is informational and doesn't actually
         provide mutual exclusion in any way whatsoever."""
+        # Missing docstring; pylint: disable-msg=C0111
 
         def __init__(self):
                 self.held = False
 
         def acquire(self, blocking=1):
+                # Unused argument; pylint: disable-msg=W0613
                 self.held = True
                 return True
 
@@ -924,16 +1119,16 @@
         """Set __metaclass__ to Singleton to create a singleton.
         See http://en.wikipedia.org/wiki/Singleton_pattern """
 
-        def __init__(self, name, bases, dictionary):
-                super(Singleton, self).__init__(name, bases, dictionary)
-                self.instance = None
-
-        def __call__(self, *args, **kw):
-                if self.instance is None:
-                        self.instance = super(Singleton, self).__call__(*args,
+        def __init__(mcs, name, bases, dictionary):
+                super(Singleton, mcs).__init__(name, bases, dictionary)
+                mcs.instance = None
+
+        def __call__(mcs, *args, **kw):
+                if mcs.instance is None:
+                        mcs.instance = super(Singleton, mcs).__call__(*args,
                             **kw)
 
-                return self.instance
+                return mcs.instance
 
 
 EmptyDict = ImmutableDict()
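
A usage sketch for the Singleton metaclass (the _Config class is
hypothetical); every instantiation after the first returns the cached
instance:

    class _Config(object):
            __metaclass__ = Singleton

            def __init__(self):
                    self.loaded = False

    a = _Config()
    b = _Config()
    assert a is b
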
@@ -1151,6 +1346,32 @@
 
         return cmdpath
 
+def api_pkgcmd():
+        """When running a pkg(1) command from within a packaging module, try
+        to use the same pkg(1) path as our current invocation.  If we're
+        running pkg(1) from some other command (like the gui updater) then
+        assume that pkg(1) is in the default path."""
+
+        pkg_bin = "pkg"
+        cmdpath = api_cmdpath()
+        if cmdpath and os.path.basename(cmdpath) == "pkg":
+                try:
+                        # check if the currently running pkg command
+                        # exists and is accessible.
+                        os.stat(cmdpath)
+                        pkg_bin = cmdpath
+                except OSError:
+                        pass
+
+        pkg_cmd = [pkg_bin]
+
+        # propagate debug options
+        for k, v in DebugValues.iteritems():
+                pkg_cmd.append("-D")
+                pkg_cmd.append("%s=%s" % (k, v))
+
+        return pkg_cmd
+
 def liveroot():
         """Return path to the current live root image, i.e. the image
         that we are running from."""
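
Given the -D propagation loop in api_pkgcmd() above, an invocation with a
debug value set would produce a command prefix like the following (the
debug key is borrowed from elsewhere in this changeset; the local dict
stands in for DebugValues):

    pkg_cmd = ["pkg"]
    debug_values = {"plandesc_validate": "1"}
    for k, v in debug_values.iteritems():
            pkg_cmd.append("-D")
            pkg_cmd.append("%s=%s" % (k, v))
    # pkg_cmd == ["pkg", "-D", "plandesc_validate=1"]
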
@@ -1180,6 +1401,7 @@
                     for fname in fnames
                 )
         except EnvironmentError, e:
+                # Access to protected member; pylint: disable-msg=W0212
                 raise api_errors._convert_error(e)
 
 def get_listing(desired_field_order, field_data, field_values, out_format,
@@ -1223,6 +1445,7 @@
         metacharacters or embedded control sequences should be escaped
         before display.  (If applicable to the specified output format.)
         """
+        # Missing docstring; pylint: disable-msg=C0111
 
         # Custom sort function for preserving field ordering
         def sort_fields(one, two):
@@ -1316,7 +1539,7 @@
                         if isinstance(v, (list, tuple, set, frozenset)):
                                 return [fmt_val(e) for e in v]
                         if isinstance(v, dict):
-                                for k, e in v.items():
+                                for k, e in v.iteritems():
                                         v[k] = fmt_val(e)
                                 return v
                         return str(v)
@@ -1359,3 +1582,769 @@
                 output += "\n"
 
         return output
+
+def truncate_file(f, size=0):
+        """Truncate the specified file."""
+        try:
+                f.truncate(size)
+        except IOError:
+                pass
+        except OSError, e:
+                # Access to protected member; pylint: disable-msg=W0212
+                raise api_errors._convert_error(e)
+
+def flush_output():
+        """flush stdout and stderr"""
+
+        try:
+                sys.stdout.flush()
+        except IOError:
+                pass
+        except OSError, e:
+                # Access to protected member; pylint: disable-msg=W0212
+                raise api_errors._convert_error(e)
+
+        try:
+                sys.stderr.flush()
+        except IOError:
+                pass
+        except OSError, e:
+                # Access to protected member; pylint: disable-msg=W0212
+                raise api_errors._convert_error(e)
+
+# valid json types
+json_types_immediates = (bool, float, int, long, str, type(None), unicode)
+json_types_collections = (dict, list)
+json_types = tuple(json_types_immediates + json_types_collections)
+json_debug = False
+
+def json_encode(name, data, desc, commonize=None, je_state=None):
+        """A generic json encoder.
+
+        'name' a descriptive name of the data we're encoding.  If encoding a
+        class, this would normally be the class name.  'name' is used when
+        displaying errors to identify the data that caused the errors.
+
+        'data' data to encode.
+
+        'desc' a description of the data to encode.
+
+        'commonize' a list of objects that should be cached by reference.
+        this is used when encoding objects which may contain multiple
+        references to a single object.  In this case, each reference will be
+        replaced with a unique id, and the object that was pointed to will
+        only be encoded once.  This ensures that upon decoding we can restore
+        the original object and all references to it."""
+
+        # debugging
+        if je_state is None and json_debug:
+                print >> sys.stderr, "json_encode name: ", name
+                print >> sys.stderr, "json_encode data: ", data
+
+        # we don't encode None
+        if data is None:
+                return None
+
+        # initialize parameters to default
+        if commonize is None:
+                commonize = frozenset()
+
+        if je_state is None:
+                # this is the first invocation of this function, so "data"
+                # points to the top-level object that we want to encode.  this
+                # means that if we're commonizing any objects we should
+                # finalize the object cache when we're done encoding this
+                # object.
+                finish = True
+
+                # initialize recursion state
+                obj_id = [0]
+                obj_cache = {}
+                je_state = [obj_id, obj_cache, commonize]
+        else:
+                # we're being invoked recursively, do not finalize the object
+                # cache (since that will be done by a previous invocation of
+                # this function).
+                finish = False
+
+                # get recursion state
+                obj_id, obj_cache, commonize_old = je_state
+
+                # check if we're changing the set of objects to commonize
+                if not commonize:
+                        commonize = commonize_old
+                else:
+                        # update the set of objects to commonize
+                        # make a copy so we don't update our callers state
+                        commonize = frozenset(commonize_old | commonize)
+                        je_state = [obj_id, obj_cache, commonize]
+
+        # verify state
+        assert type(name) == str
+        assert type(obj_cache) == dict
+        assert type(obj_id) == list and len(obj_id) == 1 and obj_id[0] >= 0
+        assert type(commonize) == frozenset
+        assert type(je_state) == list and len(je_state) == 3
+
+        def je_return(name, data, finish, je_state):
+                """if necessary, finalize the object cache and merge it into
+                the state data.
+
+                while encoding, the object cache is a dictionary which
+                contains tuples consisting of an assigned unique object id
+                (obj_id) and an encoded object.  these tuples are hashed by
+                the python object id of the original un-encoded python object.
+                so the hash contains:
+
+                       { id(<obj>): ( <obj_id>, <obj_state> ) }
+
+                when we finish the object cache we update it so that it
+                contains just encoded objects hashed by their assigned object
+                id (obj_id).  so the hash contains:
+
+                       { str(<obj_id>): <obj_state> }
+
+                then we merge the state data and object cache into a single
+                dictionary and return that.
+                """
+                # Unused argument; pylint: disable-msg=W0613
+
+                if not finish:
+                        return data
+
+                # json.dump converts integer dictionary keys into strings, so
+                # we'll convert the object id keys (which are integers) into
+                # strings (that way we're encoder/decoder independent).
+                obj_cache = je_state[1]
+                obj_cache2 = {}
+                for obj_id, obj_state in obj_cache.itervalues():
+                        obj_cache2[str(obj_id)] = obj_state
+
+                data = { "json_state": data, "json_objects": obj_cache2 }
+
+                if DebugValues["plandesc_validate"]:
+                        json_validate(name, data)
+
+                # debugging
+                if json_debug:
+                        print >> sys.stderr, "json_encode finished name: ", name
+                        print >> sys.stderr, "json_encode finished data: ", data
+
+                return data
+
+        # check if the description is a type object
+        if isinstance(desc, type):
+                desc_type = desc
+        else:
+                # get the expected data type from the description
+                desc_type = type(desc)
+
+        # get the data type
+        data_type = getattr(data, "__metaclass__", type(data))
+
+        # sanity check that the data type matches the description
+        assert desc_type == data_type, \
+            "unexpected %s for %s, expected: %s, value: %s" % \
+                (data_type, name, desc_type, data)
+
+        # we don't need to do anything for basic types
+        if desc_type in json_types_immediates:
+                return je_return(name, data, finish, je_state)
+
+        # encode elements nested in a dictionary like object
+        # return elements in a dictionary
+        if desc_type in (dict, collections.defaultdict):
+                # we always return a new dictionary
+                rv = {}
+
+                # check if we're not encoding nested elements
+                if len(desc) == 0:
+                        rv.update(data)
+                        return je_return(name, rv, finish, je_state)
+
+                # look up the first descriptor to see if we have a
+                # generic type description.
+                desc_k, desc_v = desc.items()[0]
+
+                # if the key in the first type pair is a type then we
+                # have a generic type description that applies to all
+                # keys and values in the dictionary.
+                # check if the description is a type object
+                if isinstance(desc_k, type):
+                        # there can only be one generic type desc
+                        assert len(desc) == 1
+
+                        # encode all key / value pairs
+                        for k, v in data.iteritems():
+                                # encode the key
+                                name2 = "%s[%s].key()" % (name, desc_k)
+                                k2 = json_encode(name2, k, desc_k,
+                                    je_state=je_state)
+
+                                # encode the value
+                                name2 = "%s[%s].value()" % (name, desc_k)
+                                v2 = json_encode(name2, v, desc_v,
+                                    je_state=je_state)
+
+                                # save the result
+                                rv[k2] = v2
+                        return je_return(name, rv, finish, je_state)
+
+                # we have element specific value type descriptions.
+                # encode the specific values.
+                rv.update(data)
+                for desc_k, desc_v in desc.iteritems():
+                        # check for the specific key
+                        if desc_k not in rv:
+                                continue
+
+                        # encode the value
+                        name2 = "%s[%s].value()" % (name, desc_k)
+                        rv[desc_k] = json_encode(name2, rv[desc_k], desc_v,
+                            je_state=je_state)
+                return je_return(name, rv, finish, je_state)
+
+        # encode elements nested in a list like object
+        # return elements in a list
+        if desc_type in (tuple, list, set, frozenset):
+
+                # we always return a new list
+                rv = []
+
+                # check for an empty list since we use izip_longest
+                if len(data) == 0:
+                        return je_return(name, rv, finish, je_state)
+
+                # check if we're not encoding nested elements
+                if len(desc) == 0:
+                        rv.extend(data)
+                        return je_return(name, rv, finish, je_state)
+
+                # don't accidentally generate data via izip_longest
+                assert len(data) >= len(desc), \
+                    "%d >= %d" % (len(data), len(desc))
+
+                i = 0
+                for data2, desc2 in itertools.izip_longest(data, desc,
+                    fillvalue=list(desc)[0]):
+                        name2 = "%s[%i]" % (name, i)
+                        i += 1
+                        rv.append(json_encode(name2, data2, desc2,
+                            je_state=je_state))
+                return je_return(name, rv, finish, je_state)
+
+        # if we're commonizing this object and it's already been encoded then
+        # just return its encoded object id.
+        if desc_type in commonize and id(data) in obj_cache:
+                rv = obj_cache[id(data)][0]
+                return je_return(name, rv, finish, je_state)
+
+        # find an encoder for this class, which should be:
+        #     <class>.getstate(obj, je_state)
+        encoder = getattr(desc_type, "getstate", None)
+        assert encoder is not None, "no json encoder for: %s" % desc_type
+
+        # encode the data
+        rv = encoder(data, je_state)
+        assert rv is not None, "json encoder returned none for: %s" % desc_type
+
+        # if we're commonizing this object, then assign it an object id and
+        # save that object id and the encoded object into the object cache
+        # (which is indexed by the python id for the object).
+        if desc_type in commonize:
+                obj_cache[id(data)] = (obj_id[0], rv)
+                rv = obj_id[0]
+                obj_id[0] += 1
+
+        # return the encoded element
+        return je_return(name, rv, finish, je_state)
+
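+# A brief illustrative sketch (not part of this changeset) of the two 'desc'
+# conventions json_encode() accepts for dictionaries; 'v' stands for a
+# hypothetical pkg.version.Version instance:
+#
+#     # generic description: all keys are strings, all values are ints
+#     state = json_encode("counts", {"a": 1, "b": 2}, { str: int })
+#
+#     # element specific description: only "ver" needs a custom encoder
+#     state = json_encode("obj", {"ver": v, "n": 5},
+#         { "ver": pkg.version.Version })
+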
+def json_decode(name, data, desc, commonize=None, jd_state=None):
+        """A generic json decoder.
+
+        'name' a descriptive name of the data.  (Used to identify unexpected
+        data errors.)
+
+        'data' data to decode.
+
+        'desc' a programmatic description of data types."""
+
+        # debugging
+        if jd_state is None and json_debug:
+                print >> sys.stderr, "json_decode name: ", name
+                print >> sys.stderr, "json_decode data: ", data
+
+        # we don't decode None
+        if data is None:
+                return data
+
+        # initialize parameters to default
+        if commonize is None:
+                commonize = frozenset()
+
+        if jd_state is None:
+                # this is the first invocation of this function, so when we
+                # return we're done decoding data.
+                finish = True
+
+                # first time here, initialize recursion state
+                if not commonize:
+                        # no common state
+                        obj_cache = {}
+                else:
+                        # load commonized state
+                        obj_cache = data["json_objects"]
+                        data = data["json_state"]
+                jd_state = [obj_cache, commonize]
+        else:
+                # we're being invoked recursively.
+                finish = False
+
+                obj_cache, commonize_old = jd_state
+
+                # check if this is the first object using commonization
+                if not commonize_old and commonize:
+                        obj_cache = data["json_objects"]
+                        data = data["json_state"]
+
+                # merge in any new commonize requests
+                je_state_changed = False
+
+                # check if we're updating the set of objects to commonize
+                if not commonize:
+                        commonize = commonize_old
+                else:
+                        # update the set of objects to commonize
+                        # make a copy so we don't update our callers state.
+                        commonize = frozenset(commonize_old | commonize)
+                        je_state_changed = True
+
+                if je_state_changed:
+                        jd_state = [obj_cache, commonize]
+
+        # verify state
+        assert type(name) == str, "type(name) == %s" % type(name)
+        assert type(obj_cache) == dict
+        assert type(commonize) == frozenset
+        assert type(jd_state) == list and len(jd_state) == 2
+
+        def jd_return(name, data, desc, finish, jd_state):
+                """Check if we're done decoding data."""
+                # Unused argument; pylint: disable-msg=W0613
+
+                # check if the description is a type object
+                if isinstance(desc, type):
+                        desc_type = desc
+                else:
+                        # get the expected data type from the description
+                        desc_type = type(desc)
+
+                # get the data type
+                data_type = getattr(data, "__metaclass__", type(data))
+
+                # sanity check that the data type matches the description
+                assert desc_type == data_type, \
+                    "unexpected %s for %s, expected: %s, value: %s" % \
+                        (data_type, name, desc_type, data)
+
+                if not finish:
+                        return data
+
+                # debugging
+                if json_debug:
+                        print >> sys.stderr, "json_decode finished name: ", name
+                        print >> sys.stderr, "json_decode finished data: ", data
+                return data
+
+        # check if the description is a type object
+        if isinstance(desc, type):
+                desc_type = desc
+        else:
+                # get the expected data type from the description
+                desc_type = type(desc)
+
+        # we don't need to do anything for basic types
+        if desc_type in json_types_immediates:
+                return jd_return(name, data, desc, finish, jd_state)
+
+        # decode elements nested in a dictionary
+        # return elements in the specified dictionary like object
+        if isinstance(desc, dict):
+
+                # allocate the return object.  we don't just use
+                # type(desc) because that won't work for things like
+                # collections.defaultdict types.
+                rv = desc.copy()
+                rv.clear()
+
+                # check if we're not decoding nested elements
+                if len(desc) == 0:
+                        rv.update(data)
+                        return jd_return(name, rv, desc, finish, jd_state)
+
+                # look up the first descriptor to see if we have a
+                # generic type description.
+                desc_k, desc_v = desc.items()[0]
+
+                # if the key in the descriptor is a type then we have
+                # a generic type description that applies to all keys
+                # and values in the dictionary.
+                # check if the description is a type object
+                if isinstance(desc_k, type):
+                        # there can only be one generic type desc
+                        assert len(desc) == 1
+
+                        # decode all key / value pairs
+                        for k, v in data.iteritems():
+                                # decode the key
+                                name2 = "%s[%s].key()" % (name, desc_k)
+                                k2 = json_decode(name2, k, desc_k,
+                                    jd_state=jd_state)
+
+                                # decode the value
+                                name2 = "%s[%s].value()" % (name, desc_k)
+                                v2 = json_decode(name2, v, desc_v,
+                                    jd_state=jd_state)
+
+                                # save the result
+                                rv[k2] = v2
+                        return jd_return(name, rv, desc, finish, jd_state)
+
+                # we have element specific value type descriptions.
+                # copy all data and then decode the specific values
+                rv.update(data)
+                for desc_k, desc_v in desc.iteritems():
+                        # check for the specific key
+                        if desc_k not in rv:
+                                continue
+
+                        # decode the value
+                        name2 = "%s[%s].value()" % (name, desc_k)
+                        rv[desc_k] = json_decode(name2, rv[desc_k],
+                            desc_v, jd_state=jd_state)
+                return jd_return(name, rv, desc, finish, jd_state)
+
+        # decode elements nested in a list
+        # return elements in the specified list like object
+        if isinstance(desc, (tuple, list, set, frozenset)):
+                # get the return type
+                rvtype = type(desc)
+
+                # check for an empty list since we use izip_longest
+                if len(data) == 0:
+                        rv = rvtype([])
+                        return jd_return(name, rv, desc, finish, jd_state)
+
+                # check if we're not decoding nested elements
+                if len(desc) == 0:
+                        rv = rvtype(data)
+                        return jd_return(name, rv, desc, finish, jd_state)
+
+                # don't accidentally generate data via izip_longest
+                assert len(data) >= len(desc), \
+                    "%d >= %d" % (len(data), len(desc))
+
+                rv = []
+                i = 0
+                for data2, desc2 in itertools.izip_longest(data, desc,
+                    fillvalue=list(desc)[0]):
+                        name2 = "%s[%i]" % (name, i)
+                        i += 1
+                        rv.append(json_decode(name2, data2, desc2,
+                            jd_state=jd_state))
+                rv = rvtype(rv)
+                return jd_return(name, rv, desc, finish, jd_state)
+
+        # find a decoder for this data, which should be:
+        #     <class>.fromstate(state, jd_state)
+        decoder = getattr(desc_type, "fromstate", None)
+        assert decoder is not None, "no json decoder for: %s" % desc_type
+
+        # if this object was commonized then get a reference to it from the
+        # object cache.
+        if desc_type in commonize:
+                assert type(data) == int
+                # json.dump converts integer dictionary keys into strings, so
+                # obj_cache was indexed by integer strings.
+                data = str(data)
+                rv = obj_cache[data]
+
+                # get the data type
+                data_type = getattr(rv, "__metaclass__", type(rv))
+
+                if data_type != desc_type:
+                        # this commonized object hasn't been decoded yet
+                        # decode it and update the cache with the decoded obj
+                        rv = decoder(rv, jd_state)
+                        obj_cache[data] = rv
+        else:
+                # decode the data
+                rv = decoder(data, jd_state)
+
+        return jd_return(name, rv, desc, finish, jd_state)
+
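+# Illustrative round-trip sketch (hypothetical values): when the same object
+# appears more than once, passing its class via 'commonize' (a frozenset)
+# encodes the object a single time and replaces each occurrence with an
+# object id, which json_decode() resolves back to a shared object:
+#
+#     v = pkg.version.Version("0.1,5.11-0.111", "5.11")
+#     common = frozenset([pkg.version.Version])
+#     state = json_encode("vers", [v, v], [pkg.version.Version],
+#         commonize=common)
+#     vers = json_decode("vers", state, [pkg.version.Version],
+#         commonize=common)
+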
+def json_validate(name, data):
+        """Validate that a named piece of data can be represented in json and
+        that the data can be passed directly to json.dump().  If the data
+        can't be represented as json we'll trigger an assert.
+
+        'name' is the name of the data to validate
+
+        'data' is the data to validate.  Nested lists and dictionaries
+        are checked recursively."""
+
+        assert isinstance(data, json_types), \
+            "invalid json type \"%s\" for \"%s\", value: %s" % \
+            (type(data), name, str(data))
+
+        if type(data) == dict:
+                for k in data:
+                        # json.dump converts integer dictionary keys into
+                        # strings, which is a bit unexpected.  so make sure we
+                        # don't have any of those.
+                        assert type(k) != int, \
+                            "integer dictionary keys detected for: %s" % name
+
+                        # validate the key and the value
+                        new_name = "%s[%s].key()" % (name, k)
+                        json_validate(new_name, k)
+                        new_name = "%s[%s].value()" % (name, k)
+                        json_validate(new_name, data[k])
+
+        if type(data) == list:
+                for i in range(len(data)):
+                        new_name = "%s[%i]" % (name, i)
+                        json_validate(new_name, data[i])
+
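+# Illustrative sketch: json_validate() passes data that json.dump() can
+# represent faithfully and asserts otherwise:
+#
+#     json_validate("ok", {"a": [1, 2, None]})   # passes
+#     json_validate("bad", {1: "x"})             # asserts: integer dict key
+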
+def json_diff(name, d0, d1):
+        """Compare two json encoded objects to make sure they are
+        identical, assert() if they are not."""
+
+        assert type(d0) == type(d1), ("Json data types differ for \"%s\":\n"
+                "type 1: %s\ntype 2: %s\n") % (name, type(d0), type(d1))
+
+        if type(d0) == dict:
+                assert set(d0) == set(d1), (
+                   "Json dictionary keys differ for \"%s\":\n"
+                   "dict 1 missing: %s\n"
+                   "dict 2 missing: %s\n") % (name,
+                   set(d1) - set(d0), set(d0) - set(d1))
+
+                for k in d0:
+                        new_name = "%s[%s]" % (name, k)
+                        json_diff(new_name, d0[k], d1[k])
+                return
+
+        if type(d0) == list:
+                assert len(d0) == len(d1), (
+                   "Json list lengths differ for \"%s\":\n"
+                   "list 1 length: %s\n"
+                   "list 2 length: %s\n") % (name,
+                   len(d0), len(d1))
+
+                for i in range(len(d0)):
+                        new_name = "%s[%i]" % (name, i)
+                        json_diff(new_name, d0[i], d1[i])
+                return
+
+        # scalar leaf values must match exactly
+        assert d0 == d1, ("Json values differ for \"%s\":\n"
+                "value 1: %s\nvalue 2: %s\n") % (name, d0, d1)
+
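+# Illustrative sketch:
+#
+#     json_diff("d", {"a": [1, 2]}, {"a": [1, 2]})   # passes silently
+#     json_diff("d", {"a": [1, 2]}, {"a": [1]})      # asserts: lengths differ
+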
+class Timer(object):
+        """A class which can be used for measuring process times (user,
+        system, and wait)."""
+
+        __precision = 3
+        __log_fmt = "utime: %7.3f; stime: %7.3f; wtime: %7.3f"
+
+        def __init__(self, module):
+                self.__module = module
+                self.__timings = []
+
+                # we initialize our time values to account for all time used
+                # since the start of the process.  (user and system time are
+                # obtained relative to process start time, but wall time is an
+                # absolute time value so here we initialize out initial wall
+                # time value to the time our process was started.)
+                self.__utime = self.__stime = 0
+                self.__wtime = _prstart()
+
+        def __zero1(self, delta):
+                """Return True if a number is zero (up to a certain level of
+                precision.)"""
+                return int(delta * (10 ** self.__precision)) == 0
+
+        def __zero(self, udelta, sdelta, wdelta):
+                """Return True if all the passed in values are zero."""
+                return self.__zero1(udelta) and \
+                    self.__zero1(sdelta) and \
+                    self.__zero1(wdelta)
+
+        def __str__(self):
+                s = "\nTimings for %s: [\n" % self.__module
+                utotal = stotal = wtotal = 0
+                phases = [i[0] for i in self.__timings] + ["total"]
+                phase_width = max([len(i) for i in phases]) + 1
+                fmt = "  %%-%ss %s;\n" % (phase_width, Timer.__log_fmt)
+                for phase, udelta, sdelta, wdelta in self.__timings:
+                        if self.__zero(udelta, sdelta, wdelta):
+                                continue
+                        utotal += udelta
+                        stotal += sdelta
+                        wtotal += wdelta
+                        s += fmt % (phase + ":", udelta, sdelta, wdelta)
+                s += fmt % ("total:", utotal, stotal, wtotal)
+                s += "]\n"
+                return s
+
+        def reset(self):
+                """Update saved times to current process values."""
+                self.__utime, self.__stime, self.__wtime = self.__get_time()
+
+        @staticmethod
+        def __get_time():
+                """Get current user, system, and wait times for this
+                process."""
+
+                rusage = resource.getrusage(resource.RUSAGE_SELF)
+                utime = rusage[0]
+                stime = rusage[1]
+                wtime = time.time()
+                return (utime, stime, wtime)
+
+        def record(self, phase, logger=None):
+                """Record the difference between the previously saved process
+                time values and the current values.  Then update the saved
+                values to match the current values."""
+
+                utime, stime, wtime = self.__get_time()
+
+                udelta = utime - self.__utime
+                sdelta = stime - self.__stime
+                wdelta = wtime - self.__wtime
+
+                self.__timings.append((phase, udelta, sdelta, wdelta))
+                self.__utime, self.__stime, self.__wtime = utime, stime, wtime
+
+                rv = "%s: %s: " % (self.__module, phase)
+                rv += Timer.__log_fmt % (udelta, sdelta, wdelta)
+                if logger:
+                        logger.debug(rv)
+                return rv
+
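+# Illustrative usage sketch (do_work() is a hypothetical callable):
+#
+#     timer = Timer("zones")
+#     do_work()
+#     timer.record("phase one")
+#     print timer        # per-phase and total utime/stime/wtime summary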
+
+class AsyncCallException(Exception):
+        """Exception class for AsyncCall() errors.
+
+        Any exceptions caught by the async call thread get bundled into this
+        Exception because otherwise we'll lose the stack trace associated with
+        the original exception."""
+
+        def __init__(self, e=None):
+                Exception.__init__(self)
+                self.e = e
+                self.tb = None
+
+        def __str__(self):
+                if self.tb:
+                        return str(self.tb) + str(self.e)
+                return str(self.e)
+
+
+class AsyncCall(object):
+        """Class which can be used to call a function asynchronously.
+        The call is performed via a dedicated thread."""
+
+        def __init__(self):
+                self.rv = None
+                self.e = None
+
+                # keep track of what's been done
+                self.started = False
+
+                # internal state
+                self.__thread = None
+
+                # pre-allocate an exception that we'll use in case everything
+                # goes horribly wrong.
+                self.__e = AsyncCallException(
+                    Exception("AsyncCall Internal Error"))
+
+        def __thread_cb(self, dummy, cb, *args, **kwargs):
+                """Dedicated call thread.
+
+                'dummy' is a dummy parameter that is not used.  this is done
+                because the threading module (which invokes this function)
+                inspects the first argument of "args" to check if it's
+                iterable, and that may cause bizarre failures if cb is a
+                dynamically bound class (like xmlrpclib._Method).
+
+                We need to be careful here and catch all exceptions.  Since
+                we're executing in our own thread, any exceptions we don't
+                catch get dumped to the console."""
+                # Catch "Exception"; pylint: disable-msg=W0703
+
+                try:
+                        if DebugValues["async_thread_error"]:
+                                raise Exception("async_thread_error")
+
+                        rv = e = None
+                        try:
+                                rv = cb(*args, **kwargs)
+                        except Exception, e:
+                                self.e = self.__e
+                                self.e.e = e
+                                self.e.tb = traceback.format_exc()
+                                return
+
+                        self.rv = rv
+
+                except Exception, e:
+                        # if we raise an exception here, we're hosed
+                        self.rv = None
+                        self.e = self.__e
+                        self.e.e = e
+                        try:
+                                if DebugValues["async_thread_error"]:
+                                        raise Exception("async_thread_error")
+                                self.e.tb = traceback.format_exc()
+                        except Exception:
+                                pass
+
+        def start(self, cb, *args, **kwargs):
+                """Start a call to an rpc server."""
+
+                assert not self.started
+                self.started = True
+                # prepare the arguments for the thread
+                if args:
+                        args = (0, cb) + args
+                else:
+                        args = (0, cb)
+
+                # initialize and return the thread
+                self.__thread = threading.Thread(target=self.__thread_cb,
+                    args=args, kwargs=kwargs)
+                self.__thread.daemon = True
+                self.__thread.start()
+
+        def join(self):
+                """Wait for an rpc call to finish."""
+                assert self.started
+                self.__thread.join()
+
+        def is_done(self):
+                """Check if an rpc call is done."""
+                assert self.started
+                return not self.__thread.is_alive()
+
+        def result(self):
+                """Finish a call to an rpc server."""
+                assert self.started
+                # wait for the async call thread to exit
+                self.join()
+                assert self.is_done()
+                if self.e:
+                        # if the call thread hit an exception, re-raise it
+                        # Raising NoneType; pylint: disable-msg=E0702
+                        raise self.e
+                return self.rv
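+
+# Illustrative usage sketch (some_cb is a hypothetical callable):
+#
+#     ac = AsyncCall()
+#     ac.start(some_cb, 1, 2)
+#     rv = ac.result()    # blocks; re-raises AsyncCallException on failure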
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/modules/pipeutils.py	Mon Jul 11 13:49:50 2011 -0700
@@ -0,0 +1,619 @@
+#!/usr/bin/python
+#
+# CDDL HEADER START
+#
+# The contents of this file are subject to the terms of the
+# Common Development and Distribution License (the "License").
+# You may not use this file except in compliance with the License.
+#
+# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [yyyy] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
+#
+
+"""
+Interfaces to allow us to do RPC over pipes.
+
+The following classes are implemented to allow pipes to be used in place of
+file and socket objects:
+        PipeFile
+        PipeSocket
+
+The following classes are implemented to allow HTTP client operations over a
+pipe:
+        PipedHTTPResponse
+        PipedHTTPConnection
+        PipedHTTP
+
+The following classes are implemented to allow RPC server operations
+over a pipe:
+        _PipedServer
+        _PipedTransport
+        _PipedHTTPRequestHandler
+        _PipedRequestHandler
+        PipedRPCServer
+
+The following classes are implemented to allow RPC client operations
+over a pipe:
+        PipedServerProxy
+
+RPC clients should be prepared to catch the following exceptions:
+        ProtocolError1
+        ProtocolError2
+        IOError
+
+An RPC server can be implemented as follows:
+
+        server = PipedRPCServer(server_pipe_fd)
+        server.register_introspection_functions()
+        server.register_function(lambda x,y: x+y, 'add')
+        server.serve_forever()
+
+An RPC client can be implemented as follows:
+
+        client_rpc = PipedServerProxy(client_pipe_fd)
+        print client_rpc.add(1, 2)
+        del client_rpc
+"""
+
+import SocketServer
+import errno
+import fcntl
+import httplib
+import os
+import socket
+import stat
+import struct
+import sys
+import tempfile
+import threading
+import traceback
+
+# import JSON RPC libraries and objects
+import jsonrpclib as rpclib
+import jsonrpclib.jsonrpc as rpc
+from jsonrpclib.SimpleJSONRPCServer import SimpleJSONRPCRequestHandler as \
+    SimpleRPCRequestHandler
+from jsonrpclib.SimpleJSONRPCServer import SimpleJSONRPCDispatcher as \
+    SimpleRPCDispatcher
+
+#
+# These includes make it easier for clients to catch the specific
+# exceptions that can be raised by this module.
+#
+# Unused import; pylint: disable-msg=W0611
+from jsonrpclib import ProtocolError as ProtocolError1
+from xmlrpclib import ProtocolError as ProtocolError2
+# Unused import; pylint: enable-msg=W0611
+
+# debugging
+pipeutils_debug = (os.environ.get("PKG_PIPEUTILS_DEBUG", None) is not None)
+
+class PipeFile(object):
+        """Object which makes a pipe look like a "file" object.
+
+        Note that all data transmitted via this pipe is transmitted
+        indirectly.  Any data written to or read from the pipe is actually
+        transmitted via temporary files.  For sending data, the data is
+        written to a temporary file and then the associated file descriptor is
+        sent via the pipe.  For receiving data we try to read a file
+        descriptor from the pipe and when we get one we return the data from
+        the temporary file associated with the file descriptor that we just
+        read.  This is done to help ensure that processes don't block while
+        writing to these pipes (otherwise consumers of these interfaces would
+        have to create threads to constantly drain data from these pipes to
+        prevent clients from blocking).
+
+        This class also supports additional non-file special operations like
+        sendfd() and recvfd()."""
+
+        def __init__(self, fd, debug_label, debug=pipeutils_debug):
+                self.__pipefd = fd
+                self.__readfh = None
+                self.closed = False
+
+                # Pipe-related objects should never live past an exec
+                flags = fcntl.fcntl(self.__pipefd, fcntl.F_GETFD)
+                flags |= fcntl.FD_CLOEXEC
+                fcntl.fcntl(self.__pipefd, fcntl.F_SETFD, flags)
+
+                self.debug = debug
+                self.debug_label = debug_label
+                self.debug_msg("__init__")
+
+        def __del__(self):
+                self.debug_msg("__del__")
+                if not self.closed:
+                        self.close()
+
+        def debug_msg(self, op, msg=None):
+                """If debugging is enabled display msg."""
+                if not self.debug:
+                        return
+
+                if msg is not None:
+                        msg = ": %s" % msg
+                else:
+                        msg = ""
+
+                if self.debug_label is not None:
+                        label = "%s: %s" % (os.getpid(), self.debug_label)
+                else:
+                        label = "%s" % os.getpid()
+
+                print >> sys.stderr, "%s: %s.%s(%d)%s" % \
+                    (label, type(self).__name__, op, self.__pipefd, msg)
+
+        def debug_dumpfd(self, op, fd):
+                """If debugging is enabled dump the contents of fd."""
+                if not self.debug:
+                        return
+
+                si = os.fstat(fd)
+                if not stat.S_ISREG(si.st_mode):
+                        msg = "fd=%d" % fd
+                else:
+                        os.lseek(fd, 0, os.SEEK_SET)
+                        msg = "".join(os.fdopen(os.dup(fd)).readlines())
+                        msg = "msg=%s" % msg
+                        os.lseek(fd, 0, os.SEEK_SET)
+
+                self.debug_msg(op, msg)
+
+        def fileno(self):
+                """Required to support select.select()."""
+                return self.__pipefd
+
+        def readline(self):
+                """Read one entire line from the pipe.
+                Can block waiting for input."""
+
+                if self.__readfh is not None:
+                        # read from the fd that we received over the pipe
+                        data = self.__readfh.readline()
+                        if data != "":
+                                return data
+                        # the fd we received over the pipe is empty
+                        self.__readfh = None
+
+                # receive a file descriptor from the pipe
+                fd = self.recvfd()
+                if fd == -1:
+                        return ""
+                self.__readfh = os.fdopen(fd)
+                # return data from the received fd
+                return self.readline()
+
+        def read(self, size=-1):
+                """Read at most size bytes from the pipe.
+                Can block waiting for input."""
+
+                if self.__readfh is not None:
+                        # read from the fd that we received over the pipe
+                        data = self.__readfh.read(size)
+                        if data != "":
+                                return data
+                        # the fd we received over the pipe is empty
+                        self.__readfh = None
+
+                # receive a file descriptor from the pipe
+                fd = self.recvfd()
+                if fd == -1:
+                        return ""
+                self.__readfh = os.fdopen(fd)
+                # return data from the received fd
+                return self.read(size)
+
+        def write(self, msg):
+                """Write a string to the pipe."""
+                mf = tempfile.TemporaryFile()
+                mf.write(msg)
+                mf.flush()
+                self.sendfd(mf.fileno())
+                mf.close()
+
+        def close(self):
+                """Close the pipe."""
+                if self.closed:
+                        return
+                self.debug_msg("close")
+                os.close(self.__pipefd)
+                self.__readfh = None
+                self.closed = True
+
+        def flush(self):
+                """A NOP since we never do any buffering of data."""
+                pass
+
+        def sendfd(self, fd):
+                """Send a file descriptor via the pipe."""
+
+                if self.closed:
+                        self.debug_msg("sendfd", "failed (closed)")
+                        raise IOError(
+                            "sendfd() called for closed %s" %
+                            type(self).__name__)
+
+                self.debug_dumpfd("sendfd", fd)
+                try:
+                        fcntl.ioctl(self.__pipefd, fcntl.I_SENDFD, fd)
+                except:
+                        self.debug_msg("sendfd", "failed")
+                        raise
+
+        def recvfd(self):
+                """Receive a file descriptor via the pipe."""
+
+                if self.closed:
+                        self.debug_msg("recvfd", "failed (closed)")
+                        raise IOError(
+                            "sendfd() called for closed %s" %
+                            type(self).__name__)
+
+                try:
+                        fcntl_args = struct.pack('i', -1)
+                        fcntl_rv = fcntl.ioctl(self.__pipefd,
+                            fcntl.I_RECVFD, fcntl_args)
+                        fd = struct.unpack('i', fcntl_rv)[0]
+                except IOError, e:
+                        if e.errno == errno.ENXIO:
+                                # other end of the connection was closed
+                                return -1
+                        self.debug_msg("recvfd", "failed")
+                        raise e
+                assert fd != -1
+
+                # debugging
+                self.debug_dumpfd("recvfd", fd)
+
+                # reset the current file pointer
+                si = os.fstat(fd)
+                if stat.S_ISREG(si.st_mode):
+                        os.lseek(fd, 0, os.SEEK_SET)
+
+                return fd
+
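+# Illustrative sketch (assumes a Solaris STREAMS pipe, whose ends support
+# the I_SENDFD/I_RECVFD ioctls used above):
+#
+#     fd1, fd2 = os.pipe()
+#     p1 = PipeFile(fd1, "end-1")
+#     p2 = PipeFile(fd2, "end-2")
+#     p1.write("hello\n")
+#     assert p2.readline() == "hello\n"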
+
+class PipeSocket(PipeFile):
+        """Object which makes a pipe look like a "socket" object."""
+
+        def __init__(self, fd, debug_label, debug=pipeutils_debug):
+                PipeFile.__init__(self, fd, debug_label, debug=debug)
+
+        def makefile(self, mode='r', bufsize=-1):
+                """Return a file-like object associated with this pipe.
+                The pipe will be duped for this new object so that the object
+                can be closed and garbage-collected independently."""
+                # Unused argument; pylint: disable-msg=W0613
+
+                dup_fd = os.dup(self.fileno())
+                self.debug_msg("makefile", "dup fd=%d" % dup_fd)
+                return PipeFile(dup_fd, self.debug_label, debug=self.debug)
+
+        def recv(self, bufsize, flags=0):
+                """Receive data from the pipe.
+                Can block waiting for input."""
+                # Unused argument; pylint: disable-msg=W0613
+                return self.read(bufsize)
+
+        def send(self, msg, flags=0):
+                """Send data to the Socket.
+                Should never really block."""
+                # Unused argument; pylint: disable-msg=W0613
+                return self.write(msg)
+
+        def sendall(self, msg):
+                """Send data to the pipe.
+                Should never really block."""
+                self.write(msg)
+
+        @staticmethod
+        def shutdown(how):
+                """Nothing to do here.  Move along."""
+                # Unused argument; pylint: disable-msg=W0613
+                return
+
+
+class PipedHTTPResponse(httplib.HTTPResponse):
+        """Create a httplib.HTTPResponse like object that can be used with
+        a pipe as a transport.  We override the minimum number of parent
+        routines necessary."""
+
+        def begin(self):
+                """Our connection will never be automatically closed, so set
+                will_close to False."""
+
+                httplib.HTTPResponse.begin(self)
+                self.will_close = False
+                return
+
+
+class PipedHTTPConnection(httplib.HTTPConnection):
+        """Create a httplib.HTTPConnection like object that can be used with
+        a pipe as a transport.  We override the minimum number of parent
+        routines necessary."""
+
+        # we use PipedHTTPResponse in place of httplib.HTTPResponse
+        response_class = PipedHTTPResponse
+
+        def __init__(self, fd, port=None, strict=None):
+                assert port is None
+
+                # invoke parent constructor
+                httplib.HTTPConnection.__init__(self, "localhost",
+                    strict=strict)
+
+                # self.sock was initialized by httplib.HTTPConnection
+                # to point to a socket, overwrite it with a pipe.
+                assert type(fd) == int and os.fstat(fd)
+                self.sock = PipeSocket(fd, "client-connection")
+
+        def __del__(self):
+                # make sure the destructor gets called for our pipe
+                if self.sock is not None:
+                        self.close()
+
+        def close(self):
+                """Close our pipe fd."""
+                self.sock.close()
+                self.sock = None
+
+        def fileno(self):
+                """Required to support select()."""
+                return self.sock.fileno()
+
+
+class PipedHTTP(httplib.HTTP):
+        """Create httplib.HTTP like object that can be used with
+        a pipe as a transport.  We override the minimum number of parent
+        routines necessary.
+
+        xmlrpclib uses the legacy httplib.HTTP class interfaces (instead of
+        the newer class httplib.HTTPConnection interfaces), so we need to
+        provide a "Piped" compatibility class that wraps the httplib.HTTP
+        compatibility class."""
+
+        _connection_class = PipedHTTPConnection
+
+        @property
+        def sock(self):
+                """Return the "socket" associated with this HTTP pipe
+                connection."""
+                return self._conn.sock
+
+
+class _PipedTransport(rpc.Transport):
+        """Create a Transport object which can create new PipedHTTP
+        connections via an existing pipe."""
+
+        def __init__(self, fd, http_enc=True):
+                self.__pipe_file = PipeFile(fd, "client-transport")
+                self.__http_enc = http_enc
+                rpc.Transport.__init__(self)
+                self.verbose = False
+
+        def __del__(self):
+                # make sure the destructor gets called for our connection
+                if self.__pipe_file is not None:
+                        self.close()
+
+        def close(self):
+                """Close the pipe associated with this transport."""
+                self.__pipe_file.close()
+                self.__pipe_file = None
+
+        def make_connection(self, host):
+                """Create a new PipedHTTP connection to the server.  This
+                involves creating a new pipe, and sending one end of the pipe
+                to the server, and then wrapping the local end of the pipe
+                with a PipedHTTP object.  This object can then be subsequently
+                used to issue http requests."""
+                # Redefining name from outer scope; pylint: disable-msg=W0621
+
+                assert self.__pipe_file is not None
+
+                client_pipefd, server_pipefd = os.pipe()
+                self.__pipe_file.sendfd(server_pipefd)
+                os.close(server_pipefd)
+
+                if self.__http_enc:
+                        # we're using http encapsulation so return a
+                        # PipedHTTP connection object
+                        return PipedHTTP(client_pipefd)
+
+                # we're not using http encapsulation so return a
+                # PipeSocket object
+                return PipeSocket(client_pipefd, "client-connection")
+
+        def request(self, host, handler, request_body, verbose=0):
+                """Send a request to the server."""
+
+                if self.__http_enc:
+                        # we're using http encapsulation so just pass the
+                        # request to our parent class.
+                        return rpc.Transport.request(self,
+                            host, handler, request_body, verbose)
+
+                c = self.make_connection(host)
+                c.send(request_body)
+                return self._parse_response(c.makefile(), c)
+
+
+class _PipedServer(SocketServer.BaseServer):
+        """Modeled after SocketServer.TCPServer."""
+
+        def __init__(self, fd, RequestHandlerClass):
+                self.__pipe_file = PipeFile(fd, "server-transport")
+                self.__shutdown_initiated = False
+
+                SocketServer.BaseServer.__init__(self,
+                    server_address="localhost",
+                    RequestHandlerClass=RequestHandlerClass)
+
+        def fileno(self):
+                """Required to support select.select()."""
+                return self.__pipe_file.fileno()
+
+        def initiate_shutdown(self):
+                """Trigger a shutdown of the RPC server.  This is done via a
+                separate thread since the shutdown() entry point is
+                non-reentrant."""
+
+                if self.__shutdown_initiated:
+                        return
+                self.__shutdown_initiated = True
+
+                def shutdown_self(server_obj):
+                        """Shutdown the server thread."""
+                        server_obj.shutdown()
+
+                t = threading.Thread(
+                    target=shutdown_self, args=(self,))
+                t.start()
+
+        def get_request(self):
+                """Get a request from the client.  Returns a tuple containing
+                the request and the client address (mirroring the return value
+                from self.socket.accept())."""
+
+                fd = self.__pipe_file.recvfd()
+                if fd == -1:
+                        self.initiate_shutdown()
+                        raise socket.error()
+
+                return (PipeSocket(fd, "server-connection"),
+                    ("localhost", None))
+
+
+class _PipedHTTPRequestHandler(SimpleRPCRequestHandler):
+        """Piped RPC request handler that uses HTTP encapsulation."""
+
+        def setup(self):
+                """Prepare to handle a request."""
+
+                rv = SimpleRPCRequestHandler.setup(self)
+
+                # StreamRequestHandler will have duped our PipeSocket via
+                # makefile(), so close the connection socket here.
+                self.connection.close()
+                return rv
+
+
+class _PipedRequestHandler(_PipedHTTPRequestHandler):
+        """Piped RPC request handler that doesn't use HTTP encapsulation."""
+
+        def handle_one_request(self):
+                """Handle one client request."""
+
+                request = self.rfile.readline()
+                response = ""
+                try:
+                        # Access to protected member; pylint: disable-msg=W0212
+                        response = self.server._marshaled_dispatch(request)
+                except:
+                        # No exception type specified; pylint: disable-msg=W0702
+                        # The server had an unexpected exception.
+                        # dump the error to stderr
+                        print >> sys.stderr, traceback.format_exc()
+
+                        # Return the error to the caller.
+                        err_lines = traceback.format_exc().splitlines()
+                        trace_string = '%s | %s' % \
+                            (err_lines[-3], err_lines[-1])
+                        fault = rpclib.Fault(-32603,
+                            'Server error: %s' % trace_string)
+                        response = fault.response()
+
+                        # tell the server to exit
+                        self.server.initiate_shutdown()
+
+                self.wfile.write(response)
+                self.wfile.flush()
+
+
+class PipedRPCServer(_PipedServer, SimpleRPCDispatcher):
+        """Modeled after SimpleRPCServer.  Differs in that
+        SimpleRPCServer is derived from SocketServer.TCPServer but we're
+        derived from _PipedServer."""
+
+        def __init__(self, addr,
+            logRequests=False, encoding=None, http_enc=True):
+
+                self.logRequests = logRequests
+                SimpleRPCDispatcher.__init__(self, encoding)
+
+                requestHandler = _PipedHTTPRequestHandler
+                if not http_enc:
+                        requestHandler = _PipedRequestHandler
+
+                _PipedServer.__init__(self, addr, requestHandler)
+
+        def __check_for_server_errors(self, response):
+                """Check if a response is actually a fault object.  If so
+                then it's time to die."""
+
+                if type(response) != rpclib.Fault:
+                        return
+
+                # server encountered an error, time for seppuku
+                self.initiate_shutdown()
+
+        def _dispatch(self, *args, **kwargs):
+                """Check for unexpected server exceptions while handling a
+                request."""
+                # Arguments differ from overridden method;
+                # pylint: disable-msg=W0221
+
+                response = SimpleRPCDispatcher._dispatch(
+                    self, *args, **kwargs)
+                self.__check_for_server_errors(response)
+                return response
+
+        def _marshaled_single_dispatch(self, *args, **kwargs):
+                """Check for unexpected server exceptions while handling a
+                request."""
+                # Arguments differ from overridden method;
+                # pylint: disable-msg=W0221
+
+                response = SimpleRPCDispatcher._marshaled_single_dispatch(
+                    self, *args, **kwargs)
+                self.__check_for_server_errors(response)
+                return response
+
+        def _marshaled_dispatch(self, *args, **kwargs):
+                """Check for unexpected server exceptions while handling a
+                request."""
+                # Arguments differ from overridden method;
+                # pylint: disable-msg=W0221
+
+                response = SimpleRPCDispatcher._marshaled_dispatch(
+                    self, *args, **kwargs)
+                self.__check_for_server_errors(response)
+                return response
+
+
+class PipedServerProxy(rpc.ServerProxy):
+        """Create a ServerProxy object that can be used to make calls to
+        an RPC server on the other end of a pipe."""
+
+        def __init__(self, pipefd, encoding=None, verbose=0, version=None,
+            http_enc=True):
+                self.__piped_transport = _PipedTransport(pipefd,
+                    http_enc=http_enc)
+                rpc.ServerProxy.__init__(self,
+                    "http://localhost/RPC2",
+                    transport=self.__piped_transport,
+                    encoding=encoding, verbose=verbose, version=version)
--- a/src/modules/version.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/modules/version.py	Mon Jul 11 13:49:50 2011 -0700
@@ -348,6 +348,18 @@
                 else:
                         self.timestr = None
 
+        @staticmethod
+        def getstate(obj, je_state=None):
+                """Returns the serialized state of this object in a format
+                that can be easily stored using JSON, pickle, etc."""
+                return str(obj)
+
+        @staticmethod
+        def fromstate(state, jd_state=None):
+                """Allocate a new object using previously serialized state
+                obtained via getstate()."""
+                return Version(state, None)
+
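+        # Illustrative round-trip sketch:
+        #     v2 = Version.fromstate(Version.getstate(v))
+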
         def compatible_with_build(self, target):
                 """target is a DotSequence for the target system."""
                 if self.build_release < target:
--- a/src/pkg/external_deps.txt	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/pkg/external_deps.txt	Mon Jul 11 13:49:50 2011 -0700
@@ -13,6 +13,7 @@
     pkg:/gnome/theme/hicolor-icon-theme
     pkg:/library/python-2/cherrypy-26
     pkg:/library/python-2/coverage-26
+    pkg:/library/python-2/jsonrpclib-26
     pkg:/library/python-2/locale-services
     pkg:/library/python-2/m2crypto-26
     pkg:/library/python-2/mako-26
--- a/src/pkg/manifests/developer:opensolaris:pkg5.p5m	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/pkg/manifests/developer:opensolaris:pkg5.p5m	Mon Jul 11 13:49:50 2011 -0700
@@ -36,6 +36,7 @@
 depend type=require fmri=pkg:/developer/python/pylint
 depend type=require fmri=pkg:/developer/versioning/mercurial
 depend type=require fmri=pkg:/library/python-2/coverage-26
+depend type=require fmri=pkg:/library/python-2/jsonrpclib-26
 depend type=require fmri=pkg:/library/python-2/locale-services
 depend type=require fmri=pkg:/library/python-2/pygobject-26
 depend type=require fmri=pkg:/library/python-2/pygtk2-26
--- a/src/pkg/manifests/package:pkg.p5m	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/pkg/manifests/package:pkg.p5m	Mon Jul 11 13:49:50 2011 -0700
@@ -98,6 +98,8 @@
 file path=$(PYDIRVP)/pkg/client/pkg_solver.py
 file path=$(PYDIRVP)/pkg/client/pkgdefs.py
 file path=$(PYDIRVP)/pkg/client/pkgplan.py
+file path=$(PYDIRVP)/pkg/client/pkgremote.py
+file path=$(PYDIRVP)/pkg/client/plandesc.py
 file path=$(PYDIRVP)/pkg/client/progress.py
 file path=$(PYDIRVP)/pkg/client/publisher.py
 file path=$(PYDIRVP)/pkg/client/query_parser.py
@@ -150,6 +152,7 @@
 file path=$(PYDIRVP)/pkg/p5i.py
 file path=$(PYDIRVP)/pkg/p5p.py
 file path=$(PYDIRVP)/pkg/p5s.py
+file path=$(PYDIRVP)/pkg/pipeutils.py
 file path=$(PYDIRVP)/pkg/pkggzip.py
 file path=$(PYDIRVP)/pkg/pkgsubprocess.py
 file path=$(PYDIRVP)/pkg/pkgtarfile.py
--- a/src/pkgdep.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/pkgdep.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2009, 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2009, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import getopt
@@ -42,7 +42,7 @@
 import pkg.publish.dependencies as dependencies
 from pkg.misc import msg, emsg, PipeError
 
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 PKG_CLIENT_NAME = "pkgdepend"
 
 DEFAULT_SUFFIX = ".res"
--- a/src/setup.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/setup.py	Mon Jul 11 13:49:50 2011 -0700
@@ -302,8 +302,13 @@
 
 pylint_targets = [
         'pkg.altroot',
+        'pkg.client.api',
         'pkg.client.linkedimage',
         'pkg.client.pkgdefs',
+        'pkg.client.pkgremote',
+        'pkg.client.plandesc',
+        'pkg.misc',
+        'pkg.pipeutils',
         ]
 
 web_files = []
@@ -1588,14 +1593,15 @@
     ext_modules = ext_modules,
     )
 
-# We don't support 64-bit yet, but 64-bit _actions.so, _common.so and _varcet.so
-# are needed for a system repository mod_wsgi application, sysrepo_p5p.py.
-# Remove the others.
-remove_libs = ["arch.so",
+# We don't support 64-bit yet, but 64-bit _actions.so, _common.so, and
+# _varcet.so are needed for a system repository mod_wsgi application,
+# sysrepo_p5p.py.  Remove the others.
+remove_libs = [
+    "arch.so",
     "elf.so",
     "pspawn.so",
     "solver.so",
-    "syscallat.so"
+    "syscallat.so",
 ]
 pkg_64_path = os.path.join(root_dir, "usr/lib/python2.6/vendor-packages/pkg/64")
 for lib in remove_libs:
--- a/src/sysrepo.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/sysrepo.py	Mon Jul 11 13:49:50 2011 -0700
@@ -56,7 +56,7 @@
 orig_cwd = None
 
 PKG_CLIENT_NAME = "pkg.sysrepo"
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 pkg.client.global_settings.client_name = PKG_CLIENT_NAME
 
 # exit codes
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/tests/api/t_async_rpc.py	Mon Jul 11 13:49:50 2011 -0700
@@ -0,0 +1,252 @@
+#!/usr/bin/python
+#
+# CDDL HEADER START
+#
+# The contents of this file are subject to the terms of the
+# Common Development and Distribution License (the "License").
+# You may not use this file except in compliance with the License.
+#
+# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [yyyy] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
+#
+
+import testutils
+if __name__ == "__main__":
+        testutils.setup_environment("../../../proto")
+import pkg5unittest
+
+import multiprocessing
+import os
+import random
+import signal
+import sys
+import threading
+import time
+import traceback
+
+import pkg.nrlock
+import pkg.pipeutils
+
+from pkg.client.debugvalues import DebugValues
+from pkg.misc import AsyncCall, AsyncCallException
+
+class TestAsyncRPC(pkg5unittest.Pkg5TestCase):
+
+        @staticmethod
+        def __nop():
+                pass
+
+        @staticmethod
+        def __add(x, y):
+                return x + y
+
+        @staticmethod
+        def __raise_ex():
+                raise Exception("raise_ex()")
+
+        @staticmethod
+        def __sleep(n):
+                time.sleep(n)
+
+        def test_async_basics(self):
+                # test a simple async call with no parameters
+                ac = AsyncCall()
+                ac.start(self.__nop)
+                ac.result()
+
+                # test a simple async call with positional parameters
+                ac = AsyncCall()
+                ac.start(self.__add, 1, 2)
+                rv = ac.result()
+                self.assertEqual(rv, 3)
+
+                # test a simple async call with keyword parameters
+                ac = AsyncCall()
+                ac.start(self.__add, x=1, y=2)
+                rv = ac.result()
+                self.assertEqual(rv, 3)
+
+                # test async call with invalid arguments
+                ac = AsyncCall()
+                ac.start(self.__add, 1, 2, 3)
+                self.assertRaisesRegexp(AsyncCallException,
+                    "takes exactly 2 arguments",
+                    ac.result)
+                ac = AsyncCall()
+                ac.start(self.__add, x=1, y=2, z=3)
+                self.assertRaisesRegexp(AsyncCallException,
+                    "got an unexpected keyword argument",
+                    ac.result)
+                ac = AsyncCall()
+                ac.start(self.__add, y=2, z=3)
+                self.assertRaisesRegexp(AsyncCallException,
+                    "got an unexpected keyword argument",
+                    ac.result)
+
+        def test_async_thread_errors(self):
+                # test exceptions raised in the AsyncCall class
+                DebugValues["async_thread_error"] = 1
+                ac = AsyncCall()
+                ac.start(self.__nop)
+                self.assertRaisesRegexp(AsyncCallException,
+                    "async_thread_error",
+                    ac.result)
+
+        def __server(self, client_pipefd, server_pipefd, http_enc=True):
+                """Set up the RPC server and serve requests forever."""
+
+                os.close(client_pipefd)
+                server = pkg.pipeutils.PipedRPCServer(server_pipefd,
+                    http_enc=http_enc)
+                server.register_introspection_functions()
+                server.register_function(self.__nop, "nop")
+                server.register_function(self.__add, "add")
+                server.register_function(self.__raise_ex, "raise_ex")
+                server.register_function(self.__sleep, "sleep")
+                server.serve_forever()
+
+        def __server_setup(self, http_enc=True, use_proc=True):
+                """Set up an RPC server in a child process or thread."""
+
+                # create a pipe to communicate between the client and server
+                client_pipefd, server_pipefd = os.pipe()
+
+                # check if the server should be a process or thread
+                alloc_server = multiprocessing.Process
+                if not use_proc:
+                        alloc_server = threading.Thread
+
+                # fork off and start server process/thread
+                server_proc = alloc_server(
+                    target=self.__server,
+                    args=(client_pipefd, server_pipefd),
+                    kwargs={ "http_enc": http_enc })
+                server_proc.daemon = True
+                server_proc.start()
+                os.close(server_pipefd)
+
+                # set ourselves up as the client
+                client_rpc = pkg.pipeutils.PipedServerProxy(client_pipefd,
+                    http_enc=http_enc)
+
+                return (server_proc, client_rpc)
+
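+        # A usage sketch for __server_setup(), mirroring what
+        # __server_setup_and_call() below does:
+        #
+        #     server_proc, client_rpc = self.__server_setup()
+        #     ac = AsyncCall()
+        #     ac.start(client_rpc.add, x=1, y=2)
+        #     del client_rpc         # drop our end of the pipe
+        #     rv = ac.result()       # -> 3
+        #     server_proc.join()     # server exits once the pipe closes
+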
+        def __server_setup_and_call(self, method, http_enc=True,
+            use_proc=True, **kwargs):
+                """Set up an RPC server and make a call to it.
+                All calls are made asynchronously."""
+
+                server_proc, client_rpc = self.__server_setup(
+                    http_enc=http_enc, use_proc=use_proc)
+                method_cb = getattr(client_rpc, method)
+                ac = AsyncCall()
+                ac.start(method_cb, **kwargs)
+
+                # Destroying all references to the client object should close
+                # the client end of our pipe to the server, which in turn
+                # should cause the server to cleanly exit.  If we hang waiting
+                # for the server to exit then that's a bug.
+                del method_cb, client_rpc
+
+                try:
+                        rv = ac.result()
+                except AsyncCallException, ex:
+                        # we explicitly deleted the client rpc object above to
+                        # ensure that any connection to the server process
+                        # gets closed (so that the server process exits).
+                        server_proc.join()
+                        raise
+
+                server_proc.join()
+                return rv
+
+        def __test_rpc_basics(self, http_enc=True, use_proc=True):
+
+                # our rpc server only supports keyword parameters
+
+                # test rpc call with no arguments
+                rv = self.__server_setup_and_call("nop",
+                    http_enc=http_enc, use_proc=use_proc)
+                self.assertEqual(rv, None)
+
+                # test rpc call with two arguments
+                rv = self.__server_setup_and_call("add", x=1, y=2,
+                    http_enc=http_enc, use_proc=use_proc)
+                self.assertEqual(rv, 3)
+
+                # test rpc call with an invalid number of arguments
+                self.assertRaisesRegexp(AsyncCallException,
+                    "Invalid parameters.",
+                    self.__server_setup_and_call,
+                    "add", x=1, y=2, z=3,
+                    http_enc=http_enc, use_proc=use_proc)
+
+                # test rpc call of a non-existent method
+                self.assertRaisesRegexp(AsyncCallException,
+                    "Method foo not supported.",
+                    self.__server_setup_and_call,
+                    "foo",
+                    http_enc=http_enc, use_proc=use_proc)
+
+                # test rpc call of a server function that raises an exception
+                self.assertRaisesRegexp(AsyncCallException,
+                    "Server error: .* Exception: raise_ex()",
+                    self.__server_setup_and_call,
+                    "raise_ex",
+                    http_enc=http_enc, use_proc=use_proc)
+
+        def __test_rpc_interruptions(self, http_enc):
+
+                # sanity check rpc sleep call
+                rv = self.__server_setup_and_call("sleep", n=0,
+                    http_enc=http_enc)
+
+                # test interrupted rpc calls by killing the server
+                for i in range(10):
+                        server_proc, client_rpc = self.__server_setup(
+                            http_enc=http_enc)
+                        ac = AsyncCall()
+
+                        method = getattr(client_rpc, "sleep")
+                        ac.start(method, n=10000)
+                        del method, client_rpc
+
+                        # add an optional one-second delay so that we can try
+                        # the kill both before and after the call has started.
+                        time.sleep(random.randint(0, 1))
+
+                        # vary how we kill the target
+                        if random.randint(0, 1) == 1:
+                                server_proc.terminate()
+                        else:
+                                os.kill(server_proc.pid, signal.SIGKILL)
+
+                        self.assertRaises(AsyncCallException, ac.result)
+                        server_proc.join()
+
+        def test_rpc_basics(self):
+                # tests rpc calls to another process
+                self.__test_rpc_basics()
+                self.__test_rpc_basics(http_enc=False)
+
+                # tests rpc calls to another thread
+                self.__test_rpc_basics(use_proc=False)
+                self.__test_rpc_basics(http_enc=False, use_proc=False)
+
+        def test_rpc_interruptions(self):
+                self.__test_rpc_interruptions(http_enc=True)
+                self.__test_rpc_interruptions(http_enc=False)
--- a/src/tests/api/t_linked_image.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/api/t_linked_image.py	Mon Jul 11 13:49:50 2011 -0700
@@ -156,7 +156,6 @@
             "tmp/baz",
             "tmp/dricon2_da",
             "tmp/dricon_n2m",
-            "license.txt",
         ]
 
         p_files2 = {
@@ -172,6 +171,9 @@
 sys::3:root
 adm::4:root
 """,
+            "tmp/license.txt": """
+This is a license.
+""",
         }
 
         # generate packages that don't need to be synced
@@ -321,7 +323,7 @@
 
                     add group groupname=muppets
                     add user username=Kermit group=adm home-dir=/export/home/Kermit
-                    add license license="Foo" path=license.txt must-display=True must-accept=True
+                    add license license="Foo" path=tmp/license.txt must-display=True must-accept=True
                     close\n"""
                 p_all.append(p_data)
 
@@ -913,9 +915,9 @@
                                 lin=self.i_lin[0], li_path=self.i_path[0],
                                 noexecute=True)
 
-                # create images, attach children (p2c), and update publishers
+                # create images, attach one child (p2c), and update publishers
                 api_objs = self._imgs_create(5)
-                self._children_attach(0, [1, 2, 3, 4])
+                self._children_attach(0, [2])
                 configure_pubs1(self)
 
                 # test recursive parent operations
@@ -947,6 +949,40 @@
                         api_objs[0].gen_plan_uninstall(*args, **kwargs)),
                         [self.p_sync1_name_gen])
 
+                # create images, attach children (p2c), and update publishers
+                api_objs = self._imgs_create(5)
+                self._children_attach(0, [1, 2, 3, 4])
+                configure_pubs1(self)
+
+                # test recursive parent operations
+                assertRaises(
+                    (apx_verify, {
+                        "e_type": apx.LinkedImageException,
+                        "e_member": "lix_bundle"}),
+                    lambda *args, **kwargs: list(
+                        api_objs[0].gen_plan_install(*args, **kwargs)),
+                        [self.p_sync1_name[0]])
+                assertRaises(
+                    (apx_verify, {
+                        "e_type": apx.LinkedImageException,
+                        "e_member": "lix_bundle"}),
+                    lambda *args, **kwargs: list(
+                        api_objs[0].gen_plan_update(*args, **kwargs)))
+                assertRaises(
+                    (apx_verify, {
+                        "e_type": apx.LinkedImageException,
+                        "e_member": "lix_bundle"}),
+                    lambda *args, **kwargs: list(
+                        api_objs[0].gen_plan_change_varcets(*args, **kwargs)),
+                        variants={"variant.foo": "baz"})
+                assertRaises(
+                    (apx_verify, {
+                        "e_type": apx.LinkedImageException,
+                        "e_member": "lix_bundle"}),
+                    lambda *args, **kwargs: list(
+                        api_objs[0].gen_plan_uninstall(*args, **kwargs)),
+                        [self.p_sync1_name_gen])
+
                 # test operations on child nodes
                 rvdict = {1: EXIT_NOP, 2: EXIT_OOPS, 3: EXIT_OOPS,
                     4: EXIT_OOPS}
--- a/src/tests/api/t_misc.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/api/t_misc.py	Mon Jul 11 13:49:50 2011 -0700
@@ -20,20 +20,22 @@
 # CDDL HEADER END
 #
 
-# Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
-# Use is subject to license terms.
+#
+# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
+#
 
 import testutils
 if __name__ == "__main__":
         testutils.setup_environment("../../../proto")
 import pkg5unittest
 
-import unittest
+import ctypes
 import os
+import shutil
 import stat
 import sys
 import tempfile
-import shutil
+import unittest
 
 import pkg.misc as misc
 import pkg.actions as action
@@ -85,5 +87,33 @@
                 self.assertTrue(misc.valid_pub_url(
                     "http://pkg.opensolaris.org/dev"))
 
+        def test_out_of_memory(self):
+                """Verify that misc.out_of_memory doesn't raise an exception
+                and reports the amount of memory that was in use."""
+
+                self.assertRegexp(misc.out_of_memory(),
+                    "virtual memory was in use")
+
+        def test_psinfo(self):
+                """Verify that psinfo gets us some reasonable data."""
+
+                psinfo = misc.ProcFS.psinfo()
+
+                # verify pids
+                self.assertEqual(psinfo.pr_pid, os.getpid())
+                self.assertEqual(psinfo.pr_ppid, os.getppid())
+
+                # verify user/group ids
+                self.assertEqual(psinfo.pr_uid, os.getuid())
+                self.assertEqual(psinfo.pr_euid, os.geteuid())
+                self.assertEqual(psinfo.pr_gid, os.getgid())
+                self.assertEqual(psinfo.pr_egid, os.getegid())
+
+                # verify zoneid (it's near the end of the structure, so if it
+                # is right then we likely decoded most of the fields in
+                # between correctly).
+                libc = ctypes.CDLL('libc.so')
+                self.assertEqual(psinfo.pr_zoneid, libc.getzoneid())
+
 if __name__ == "__main__":
         unittest.main()
--- a/src/tests/cli/t_pkg_linked.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/cli/t_pkg_linked.py	Mon Jul 11 13:49:50 2011 -0700
@@ -21,7 +21,7 @@
 #
 
 #
-# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 import testutils
@@ -1543,5 +1543,13 @@
                 self._pkg([1], "verify")
                 self._pkg([2, 3, 4], "verify", rv=EXIT_OOPS)
 
+        def test_staged_noop(self):
+                self._imgs_create(1)
+
+                # test staged execution with a noop/empty plan
+                self._pkg([0], "update --stage=plan", rv=EXIT_NOP)
+                self._pkg([0], "update --stage=prepare")
+                self._pkg([0], "update --stage=execute")
+
 if __name__ == "__main__":
         unittest.main()
--- a/src/tests/cli/t_pkg_temp_sources.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/cli/t_pkg_temp_sources.py	Mon Jul 11 13:49:50 2011 -0700
@@ -20,7 +20,9 @@
 # CDDL HEADER END
 #
 
-# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
+#
+# Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
+#
 
 import testutils
 if __name__ == "__main__":
@@ -878,6 +880,27 @@
                 assert os.path.exists(vpath)
                 self.assertEqual(os.stat(vpath).st_size, 21)
 
+        def test_05_staged_execution(self):
+                """Verify that staged execution works with temporary
+                origins."""
+
+                # Create an image and verify no packages are known.
+                self.image_create(self.empty_rurl, prefix=None)
+                self.pkg("list -a", exit=1)
+
+                # Install an older version of a known package.
+                self.pkg("install -g %s quux@0.1" % self.all_arc)
+                self.pkg("list incorp@1.0 quux@0.1")
+
+                # Verify that packages can be updated using temporary origins.
+                self.pkg("update --stage=plan -g %s -g %s" %
+                    (self.incorp_arc, self.quux_arc))
+                self.pkg("update --stage=prepare -g %s -g %s" %
+                    (self.incorp_arc, self.quux_arc))
+                self.pkg("update --stage=execute -g %s -g %s" %
+                    (self.incorp_arc, self.quux_arc))
+                self.pkg("list incorp@2.0 quux@1.0")
+
 
 if __name__ == "__main__":
         unittest.main()
--- a/src/tests/pkg5unittest.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/pkg5unittest.py	Mon Jul 11 13:49:50 2011 -0700
@@ -39,7 +39,6 @@
 import gettext
 import hashlib
 import httplib
-import json
 import logging
 import multiprocessing
 import os
@@ -122,7 +121,7 @@
 
 # Version test suite is known to work with.
 PKG_CLIENT_NAME = "pkg"
-CLIENT_API_VERSION = 71
+CLIENT_API_VERSION = 72
 
 ELIDABLE_ERRORS = [ TestSkippedException, depotcontroller.DepotStateException ]
 
@@ -272,6 +271,34 @@
 
         base_port = property(lambda self: self.__base_port, __set_base_port)
 
+        def assertRegexp(self, text, regexp):
+                """Test that a regexp search matches text."""
+
+                if re.search(regexp, text):
+                        return
+                raise self.failureException, \
+                    "\"%s\" does not match \"%s\"" % (regexp, text)
+
+        def assertRaisesRegexp(self, excClass, regexp,
+            callableObj, *args, **kwargs):
+                """Perform the same logic as assertRaises, but then verify
+                that the stringified version of the exception contains the
+                regexp pattern.
+
+                Introduced in Python 2.7."""
+
+                try:
+                        callableObj(*args, **kwargs)
+
+                except excClass, e:
+                        if re.search(regexp, str(e)):
+                                return
+                        raise self.failureException, \
+                            "\"%s\" does not match \"%s\"" % (regexp, str(e))
+
+                raise self.failureException, \
+                    "%s not raised" % excClass
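+
+        # usage sketch:
+        #     self.assertRaisesRegexp(ValueError, "invalid literal",
+        #         int, "bogus")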
+
         def assertRaisesStringify(self, excClass, callableObj, *args, **kwargs):
                 """Perform the same logic as assertRaises, but then verify that
                 the exception raised can be stringified."""
@@ -2215,6 +2242,7 @@
                 if debug_smf and "smf_cmds_dir" not in command:
                         command = "--debug smf_cmds_dir=%s %s" % \
                             (DebugValues["smf_cmds_dir"], command)
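+                # validate PlanDescription serialization during every pkg
+                # invocation made by the test suite (paired with the
+                # plandesc_validate DebugValues setting below)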
+                command = "-D plandesc_validate=1 %s" % command
                 if use_img_root and "-R" not in command and \
                     "image-create" not in command and "version" not in command:
                         command = "-R %s %s" % (self.get_img_path(), command)
@@ -2782,19 +2810,26 @@
             **kwargs):
                 self.debug("install %s" % " ".join(pkg_list))
 
-                if accept_licenses:
-                        kwargs["accept"] = True
-
+                plan = None
                 for pd in api_obj.gen_plan_install(pkg_list,
                     noexecute=noexecute, **kwargs):
-                        continue
+
+                        if plan is not None:
+                                continue
+                        plan = api_obj.describe()
+
+                        # update license status
+                        for pfmri, src, dest, accepted, displayed in \
+                            plan.get_licenses():
+                                api_obj.set_plan_license_status(pfmri,
+                                    dest.license,
+                                    displayed=show_licenses,
+                                    accepted=accept_licenses)
 
                 if noexecute:
                         return
 
-                self._api_finish(api_obj, catch_wsie=catch_wsie,
-                    show_licenses=show_licenses,
-                    accept_licenses=accept_licenses)
+                self._api_finish(api_obj, catch_wsie=catch_wsie)
 
         def _api_uninstall(self, api_obj, pkg_list, catch_wsie=True, **kwargs):
                 self.debug("uninstall %s" % " ".join(pkg_list))
@@ -2818,18 +2853,7 @@
                         continue
                 self._api_finish(api_obj, catch_wsie=catch_wsie)
 
-        def _api_finish(self, api_obj, catch_wsie=True,
-            show_licenses=False, accept_licenses=False):
-
-                plan = api_obj.describe()
-                if plan:
-                        # update licenses displayed and/or accepted state
-                        for pfmri, src, dest, accepted, displayed in \
-                            plan.get_licenses():
-                                api_obj.set_plan_license_status(pfmri,
-                                    dest.license,
-                                    displayed=show_licenses,
-                                    accepted=accept_licenses)
+        def _api_finish(self, api_obj, catch_wsie=True):
 
                 api_obj.prepare()
                 try:
@@ -3217,8 +3241,10 @@
         # run from within the test suite.
         os.environ["PKG_NO_RUNPY_CMDPATH"] = "1"
 
-        # always print out recursive linked image commands
-        os.environ["PKG_DISP_LINKED_CMDS"] = "1"
+        # verify PlanDescription serialization and that the PlanDescription
+        # isn't modified while we're preparing for execution.
+        DebugValues["plandesc_validate"] = 1
+        os.environ["PKG_PLANDESC_VALIDATE"] = "1"
 
         # Pretend that we're being run from the fakeroot image.
         assert pkg_cmdpath != "TOXIC"
--- a/src/tests/pylintrc	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/pylintrc	Mon Jul 11 13:49:50 2011 -0700
@@ -19,7 +19,7 @@
 # CDDL HEADER END
 
 #
-# Copyright (c) 2008, 2011, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2008, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
 # This file is used to control pylint when checking python source
@@ -49,7 +49,20 @@
 
 # Disable the message(s) with the given id(s).
 # C0103 Invalid name "%s" Used when the const/var/class name doesn't match regex
-disable-msg=C0103
+# C0302 Too many lines in module
+# R0901 Too many ancestors (%s/%s)
+# R0902 Too many instance attributes
+# R0904 Too many public methods
+# R0911 Too many return statements
+# R0912 Too many branches
+# R0913 Too many arguments
+# R0914 Too many local variables
+# R0915 Too many statements
+# R0923 Interface not implemented
+# W0141 Used builtin function '%s'
+# W0142 Used * or ** magic
+# W0614 Unused import %s from wildcard import
+disable-msg=C0103,C0302,R0901,R0902,R0904,R0911,R0912,R0913,R0914,R0915,R0923,W0141,W0142,W0614
 
 [REPORTS]
 # set the output format. Available formats are text, parseable, colorized, msvs
@@ -87,7 +100,7 @@
 # * dangerous default values as arguments
 # * redefinition of function / method / class
 # * uses of the global statement
-# 
+#
 [BASIC]
 
 # Required attributes for module, separated by a comma
@@ -136,7 +149,7 @@
 
 
 # try to find bugs in the code using type inference
-# 
+#
 [TYPECHECK]
 
 # Tells wether missing members accessed in mixin class should be ignored. A
@@ -157,11 +170,11 @@
 # * undefined variables
 # * redefinition of variable from builtins or from an outer scope
 # * use of variable before assigment
-# 
+#
 [VARIABLES]
 
 # Tells wether we should check for unused import in __init__ files.
-init-import=no
+init-import=yes
 
 # A regular expression matching names used for dummy variables (i.e. not used).
 dummy-variables-rgx=_|dummy
@@ -181,7 +194,7 @@
 # * attributes not defined in the __init__ method
 # * supported interfaces implementation
 # * unreachable code
-# 
+#
 [CLASSES]
 
 # List of interface methods to ignore, separated by a comma. This is used for
@@ -195,7 +208,7 @@
 # checks for sign of poor/misdesign:
 # * number of methods, attributes, local variables...
 # * size, complexity of functions, methods
-# 
+#
 [DESIGN]
 
 # Maximum number of arguments for function / method
@@ -231,7 +244,7 @@
 # * relative / wildcard imports
 # * cyclic imports
 # * uses of deprecated modules
-# 
+#
 [IMPORTS]
 
 # Deprecated modules which should not be used, separated by a comma
@@ -255,7 +268,7 @@
 # * strict indentation
 # * line length
 # * use of <> instead of !=
-# 
+#
 [FORMAT]
 
 # Maximum number of characters on a single line.
@@ -272,7 +285,7 @@
 # checks for:
 # * warning notes in the code like FIXME, XXX
 # * PEP 263: source code with non ascii character but no encoding declaration
-# 
+#
 [MISCELLANEOUS]
 
 # List of note tags to take in consideration, separated by a comma.
@@ -282,7 +295,7 @@
 # checks for similarities and duplicated code. This computation may be
 # memory / CPU intensive, so you should disable it if you experiments some
 # problems.
-# 
+#
 [SIMILARITIES]
 
 # Minimum lines number of a similarity.
--- a/src/tests/run.py	Fri Jun 15 16:58:18 2012 -0700
+++ b/src/tests/run.py	Mon Jul 11 13:49:50 2011 -0700
@@ -24,7 +24,7 @@
 # Copyright (c) 2008, 2012, Oracle and/or its affiliates. All rights reserved.
 #
 
-import json
+import simplejson as json
 import multiprocessing
 import os
 import sys