I work on a couple of different programs and packages in Python. They are each developed in their own Git repository, but frequently need to import modules defined in other packages. For instance, during development, the directory structure looks something like:
```
|-- project-a
|   |-- client.py
|   |-- server.py
|   |-- package-a
|       |-- __init__.py
|       |-- module.py
|-- project-b
    |-- package-b
    |   |-- __init__.py
    |   |-- other_module.py
    |-- package-c
        |-- __init__.py
        |-- third_module.py
```
When they are all installed, they work fine; each package ends up on the Python path, and you can import from them as needed.
However, in development, I want the development version of each of these on my Python path, not the installed version. When making changes, I don't want to have to reinstall a package just to test it; I want the changes to take effect immediately. That means my Python path needs to include the directories `project-a` and `project-b`.
Our current solution is to have an `environment.bash` at the top level, which you source in your shell to set `PYTHONPATH`. That works OK, but I frequently forget to do it. Since this is a client-server application, with communication between servers, I need at least four windows open to different VMs to run it, and it happens fairly often that I forget to source `environment.bash` in at least one of them, leaving me debugging strange behavior until I realize I'm importing the wrong things.
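For reference, such an `environment.bash` can be as small as one line. This is a sketch, assuming it is sourced from the top-level directory and that the project directories are named as in the layout above:

```shell
# environment.bash -- sketch; assumes you source it from the top-level
# directory and that project-a/ and project-b/ live directly beneath it.
# Prepends both project directories, preserving any existing PYTHONPATH.
export PYTHONPATH="$PWD/project-a:$PWD/project-b${PYTHONPATH:+:$PYTHONPATH}"
```

The `${PYTHONPATH:+:$PYTHONPATH}` expansion appends the old value only if one was already set, avoiding a stray trailing colon.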
Another solution would be to set `sys.path` from within the top-level `client.py` or `server.py`. That would work for launching them directly, but I would also need the path set up for running tools like Pylint or Sphinx, which that solution wouldn't cover. I'd also need a way to distinguish between running from source (when I want the path to include `../project-b`) and running the installed version (which should use the standard path without modification).
Another choice would be a Makefile that sets up `PYTHONPATH` appropriately for various targets like `make doc`, and so on. That's OK for targets that don't take any options, but it would be inconvenient for running the client, which takes arguments: `make run-client ARGS='foo bar'` is a fairly cumbersome way to invoke it.
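Such a Makefile might look like the following sketch; the target names, tool invocations, and directory names are illustrative assumptions, not a known working build file:

```make
# Sketch only -- directory and target names are assumptions.
PY_PATH := $(CURDIR)/project-a:$(CURDIR)/project-b

doc:
	PYTHONPATH=$(PY_PATH) sphinx-build doc doc/_build

lint:
	PYTHONPATH=$(PY_PATH) pylint package-a package-b package-c

run-client:
	PYTHONPATH=$(PY_PATH) python project-a/client.py $(ARGS)
```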
Is there any common way of setting up the Python path during development so that both my executables and tools like Pylint and Sphinx can pick it up appropriately, without interfering with how it will behave when installed?
A straightforward solution is to symlink the directories for each package into a separate folder, and run things from there. That way Python sees them all in the same location, even though the actual sources live in different repositories.
```
src/
|-- project-a/
|   |-- client.py
|   |-- server.py
|   |-- package-a
|       |-- __init__.py
|       |-- module.py
|-- project-b/
    |-- package-b
    |   |-- __init__.py
    |   |-- other_module.py
    |-- package-c
        |-- __init__.py
        |-- third_module.py
run/
|-- client.py --> ../src/project-a/client.py
|-- server.py --> ../src/project-a/server.py
|-- package-a/ --> ../src/project-a/package-a/
|-- package-b/ --> ../src/project-b/package-b/
|-- package-c/ --> ../src/project-b/package-c/
```
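The `run/` directory above can be built with a few `ln -s` calls. A sketch, assuming the `src/project-a` and `src/project-b` checkout locations shown:

```shell
# Build a run/ directory of symlinks pointing back into the checkouts.
# Paths assume the src/project-a and src/project-b layout shown above.
mkdir -p run
ln -sfn ../src/project-a/client.py run/client.py
ln -sfn ../src/project-a/server.py run/server.py
ln -sfn ../src/project-a/package-a run/package-a
ln -sfn ../src/project-b/package-b run/package-b
ln -sfn ../src/project-b/package-c run/package-c
```

The `-f` and `-n` flags make the script safe to re-run: existing links are replaced rather than nested inside.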