#1258 closed defect (fixed)

having Tahoe or any dependency installed in site-packages (or any other shared directory) can cause us to run or test the wrong code

Reported by: davidsarah Owned by: davidsarah
Priority: major Milestone: 1.11.0
Component: dev-infrastructure Version: 1.8.0
Keywords: setuptools test docs Cc:
Launchpad Bug:

Description (last modified by warner)

From ticket:1253#comment:20 :

There is now a test failure on zomp (Mac OS 10.6) which looks like it might be caused by these patches:

http://tahoe-lafs.org/buildbot/builders/Zooko%20zomp%20Mac-amd64%2010.6%20py2.6/builds/208/steps/test/logs/stdio

[FAIL]
Traceback (most recent call last):
  File "/Users/tahoebuildslave/Zooko zomp Mac-amd64 10.6 py2.6/build/src/allmydata/test/test_system.py", line 1609, in _check_mv_with_http_proxy
    self.failUnlessEqual(rc_or_sig, 0, str(res))
twisted.trial.unittest.FailTest: ('', 'Traceback (most recent call last):\n  File "/Users/tahoebuildslave/Zooko zomp Mac-amd64 10.6 py2.6/build/support/bin/tahoe", line 9, in <module>\n    load_entry_point(\'allmydata-tahoe==1.8.0-r4818\', \'console_scripts\', \'tahoe\')()\n  File "/Library/Python/2.6/site-packages/allmydata/scripts/runner.py", line 114, in run\n    rc = runner(sys.argv[1:], install_node_control=install_node_control)\n  File "/Library/Python/2.6/site-packages/allmydata/scripts/runner.py", line 100, in runner\n    rc = cli.dispatch[command](so)\n  File "/Library/Python/2.6/site-packages/allmydata/scripts/cli.py", line 504, in mv\n    rc = tahoe_mv.mv(options, mode="move")\n  File "/Library/Python/2.6/site-packages/allmydata/scripts/tahoe_mv.py", line 31, in mv\n    data = urllib.urlopen(from_url + "?t=json").read()\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 87, in urlopen\n    return opener.open(url)\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 203, in open\n    return getattr(self, name)(url)\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 338, in open_http\n    h.endheaders()\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 868, in endheaders\n    self._send_output()\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 740, in _send_output\n    self.send(msg)\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 699, in send\n    self.connect()\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 683, in connect\n    self.timeout)\n  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 512, in create_connection\n    raise error, msg\nIOError: [Errno socket error] [Errno 60] Operation timed out\n', 1)
not equal:
a = 1
b = 0

and ticket:1253#comment:21 :

src/allmydata/scripts/tahoe_mv.py@4818#L31 is not "data = urllib.urlopen(from_url + "?t=json").read()"; that is what the line was before the fix.

Somehow the code being tested has the test patch 01a53650510a9a4e, but not the fix patch cb777ad14f12a249.

Oh, it is testing "/Library/Python/2.6/site-packages/allmydata/scripts/tahoe_mv.py". Wrong! This looks like the issue that we fixed for the test-from-egg and test-from-prefixdir steps in #1137.

Attachments (2)

add-test-import-in-repl.darcs.patch (11.8 KB) - added by davidsarah at 2011-01-01T20:21:13Z.
test_runner: add test_import_in_repl, which uses 'bin/tahoe debug repl' to import the allmydata module and checks that it comes from the right path. Also, fix a latent bug that caused test_the_right_code to incorrectly conclude that a path mismatch was due to a Unicode path (causing a skip rather than a failure). This version of the patch avoids a confusing shadowing of 'srcfile'. refs #1258
auto-deps-and-init-changes.darcs.patch (54.4 KB) - added by davidsarah at 2011-01-21T06:21:12Z.
Refactor _auto_deps.py and __init__.py, adding more robust checking of dependency versions, and not trusting pkg_resources to get the versions right. refs #1258, #1287

Change History (44)

comment:1 Changed at 2010-11-14T03:44:09Z by davidsarah

This might have the same cause as ticket:1246#comment:6 .

comment:2 follow-up: Changed at 2010-11-14T08:49:54Z by zooko

I ran sudo /bin/rm -rf /Library/Python/2.6/site-packages/allmydata* on zomp and then rebuilt a build that had failed. Here is the result:

http://tahoe-lafs.org/buildbot/builders/Zooko%20zomp%20Mac-amd64%2010.6%20py2.6/builds/212

comment:3 in reply to: ↑ 2 Changed at 2010-11-14T17:36:54Z by davidsarah

  • Summary changed from test step is not testing the right code to having Tahoe or any dependency installed in site-packages (or any other shared directory) can cause us to run or test the wrong code

Replying to zooko:

I ran sudo /bin/rm -rf /Library/Python/2.6/site-packages/allmydata* on zomp and then rebuilt a build that had failed. Here is the result:

http://tahoe-lafs.org/buildbot/builders/Zooko%20zomp%20Mac-amd64%2010.6%20py2.6/builds/212

OK, that build succeeding is consistent with the underlying issue being the same as in #1246. The site-packages directory has a pycrypto-2.1.0-py2.6.egg-info file, which could have caused that directory to be put on sys.path by setuptools.

If I'm understanding correctly, this can cause us to import the wrong code in any buildbot test step, and also when running bin/tahoe.

comment:4 follow-up: Changed at 2010-11-27T19:03:11Z by zooko

  • Owner changed from somebody to davidsarah

David-Sarah: I think we can safely put off this ticket til after v1.8.1. Do you agree?

comment:5 in reply to: ↑ 4 Changed at 2010-11-28T01:04:07Z by davidsarah

  • Keywords docs added

Replying to zooko:

David-Sarah: I think we can safely put off this ticket til after v1.8.1. Do you agree?

Yes, if we add documentation recommending against using setup.py install (and saying why).

comment:6 follow-up: Changed at 2010-12-31T06:22:17Z by davidsarah

I just saw what is probably another instance of this (on a tree with local changes):

$ python setup.py trial --rterror -s allmydata.test.test_system.SystemTest.test_filesystem
running darcsver
setup.py darcsver: wrote '1.8.1-r4900' into src/allmydata/_version.py
running trial
running egg_info
writing requirements to src\allmydata_tahoe.egg-info\requires.txt
writing src\allmydata_tahoe.egg-info\PKG-INFO
writing top-level names to src\allmydata_tahoe.egg-info\top_level.txt
writing dependency_links to src\allmydata_tahoe.egg-info\dependency_links.txt
writing entry points to src\allmydata_tahoe.egg-info\entry_points.txt
writing manifest file 'src\allmydata_tahoe.egg-info\SOURCES.txt'
running build_ext
allmydata.test.test_system
  SystemTest
    test_filesystem ... Traceback (most recent call last):
  File "d:\tahoe\tahoe-clean\src\allmydata\test\test_system.py", line 1609, in _check_mv_with_http_proxy
    self.failUnlessEqual(rc_or_sig, 0, str(res))
twisted.trial.unittest.FailTest: ('', 'Traceback (most recent call last):\r\n  File "d:\\tahoe\\tahoe-clean\\support\\Sc
ripts\\tahoe.pyscript", line 16, in <module>\r\n    load_entry_point(\'allmydata-tahoe==1.8.1-r4900\', \'console_scripts
\', \'tahoe\')()\r\n  File "c:\\python26\\lib\\site-packages\\allmydata\\scripts\\runner.py", line 118, in run\r\n    rc
 = runner(sys.argv[1:], install_node_control=install_node_control)\r\n  File "c:\\python26\\lib\\site-packages\\allmydat
a\\scripts\\runner.py", line 104, in runner\r\n    rc = cli.dispatch[command](so)\r\n  File "c:\\python26\\lib\\site-pac
kages\\allmydata\\scripts\\cli.py", line 503, in mv\r\n    rc = tahoe_mv.mv(options, mode="move")\r\n  File "c:\\python2
6\\lib\\site-packages\\allmydata\\scripts\\tahoe_mv.py", line 31, in mv\r\n    data = urllib.urlopen(from_url + "?t=json
").read()\r\n  File "c:\\Python26\\lib\\urllib.py", line 87, in urlopen\r\n    return opener.open(url)\r\n  File "c:\\Py
thon26\\lib\\urllib.py", line 203, in open\r\n    return getattr(self, name)(url)\r\n  File "c:\\Python26\\lib\\urllib.p
y", line 342, in open_http\r\n    h.endheaders()\r\n  File "c:\\Python26\\lib\\httplib.py", line 868, in endheaders\r\n
   self._send_output()\r\n  File "c:\\Python26\\lib\\httplib.py", line 740, in _send_output\r\n    self.send(msg)\r\n  F
ile "c:\\Python26\\lib\\httplib.py", line 699, in send\r\n    self.connect()\r\n  File "c:\\Python26\\lib\\httplib.py",
line 683, in connect\r\n    self.timeout)\r\n  File "c:\\Python26\\lib\\socket.py", line 512, in create_connection\r\n
  raise error, msg\r\nIOError: [Errno socket error] [Errno 10060] A connection attempt failed because the connected part
y did not properly respond after a period of time, or established connection failed because connected host has failed to
 respond\r\n', 1)
not equal:
a = 1
b = 0

[FAIL]

Notice the c:\\python26\\lib\\site-packages\\allmydata paths in the traceback. (I don't know how that directory got there; I thought that I'd deleted it and not installed Tahoe since.)

More significantly, when running the whole test suite, this was the only test that failed. I thought that when running tests, this bug was restricted to testing the wrong code and complaining about it, but apparently the wrong code can still be tested silently, despite the test_the_right_code test that we added for #1137 :-(

I will investigate tomorrow why test_the_right_code didn't detect this.

comment:7 in reply to: ↑ 6 ; follow-up: Changed at 2011-01-01T03:45:38Z by davidsarah

Replying to davidsarah:

I will investigate tomorrow why test_the_right_code didn't detect this.

This is a really confusing bug.

python setup.py trial is importing the right code in its own process. The problem is that the subprocess created by _run_cli_in_subprocess in test_system.py (called from the test added for #1253) is importing the wrong code, from a version installed before #1253 was fixed.

However, that situation -- where we import the wrong code when running bin/tahoe in a subprocess -- should have been picked up by test_path in test_runner.py.

_run_cli_in_subprocess does not run bin/tahoe in exactly the same way as test_path (they differ in whether the command uses sys.executable, and in the passed environment), but the differences seem not to be significant. In fact adding code to run bin/tahoe --version-and-path using _run_cli_in_subprocess, like this:

        def _subprocess_tahoe_version(ign):
            env = os.environ
            env['http_proxy'] = env['HTTP_PROXY'] = "http://127.0.0.0:12345"  # invalid address
            return self._run_cli_in_subprocess(["--version-and-path"], env=env)
        d.addCallback(_subprocess_tahoe_version)

        def _check_subprocess_tahoe_version(res):
            print repr(res)
        d.addCallback(_check_subprocess_tahoe_version)

gives output showing a version of

allmydata-tahoe: 1.8.1-r4908 (d:\\tahoe\\tahoe-clean\\src)

which is correct. (The difference between r4908 here and r4900 in comment:6 is not significant; I committed a few patches in between.) The value of the PYTHONPATH variable set by d:\tahoe\tahoe-clean\bin\tahoe.pyscript is d:\tahoe\tahoe-clean\support\Lib\site-packages, which is also right.

So, I have no explanation of why the --version-and-path output from the subprocess is correct, but a subprocess run in exactly the same way is still importing the wrong code. (c:\python26\lib\site-packages\allmydata\_version.py is present, so it is not the case that it is falling back to a later sys.path entry when importing _version.py because that file is missing.)

Last edited at 2011-01-01T03:53:48Z by davidsarah

comment:8 in reply to: ↑ 7 ; follow-ups: Changed at 2011-01-01T11:09:26Z by davidsarah

Replying to davidsarah:

So, I have no explanation of why the --version-and-path output from the subprocess is correct, but a subprocess run in exactly the same way is still importing the wrong code.

It turned out that the --version-and-path output was wrong. At src/allmydata/__init__.py@4796#L227, the versions and paths obtained from pkg_resources.require take precedence over those obtained by importing modules. But in a case such as this bug where the imported code is different from what pkg_resources attempted to put on the sys.path, the result of pkg_resources.require is the wrong thing to use.
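
For illustration, here is a minimal sketch (not the actual __init__.py code, and assuming allmydata.__version__ is populated from _version.py) of the two answers that can disagree in exactly the way described above:

    import os
    import pkg_resources
    import allmydata

    # What pkg_resources claims is the active allmydata-tahoe distribution...
    dist = pkg_resources.get_distribution("allmydata-tahoe")
    claimed_version, claimed_location = dist.version, dist.location

    # ...versus what the interpreter actually imported.
    imported_location = os.path.dirname(os.path.dirname(os.path.abspath(allmydata.__file__)))
    imported_version = str(getattr(allmydata, "__version__", "unknown"))

    # If --version-and-path prefers the pkg_resources answer, a stale copy in
    # site-packages can be the code that actually got imported while the output
    # still shows the source tree, which is what made this bug hard to see.
    if (claimed_version, claimed_location) != (imported_version, imported_location):
        print("pkg_resources says %s at %s, but we imported %s from %s"
              % (claimed_version, claimed_location, imported_version, imported_location))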

comment:9 Changed at 2011-01-01T11:42:18Z by davidsarah

  • Keywords review-needed added

comment:10 Changed at 2011-01-01T11:42:38Z by davidsarah

  • Milestone changed from 1.8.1 to 1.9.0

comment:11 in reply to: ↑ 8 Changed at 2011-01-01T11:50:56Z by davidsarah

Replying to davidsarah:

It turned out that the --version-and-path output was wrong.

That is ticket #1287. add-test-import-in-repl.darcs.patch adds a different test which would catch any problem with importing the right version of Tahoe in subprocesses, without relying on bin/tahoe --version-and-path to be correct.
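
Roughly, the idea behind that test looks like the following sketch (the real code is in test_runner.py and differs in detail; the paths and the Python-2-style repl input here are illustrative):

    import os
    import subprocess

    srcdir = os.path.abspath("src")
    bintahoe = os.path.abspath(os.path.join("bin", "tahoe"))

    # Ask a 'bin/tahoe debug repl' subprocess where it imported allmydata from.
    p = subprocess.Popen([bintahoe, "debug", "repl"],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate("import allmydata; print allmydata.__file__\n")

    # The reported path must be under our source tree, not under site-packages.
    if srcdir not in out:
        raise AssertionError("subprocess imported allmydata from the wrong place:\n%s" % (out,))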

comment:12 Changed at 2011-01-01T11:56:44Z by davidsarah

Note that it is still not explained what the difference is between the setup.py process and the bin/tahoe subprocesses that causes us to import the right code in the former and the wrong code in the latter.

comment:13 Changed at 2011-01-01T19:51:33Z by davidsarah

  • Keywords review-needed removed
  • Status changed from new to assigned

add-test-import-in-repl.darcs.patch has a confusing shadowing of the srcfile variable. I will fix that.

Changed at 2011-01-01T20:21:13Z by davidsarah

test_runner: add test_import_in_repl, which uses 'bin/tahoe debug repl' to import the allmydata module and checks that it comes from the right path. Also, fix a latent bug that caused test_the_right_code to incorrectly conclude that a path mismatch was due to a Unicode path (causing a skip rather than a failure). This version of the patch avoids a confusing shadowing of 'srcfile'. refs #1258

comment:14 Changed at 2011-01-01T20:21:49Z by davidsarah

  • Keywords review-needed added
  • Owner davidsarah deleted
  • Status changed from assigned to new

comment:15 Changed at 2011-01-01T20:27:37Z by davidsarah

Oh, and adding test_import_in_repl has the side-effect of testing bin/tahoe debug repl, which is not tested anywhere else :-)

comment:16 Changed at 2011-01-14T23:30:44Z by zooko

  • Owner set to zooko
  • Status changed from new to assigned

comment:17 Changed at 2011-01-15T05:14:16Z by david-sarah@…

In [4940/ticket1306]:

test_runner: add test_import_in_repl, which uses 'bin/tahoe debug repl' to import the allmydata module and checks that it comes from the right path. Also, fix a latent bug that caused test_the_right_code to incorrectly conclude that a path mismatch was due to a Unicode path (causing a skip rather than a failure). This version of the patch avoids a confusing shadowing of 'srcfile'. refs #1258

comment:18 in reply to: ↑ 8 ; follow-up: Changed at 2011-01-15T07:39:07Z by zooko

Replying to davidsarah:

It turned out that the --version-and-path output was wrong. At src/allmydata/__init__.py@4796#L227, the versions and paths obtained from pkg_resources.require take precedence over those obtained by importing modules. But in a case such as this bug where the imported code is different from what pkg_resources attempted to put on the sys.path, the result of pkg_resources.require is the wrong thing to use.

Two questions about this:

  1. Could we start writing a unit test for this by taking current trunk and adding an assertion in allmydata/__init__.py get_package_versions_and_locations() that the pkg_resources.require() version and the specific-attribute-of-this-module version have to match or else exit noisily? Then we could hopefully produce a minimal test case which triggered that assertion.
  2. If we "solve" this bug by removing the pkg_resources.require(), as is done by [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d]/ticket1306, is it really a full solution? Could it be then that we are relying on the zetuptoolz-based dependency-resolution to make sure we have deps of the right version number, and there is a bug in zetuptoolz's pkg_resources so that we're actually getting a package of the wrong version number, and [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d] causes us to ignore this fact?

comment:19 Changed at 2011-01-16T01:39:28Z by david-sarah@…

In [4959/ticket1306]:

Use a copy of verlib from https://bitbucket.org/tarek/distutilsversion/src/17df9a7d96ef (in allmydata.util.verlib) to normalize and compare versions of dependencies. refs #1258, #1287

comment:20 in reply to: ↑ 18 ; follow-ups: Changed at 2011-01-16T02:15:41Z by davidsarah

Replying to zooko:

Replying to davidsarah

It turned out that the --version-and-path output was wrong. At src/allmydata/__init__.py@4796#L227, the versions and paths obtained from pkg_resources.require take precedence over those obtained by importing modules. But in a case such as this bug where the imported code is different from what pkg_resources attempted to put on the sys.path, the result of pkg_resources.require is the wrong thing to use.

Two questions about this:

  1. Could we start writing a unit test for this by taking current trunk and adding an assertion in allmydata/__init__.py get_package_versions_and_locations() that the pkg_resources.require() version and the specific-attribute-of-this-module version have to match or else exit noisily? Then we could hopefully produce a minimal test case which triggered that assertion.
  2. If we "solve" this bug by removing the pkg_resources.require(), as is done by [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d]/ticket1306, is it really a full solution? Could it be then that we are relying on the zetuptoolz-based dependency-resolution to make sure we have deps of the right version number, and there is a bug in zetuptoolz's pkg_resources so that we're actually getting a package of the wrong version number, and [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d] causes us to ignore this fact?

The calls to pkg_resources.require() that were removed in [4936/ticket1306] were never relied on to perform version checking. It was the calls in _auto_deps.require_auto_deps() that were supposed to be doing that. But they weren't working either.

In [4942/ticket1306], we replace require_auto_deps with check_all_requirements, which compares the requirement with the imported version as obtained directly from each package module. Then in [4959/ticket1306], we change that check to use verlib and add tests for both verlib and check_requirement.

If we did as you suggest in 1, we'd be using pkg_resources.require only in order to detect the case where it's giving the wrong result. (As it happens, that corresponds to a case where __requires__ = ...; import pkg_resources in the support script failed to do the right thing. However, that's almost a coincidence! We shouldn't rely on the fact that two setuptools bugs happen to coincide. The [4942/ticket1306] change already adds a reliable test for what we actually want to know, which is that the imported dependencies satisfy the requirements.)

Caveat: check_all_requirements doesn't currently check indirect dependencies, e.g. pyOpenSSL, pyutil, argparse, and zbase32. require_auto_deps didn't check those either.

comment:21 in reply to: ↑ 20 Changed at 2011-01-16T02:22:09Z by davidsarah

Replying to davidsarah:

In [4942/ticket1306], we replace require_auto_deps with check_all_requirements, which compares the requirement with the imported version as obtained directly from each package module. Then in [4959/ticket1306], we change that check to use verlib and add tests for both verlib and check_requirement.

I should also have mentioned [4949/ticket1306], which generalizes the check to handle disjunctive requirements like "pycrypto == 2.0.1, == 2.1.0, >= 2.3" (and also exposes check_requirement at module level to make it testable).
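
To make the disjunctive format concrete, here is a hypothetical check along those lines (not the real check_requirement code, and assuming NormalizedVersion supports the usual rich comparisons):

    from allmydata.util.verlib import NormalizedVersion

    def satisfies(actual_version, spec):
        # spec looks like "pycrypto == 2.0.1, == 2.1.0, >= 2.3": a package name
        # followed by a comma-separated disjunction of operator/version clauses.
        _name, clauses = spec.split(None, 1)
        actual = NormalizedVersion(actual_version)
        ops = {"==": lambda a, b: a == b, ">=": lambda a, b: a >= b,
               "<=": lambda a, b: a <= b, ">": lambda a, b: a > b,
               "<": lambda a, b: a < b}
        for clause in clauses.split(","):
            op, wanted = clause.split()
            if ops[op](actual, NormalizedVersion(wanted)):
                return True
        return False

    # e.g. satisfies("2.3", "pycrypto == 2.0.1, == 2.1.0, >= 2.3") -> True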

comment:22 Changed at 2011-01-16T02:46:13Z by david-sarah@…

In [4960/ticket1306]:

_auto_deps.py: mock might not have a version attribute. For mock, zope.interface, pyasn1 and pywin32, try to get the version number but fall back to 'unknown'. refs #1258, #1287

comment:21 Changed at 2011-01-16T05:20:48Z by david-sarah@…

In 727b25f622b01593:

Temporary hack to investigate whether we are getting the right version of foolscap on trunk. refs #1258

comment:22 in reply to: ↑ 20 Changed at 2011-01-17T07:22:05Z by zooko

Replying to davidsarah:

Replying to zooko:

  1. If we "solve" this bug by removing the pkg_resources.require(), as is done by [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d]/ticket1306, is it really a full solution? Could it be then that we are relying on the zetuptoolz-based dependency-resolution to make sure we have deps of the right version number, and there is a bug in zetuptoolz's pkg_resources so that we're actually getting a package of the wrong version number, and [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d] causes us to ignore this fact?

The calls to pkg_resources.require() that were removed in [4936/ticket1306] were never relied on to perform version checking. It was the calls in _auto_deps.require_auto_deps() that were supposed to be doing that. But they weren't working either.

I didn't mean either of those; I meant that python setup.py build, python setup.py install, and easy_install allmydata-tahoe rely on pkg_resources-based dependency resolution. (The former two rely on zetuptoolz; the latter relies on either setuptools or distribute, depending on which one provides your local easy_install executable.)

So, if pkg_resources.require() is giving us the wrong version of a dependency, does that suggest that the setup step that we just did may have been satisfied with an extant dependency even though it was of an incompatible version?

comment:23 in reply to: ↑ 20 Changed at 2011-01-17T07:22:33Z by zooko

Replying to davidsarah:

Caveat: check_all_requirements doesn't currently check indirect dependencies, e.g. pyOpenSSL, pyutil, argparse, and zbase32. require_auto_deps didn't check those either.

Hm. If we were using pkg_resources, then pkg_resources.require("allmydata-tahoe") would check all dependencies including indirect ones like pyOpenSSL.
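
For reference, this is roughly what the pkg_resources-based whole-graph check looks like; note that it reports what pkg_resources believes, which (per this ticket) is not always what actually gets imported:

    import pkg_resources

    try:
        # require() resolves allmydata-tahoe and all of its direct and
        # indirect dependencies, raising if anything is missing or conflicts.
        dists = pkg_resources.require("allmydata-tahoe")
    except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict) as e:
        raise SystemExit("dependency problem: %s" % (e,))

    for dist in dists:
        print("%s %s from %s" % (dist.project_name, dist.version, dist.location))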

comment:24 Changed at 2011-01-18T06:39:15Z by zooko

  • Keywords review-needed removed
  • Owner changed from zooko to davidsarah
  • Status changed from assigned to new

I'm unsetting review-needed and assigning to davidsarah for davidsarah (or anyone else who knows) to answer my questions from comment:18 and comment:22. In short: I think we're depending on pkg_resources to install the right versions of deps, so if pkg_resources is reporting incorrect version numbers when queried, perhaps this indicates a bug that we need to fix more deeply than just to stop asking it what the version numbers are.

comment:25 Changed at 2011-01-18T06:39:45Z by zooko

  • Milestone changed from 1.9.0 to 1.8.2

comment:26 follow-ups: Changed at 2011-01-18T07:30:47Z by zooko

  • Milestone changed from 1.8.2 to 1.9.0

Brian (Release Master for 1.8.2) says that for 1.8.2 we intend to land a patch to detect if this problem has occurred but not to fix the problem.

David-Sarah: I would give +1 to [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d] if we put back the pkg_resources.require() and add code to abort with a specific error message if the answer provided by pkg_resources.require() differs from the one detected by importing and inspecting the module.

This would satisfy my current uncertainty (re comment:24), would be in keeping with the intent to detect this problem in 1.8.2, would fail-safe, and would not cause any problem in the case that pkg_resources was behaving correctly. What do you think?
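
A sketch of the proposed cross-check (names here are illustrative, not the actual _auto_deps/__init__.py code):

    import pkg_resources

    def cross_check(project_name, module):
        # Abort noisily if pkg_resources and a direct import disagree about
        # which copy of a dependency is actually in use.
        imported_version = str(getattr(module, "__version__", "unknown"))
        imported_path = getattr(module, "__file__", "unknown")
        dist = pkg_resources.get_distribution(project_name)
        if str(dist.version) != imported_version:
            raise SystemExit(
                "pkg_resources reports %s %s at %s, but version %s was "
                "actually imported from %s; the installation is inconsistent"
                % (project_name, dist.version, dist.location,
                   imported_version, imported_path))

    # e.g.: import foolscap; cross_check("foolscap", foolscap)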

comment:27 in reply to: ↑ 26 ; follow-up: Changed at 2011-01-19T04:41:48Z by davidsarah

Replying to zooko:

David-Sarah: I would give +1 to [20110101110141-93fa1-3557bc2136f970fae05c1d20e336c32fec8e3d6d] if we put back the pkg_resources.require() and add code to abort with a specific error message if the answer provided by pkg_resources.require() differs from the one detected by importing and inspecting the module.

This would satisfy my current uncertainty (re comment:24), would be in keeping with the intent to detect this problem in 1.8.2, would fail-safe, and would not cause any problem in the case that pkg_resources was behaving correctly. What do you think?

+1. This check might have to be disabled for a frozen executable (but in that case the dependencies should also have been frozen at build time, so I don't think the check is needed in that case).
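
The frozen-executable escape hatch could be as simple as the following (py2exe and similar tools set sys.frozen; this is an assumption about how the check would be gated, not the shipped code):

    import sys

    def should_cross_check():
        # In a frozen build the dependencies were fixed at build time and
        # there is no useful pkg_resources metadata to compare against.
        return not getattr(sys, "frozen", False)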

comment:28 Changed at 2011-01-19T08:57:04Z by david-sarah@…

In fd6cdc48ae1ccbe1:

src/allmydata/test/test_runner.py: add test_import_from_repl, which checks that we are running the right code in a bin/tahoe subprocess. refs #1258

comment:29 in reply to: ↑ 27 Changed at 2011-01-20T09:48:31Z by zooko

Replying to davidsarah:

+1. This check might have to be disabled for a frozen executable (but in that case the dependencies should also have been frozen at build time, so I don't think the check is needed in that case).

Okay, I did this in a series of patches [20110120070136-92b7f-88ac67b4d3aa7d3402186eac8c4ee7dbb42f0f1f]/ticket1306, [20110120081620-92b7f-6e0039e28365c86a97a2f26072aedf0b742d6c79]/ticket1306, [20110120084014-92b7f-e7777f5b76d7bbd269272cdd9dddf02922f4e197]/ticket1306, [20110120085221-92b7f-cf62522aca14becf31748723fee15458333d7774]/ticket1306, [20110120085325-92b7f-028722d1cf43f20122d2fa022d0c5cf66272eaaf]/ticket1306, [20110120085608-92b7f-aad66de877bf56c05918a921078a7a405130173c]/ticket1306.

comment:30 follow-up: Changed at 2011-01-20T20:41:58Z by davidsarah

This check is causing several failures on the buildslaves. Arguably they are false positives from a user's perspective (because we are importing acceptable versions of the dependencies in these cases), even though they indicate that pkg_resources is deeply confused.

I propose that we should only perform this cross-check when one of the version-checking options is used (bin/tahoe --version, --version-and-path or --debug-version-and-path), or when setup.py test or setup.py trial is used. In the latter case, it should not stop us running tests.

comment:31 in reply to: ↑ 30 Changed at 2011-01-20T21:09:28Z by zooko

Replying to davidsarah:

I propose that we should only perform this cross-check when one of the version-checking options is used (bin/tahoe --version, --version-and-path or --debug-version-and-path), or when setup.py test or setup.py trial is used. In the latter case, it should not stop us running tests.

+1

Changed at 2011-01-21T06:21:12Z by davidsarah

Refactor _auto_deps.py and __init__.py, adding more robust checking of dependency versions, and not trusting pkg_resources to get the versions right. refs #1258, #1287

comment:32 Changed at 2011-01-21T06:26:20Z by david-sarah@…

In 29336a09163cd3d5:

Refactor _auto_deps.py and __init__.py, adding more robust checking of dependency versions, and not trusting pkg_resources to get the versions right. refs #1258, #1287

comment:25 Changed at 2011-01-21T06:43:51Z by david-sarah@…

In b1f15a630a6389cf:

Add src/allmydata/util/verlib.py, which is a copy of verlib from https://bitbucket.org/tarek/distutilsversion/src/17df9a7d96ef . It is used to normalize and compare versions of dependencies. refs #1258

comment:26 follow-ups: Changed at 2011-05-28T19:31:12Z by davidsarah

  • Milestone changed from 1.9.0 to eventually

I don't know how to fix this in the general case.

comment:27 follow-up: Changed at 2011-05-28T20:07:31Z by zooko

How about if we establish a weaker goal: to detect at run-time whether this has happened and raise an exception if so. That's probably more doable than actually fixing the Python packaging system to always import the right code.

comment:28 in reply to: ↑ 26 Changed at 2011-08-15T22:32:01Z by davidsarah

Replying to davidsarah:

I don't know how to fix this in the general case.

To be more precise, I don't know how to do it while still using setuptools. If we ditched setuptools then I would know how to do it (by making sure that the dependencies are always added at the start of sys.path, and/or by using a custom import hook). The amount of work needed to replace the affected setuptools functionality shouldn't be underestimated, though.
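
A sketch of the sys.path half of that approach, with an illustrative directory layout (the real work would be in replacing the rest of what setuptools currently does):

    import os
    import sys

    def prepend_dependency_paths(basedir):
        # Put known-good dependency directories at the *front* of sys.path
        # before anything else is imported, so a stale copy in a shared
        # site-packages directory can never shadow them.
        for name in sorted(os.listdir(basedir)):
            path = os.path.join(basedir, name)
            if os.path.isdir(path) and path not in sys.path:
                sys.path.insert(0, path)

    # e.g. prepend_dependency_paths(os.path.join(treeroot, "support", "lib"))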

How about if we establish a weaker goal: to detect at run-time whether this has happened and raise an exception if so.

We already check that the version found by import is acceptable, except in the case of packages that don't declare their version in an easily inspectable manner (#1473). That isn't equivalent to checking whether this bug has occurred, though: we can (and frequently do) import a package that is not the version that setuptools intended to import, but that just happens to have an acceptable version. We know which version and path setuptools intended to import, and the original code I wrote to check for mismatches did treat this as an error. We changed it to be a warning that is only displayed in the situations described in comment:30, because the error was happening too often :-(

(The ticket1306 branch, which did the more thorough check, has been deleted from the trac, so links to it no longer work. It can still be checked out using darcs get --lazy http://tahoe-lafs.org/source/tahoe-lafs/ticket1306, though.)

comment:29 Changed at 2013-03-28T11:56:23Z by gdt

This interacts particularly badly with release tarballs that have bad permissions (600 rather than 644), because those tarballs (at least with packaging systems that don't attempt to fix the permissions) result in installed versions of tahoe where non-root users may not read the egg files. So the next attempt to build tahoe (as a non-root user, because that build stage does not require root) will fail. The workaround is just to remove the old/wrong package.

comment:30 follow-up: Changed at 2014-10-08T11:45:57Z by Daira Hopwood <daira@…>

In b0b76a7c5b89c3fed5a65ef6732dc45e578f12f4/trunk:

Improve comments in _auto_deps.py. refs #2249, #2028, #2193, #2005, #1258

Signed-off-by: Daira Hopwood <daira@…>

comment:31 Changed at 2016-03-27T18:31:53Z by warner

  • Description modified (diff)
  • Milestone changed from eventually to 1.11.0
  • Resolution set to fixed
  • Status changed from new to closed

We now recommend tox for running tests, and virtualenv for running tahoe at all, which gives us the desired isolation between any Python packages installed in the system site-packages and tahoe's own requirements. So we can close this now.
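
A quick way to see that isolation from inside the virtualenv (assuming a normal, non-editable install; an editable install will report the source tree instead):

    import os
    import sys
    import allmydata

    # For an install into the virtualenv, the imported package should live
    # under the virtualenv's prefix, not under a shared site-packages.
    print("allmydata imported from: %s" % (allmydata.__file__,))
    if not os.path.abspath(allmydata.__file__).startswith(sys.prefix):
        print("warning: allmydata comes from outside this environment")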

comment:32 Changed at 2024-07-11T11:35:34Z by gdt

I came back to this while reviewing the pkgsrc package. While it is certainly ok to *recommend* using a virtualenv, it is not ok to say that tahoe working properly outside of a virtualenv is beyond the specification.

Certainly it is fair to expect that the installed versions of dependencies are OK. What really matters is the separation between the installed (likely previous, working) tahoe and the next version being built for testing. It is normal for packaging systems to build and test versions that are not installed.

Do people think this problem is resolved, for running tests in a build dir while a previous version is in site-packages? Or is this just tahoe adopting an unusual definition of correctness? (Really asking; I have no idea.)
