
git merge

I'm so used to git merge doing the right thing re: merging files - and possibly used to working mostly by myself - that when git merge fails I always expect something messy has happened. However, just now, I got a merge complaint that amounted to this:

from setuptools import setup, find_packages

setup(
      ... some things ...

<<<<<<< HEAD
    ext_modules=cythonize('lib/*.pyx'),
    entry_points={
      # Register the built in plugins
              ....
      # Command line scripts
              ....
    }
)
=======
    install_requires=
             ....
    cython_ext='lib/*.pyx'
)
>>>>>>> origin/fix_setup

I was surprised, since the changes I made and the changes my colleague made were simple and non-overlapping. They just needed to come in sequence.

I took a look at the git merge documentation, which says:

When both sides made changes to the same area, however, Git cannot randomly pick one side over the other, and asks you to resolve it by leaving what both sides did to that area.

What happened here was that there were no intervening lines. We both appended our changes below "... some things ..." and above ")".

git merge's algorithm can't assume (rightly so) that these are tandem changes. Which should come first? Should they both be kept? And so on. It's not a mess, but it still takes humans - people actually working on the code - to figure out that, in this case, both should be kept, and it does not matter which comes first.
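
Concretely, keeping both sides just means deleting the three marker lines and stacking the two hunks inside the one setup() call - roughly like this (a sketch only: the placeholders stand in for the real arguments, I've put my hunk first arbitrarily, and one of the duplicated closing parentheses goes away with a comma joining the hunks):

from setuptools import setup, find_packages

setup(
      ... some things ...

    ext_modules=cythonize('lib/*.pyx'),
    entry_points={
      # Register the built in plugins
              ....
      # Command line scripts
              ....
    },
    install_requires=
             ....
    cython_ext='lib/*.pyx'
)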

