I'm planning to move our repository from SVN to Git and I heard a lot about how Git is very inefficient in dealing with binary files. But I don't really understand what may be the issues (besides the repository size) I will face regarding this topic, since we do have a lot of binary files in our repository.
This is our scenario:
We have a single repository of 800MB that contains 2 directories:
This is the current size considering no history (let's assume we start the Git repo from scratch, without any history).
The binary files never exceed 25 MB, most are under 10 MB, and they are rarely changed (2 or 3 times a year).
Can I expect issues with a repository like this when using Git? If the only issue with Git is the fact that all the history is kept in each local repository then I don't expect it to grow so much since these files are not changed often.
But might Git performance (when committing or checking the status) be affected by the fact that I have a lot of binary files in the repository? Could the Git subtree feature help here (by making the "libs" directory a subtree of the main repository)?
EDIT: I know I could use something like Maven to store these binaries outside however we have a restriction here that we must keep these files together.
UPDATE: I made a series of tests and I concluded that Git is smart enough to analyze the zip content and save deltas: for instance, if I add a zip file of 20MB and then modify one text file inside the zip, when I commit the new version of the zip and run 'git gc', the repository size is almost unchanged (still around 20MB). So I can assume Git works fine with zip files. Can someone confirm this?
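For anyone who wants to reproduce the test described above, here is a rough sketch. The payload size, file names, and temp-dir layout are all made up for illustration, and it assumes `git` and `python3` are on the PATH. The key point is that both zip versions share the bytes of the unchanged member, so `git gc` can delta-compress the two blobs against each other:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q

# Fixed (incompressible) payload so both zip versions share most of their bytes.
python3 -c "import os; open('payload.bin','wb').write(os.urandom(2_000_000))"

make_zip() {  # $1 = text stored alongside the payload; the only part that changes
  python3 - "$1" <<'EOF'
import sys, zipfile
with zipfile.ZipFile('archive.zip', 'w', zipfile.ZIP_DEFLATED) as z:
    z.write('payload.bin')               # identical compressed bytes in both versions
    z.writestr('note.txt', sys.argv[1])  # the only member that differs
EOF
}

make_zip "version 1"
git add archive.zip
git -c user.name=t -c user.email=t@t commit -qm v1

make_zip "version 2"
git add archive.zip
git -c user.name=t -c user.email=t@t commit -qm v2

git gc --quiet
# If delta compression worked, .git is close to the size of ONE copy of the zip,
# not two. Compare for yourself:
du -sk .git archive.zip
```

If the two versions of the zip had nothing in common at the byte level (e.g. the archive were re-compressed with different settings), the delta would find little to reuse and the repository would grow by roughly the full size of the new zip instead.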
The main issue you might run into is that every Git repository stores the complete history of all files. Even though objects get packed together, there is no easy way to make a "light" checkout of only the one subdirectory of source files you need to work on.
If you have 500 MB of binary files that change 2-3 times a year, then after three years you may be hauling around 3+ GB of history (OK, compressed a bit) whenever you clone the repo or keep it somewhere. This may get a bit irritating.
In my experience, git submodules are not a tremendous help in this regard: you still have a Git repo with the files (i.e. a big and growing repository), and submodules mostly complicate things. The best approach is to try to avoid large binaries, for example by storing the sources you use to build them (and perhaps caching the built artifacts somewhere if the build takes too long).
Nevertheless, Git will definitely survive your use case, so if you don't mind a bit of disk space, give it a shot.
The main reason you see a difference in terms of size between Git and SVN is that Git and SVN aren't built the same way.
SVN: To handle files, SVN uses deltas. That is, the first time you commit a file, SVN stores the full file, and when you commit a modification, SVN only stores the difference between the two versions. If I remember correctly (and to be precise), SVN keeps the last committed version in full and stores the deltas backwards from it. This is pretty quick when you have few revisions or only want the HEAD, but the more revisions you have, the slower it gets to fetch an older specific revision, since SVN has to rebuild the file by applying the deltas.
Git: Git works in a completely different way from SVN. Its primary storage model is not deltas but blobs (binary large objects), addressed by a hash of their content. When you commit a file, Git stores its full content as a blob. If you commit without modifying the file, the new commit's tree simply references the same blob as the previous commit, so no extra space is used. If you modify the file, Git stores a new full blob. This has the advantage of being equally fast for each revision, but your repository can grow quite quickly. (That said, when objects are packed, e.g. by `git gc`, Git does delta-compress similar blobs inside pack files, which is why repacking can shrink a repository considerably.)
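You can see the blob-sharing behavior for yourself in a throwaway repo (the file names and contents below are arbitrary examples): the blob ID of an unchanged file is identical across commits, because both trees point at the same content-addressed object.

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q

echo "hello" > a.txt
git add a.txt
git -c user.name=t -c user.email=t@t commit -qm one
id1=$(git rev-parse HEAD:a.txt)    # blob ID of a.txt in the first commit

echo "world" > b.txt               # add a second file; a.txt is untouched
git add b.txt
git -c user.name=t -c user.email=t@t commit -qm two
id2=$(git rev-parse HEAD:a.txt)    # blob ID of a.txt in the second commit

# Same content => same object; the second commit reuses the existing blob.
[ "$id1" = "$id2" ] && echo "same blob reused: $id1"
```

This is reuse of the same object by content hash, not a symlink on disk; the object exists once under `.git/objects` no matter how many commits reference it.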
I won't answer how to deal with binaries, because I believe that is well covered on the internet (and I'm sure it is on SO).
I hope this helps.