One solution, which doesn't rely on the front-end to support manipulating a remote repo directly, would be to mount the remote as a networked filesystem. If you only have SSH access to the remote machine, you could try using SSHFS via FUSE (on Linux) or OSXFUSE (on Mac OS X). Or, depending on your preferences and setup, you could use SMB, NFS, DAV, or another network filesystem.

Another way to do it, which I bring up in the comments, is to export the network filesystem from your development machine to your server. I do this so that I can mount my current working copy on multiple machines at once, and also so that I still have my local working copy even when I'm not connected to the server.

I am surprised Git software can't deal with remote repos as the working version.

Most Git GUIs do some of their work by calling out to the git command, so in order for them to support remote operation, core Git would have to as well. Core Git is written in a mix of C and shell script, and all of that would have to be rewritten to cope with remote files. A text editor has a much easier job: it reads one file when you open it and writes one file when you save, while Git reads and writes many files in the course of a single operation like commit. A networked filesystem, by contrast, means that all tools (Git and otherwise) will work on your remote files. Instead of building a layer into each and every application to support networked file access, doing it in the kernel (or via FUSE) and then just treating it like a local filesystem gives you that support in every application for free.

The fact that you don't connect to a remote server to commit stuff is by design. What you want to do is have local Git repos that push code to your integration server (the one that actually runs the code). It's like deploying, only you deploy to a test server instead of production.

This is normally achieved by having a shared Git repository you push to. Besides the bare shared repo, you'll want a non-bare clone of it that serves as your Apache docroot. Keeping that clone up to date can be achieved with a post-receive hook on the shared repo: when the shared repo receives a push, the hook makes the docroot repo execute git pull. The docroot repo is checked out on a specific branch (let's say develop), so even if you commit stuff to other branches and push them, the server won't be affected.

This also lets you set up multiple deployment repositories, so you could have another branch, prod, associated with one of them that would actually update production code when you push to it. And you can store incomplete, on-going work on a shared branch that doesn't deploy at all, so that you know the thing you've been working on on your laptop is safe in the shared repo, even though it can't be sent to the test server, since it's not complete and would break the test server, leaving other people unable to work. This article goes into detail on how to set all that up.
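The shared-repo deployment described above can be sketched roughly as follows. All paths, the hostname, and the branch name are examples for illustration, not anything prescribed here:

```shell
# On the server: create the bare shared repo everyone pushes to.
git init --bare /srv/git/site.git

# Still on the server: create the non-bare clone that Apache will serve,
# checked out on the deployment branch. (Assumes a develop branch has
# already been pushed to the shared repo.)
git clone --branch develop /srv/git/site.git /var/www/site

# On your laptop: add the shared repo as a remote and push to it.
git remote add test ssh://you@test.example.com/srv/git/site.git
git push test develop
```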
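The post-receive hook itself could look like the minimal sketch below. The paths and the extra prod target are hypothetical; one real gotcha worth noting is that git runs hooks with GIT_DIR set to the bare repo, so the hook must unset it before pulling in another checkout:

```shell
#!/bin/sh
# /srv/git/site.git/hooks/post-receive (must be executable).
# post-receive reads one "oldrev newrev refname" line per pushed ref
# from stdin.
while read oldrev newrev refname; do
    case "$refname" in
        refs/heads/develop)
            # Update the test-server docroot.
            (unset GIT_DIR; cd /var/www/site && git pull origin develop)
            ;;
        refs/heads/prod)
            # Hypothetical second deployment target for production code.
            (unset GIT_DIR; cd /var/www/prod && git pull origin prod)
            ;;
        *)
            # Any other branch (e.g. incomplete work) is stored in the
            # shared repo but deploys nowhere.
            ;;
    esac
done
```

Pushing develop updates the test docroot, pushing prod would update production, and pushing any other branch only stores the commits.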
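For the networked-filesystem route, an SSHFS session might look like this. The hostname and paths are placeholders; it assumes sshfs is installed (via FUSE on Linux, or OSXFUSE plus sshfs on Mac OS X):

```shell
# Mount the remote working copy at a local mount point.
mkdir -p ~/mnt/site
sshfs user@dev.example.com:/home/user/site ~/mnt/site

# Work on it as if it were local; every tool sees a normal directory.
cd ~/mnt/site
git status

# Unmount when done (on Mac OS X: umount ~/mnt/site).
fusermount -u ~/mnt/site
```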