stonesweep 43 days ago [-]
Please do not follow this blog's advice of compiling your own cgit from a git pull off master; the blog author is now (a) using some arbitrary commit (an untagged release) which contains who knows what, and (b) has to manually upgrade cgit for the life of his server. At this exact moment there are 13 commits on master ahead of the latest tag, and who knows what they contain. https://git.zx2c4.com/cgit/log/

Find a repo (apt/PPA, dnf/EPEL, AUR, whatever works for you) and use it to manage your server software unless you have no choice but to hand-compile. Hand-compiling server daemons like this means they never get regular bugfix and CVE updates - this isn't 1995; we did that back then because we had to. It's not a good idea in 2021 unless you're going to hand-manage that cgit binary every day of your life going forward. This is an internet-facing endpoint that people will probe and attack (assuming you used a cloud server).

If nothing else, clone off the last tagged release by the upstream authors not HEAD.
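
For example, to build from the newest tag instead of HEAD (a minimal sketch; the `git describe` trick assumes the latest tag is reachable from master):

    git clone https://git.zx2c4.com/cgit
    cd cgit
    # check out the most recent tag reachable from HEAD rather than building HEAD itself
    git checkout "$(git describe --tags --abbrev=0)"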

matheusmoreira 43 days ago [-]
> this is an internet facing endpoint which will get people probing it to attack (assuming you used a cloud server)

Honestly this is the number one problem with internet security today. Why are computers allowed to talk to strangers at all? They should just drop all data coming from unauthenticated connections. This would destroy the mass appeal of the modern internet but it makes a lot of sense for computers hosting personal services that are meant to be used by one or few users.

Technologies such as port knocking and single packet authorization make vulnerabilities irrelevant since there's no way to exploit them before authenticating oneself.
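
For a concrete sketch of the port-knocking idea, iptables' `recent` module is enough on its own (the knock port and timeout here are arbitrary examples):

    # remember any IP that "knocks" on port 7000, and swallow the knock itself
    iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
    # allow SSH only from IPs that knocked within the last 30 seconds
    iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
    # everyone else gets silence
    iptables -A INPUT -p tcp --dport 22 -j DROP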

megous 43 days ago [-]
What's with the scaremongering? It's a git repo; "who knows what" can easily be checked with a quick git diff ORIG_HEAD after a pull. And you actually see the changes you're going to be building, rather than relying on some random untrusted third-party repo or AUR package that can be abandoned at any time, or worse.
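
That is, something like:

    git pull
    # review exactly what you are about to build
    git diff ORIG_HEAD..HEAD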

The update-checking workflow can be as simple as putting the repo's commit or tag list, or its news feed, into your RSS reader and doing a quick check of whether an update is warranted whenever something shows up in your feed. Many projects have an announce mailing list you can subscribe to, and there are dedicated announce lists for security issues.
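
If you don't want a feed at all, polling upstream tags is a one-liner:

    # list upstream tags without pulling anything (sorted by name, not date)
    git ls-remote --tags https://git.zx2c4.com/cgit | tail -n 5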

And in the case of cgit, if you're running a tagged release, how do you know that git 2.25.1 (some random release from a year ago) doesn't have any security vulnerabilities when used with cgit? Building from git and basing cgit on top of 2.30.0 seems a bit more reasonable to me in this case.

matheusmoreira 43 days ago [-]
AUR packages also make it easy for users to see what they're building. The PKGBUILD is just a script that downloads the software and builds it.
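The usual review-then-build workflow looks something like this (a sketch; the package name is illustrative):

    git clone https://aur.archlinux.org/cgit.git
    cd cgit
    less PKGBUILD    # inspect the sources and build steps before trusting them
    makepkg -si      # build and install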
megous 42 days ago [-]
Not really, if your workflow includes checking changes in the actual code.
mbreese 43 days ago [-]
I'm not going to disagree with you on the usage of package managers. Not only is it easier to install, but it's easier to maintain.

But, isn’t cgit basically read-only? If so, that doesn’t seem to be particularly risky. All of the main interactions with gitolite are through SSH, which I’d expect to be the main area of attack.

drdec 43 days ago [-]
> But, isn’t cgit basically read-only?

So are SELECT statements until little Bobby Tables shows up.

Which is to say that SQL injection can be used to turn read-only functionality into write functionality.

Secondly, if you care about who has access to the data, even read-only vulnerabilities can be damaging.

mbreese 43 days ago [-]
True, but exploiting cgit would be significantly more difficult than a SQL injection. I could only find a handful of CVEs for cgit, and the last one was in 2018 (v1.2.1)[1]. It involved reading files outside the git tree. That said, it's not common software to see publicly, so it probably hasn't been thoroughly vetted. But I think the point still stands that installing from source isn't going to be a horrible security risk.

Similarly though -- it is a mature project without a lot of new features. I don't think there are killer features in master that would make me want to install from source over `apt install cgit`.

[1] https://www.cvedetails.com/vulnerability-list/vendor_id-1577...

stonesweep 43 days ago [-]
You don't know you have a CVE until you have a CVE; it's not hard to find stories of bugs being discovered a decade later with high severity scores. https://securityboulevard.com/2020/07/microsoft-patches-17-y...
a012 43 days ago [-]
I use Gitea for my personal git server; they have a curated list of things you can do with it: https://gitea.com/gitea/awesome-gitea What I like about it is the old GitHub UI, and the time to get started from scratch is very fast, with low maintenance.
h_anna_h 43 days ago [-]
Please avoid gitea. I can't say anything about the other software, but gitea is known for having accessibility issues; see https://github.com/go-gitea/gitea/issues/7057

Edit: the reason I posted this was not some hypothetical blind people, but rather that I have been in a situation where I wanted to show a cool project to a friend with vision issues but couldn't, because the project was hosted on a gitea instance. I posted this because I was unaware of this issue until recently and presumed there would be others here who are also not aware of it.

stonesweep 43 days ago [-]
I tried out Gitea hosted via codeberg, and within a week I disabled my account - Gitea allows usernames to be reused, there were bugs allowing people to push repos into other people's accounts (someone ran rampant on codeberg), you can view private repos via the Pages system, and so on. In many of the issues, the current state is "it's a problem with Gitea", such as: https://codeberg.org/Codeberg/Community/issues/356

HN readers are downvoting you, but Gitea's problems are real and they exist. Enough of them are listed in codeberg's issue pages that I felt the software/platform is very buggy and has a lot of security issues as just a generic user with one week of experience, much less an admin.

kenniskrag 43 days ago [-]
> there were bugs allowing people to push repos into other people's accounts (someone ran rampant on codeberg), you can view private repos via the Pages system and whatnot.

Do you have a bug report? I'm a gitea user and want to reproduce it.

stonesweep 43 days ago [-]
I dug up the first one real quick for you. At the time I was just reviewing issues to see if I could help (codeberg) and browsed the list - you can mine that list yourself, and for many of the items they create an upstream issue in Gitea and link to it (providing a good feedback loop to upstream):

https://codeberg.org/Codeberg/Community/issues/355

(edit: grammar)

mekster 43 days ago [-]
And for such a rare case we have to give up on using Gitea?

Why not just point out its problem and let the users decide?

fao_ 43 days ago [-]
> Why not just point out its problem and let the users decide?

Is that not what the comment was saying?

MrGilbert 43 days ago [-]
Nope, the comment started with:

> Please avoid gitea. Gitea [...] is known for having accessibility issues, see [...]

Which is not the same as saying:

> Gitea [...] is known for having accessibility issues, see [...]

The first one offers a clear call to action combined with an explanatory fact, while the latter just states the fact and leaves the action item up to the reader.

fao_ 43 days ago [-]
Right, but I don't see a tangible difference. People are responsible for themselves; if someone is going to stop using Gitea because a random internet commenter decided to put "Please avoid Gitea" at the start of their explanation of its flaws, isn't that their choice? I'm not sure how those three extra words "remove" anyone's "choice" - can you explain why you think that is the case? Thanks.
HexDecOctBin 43 days ago [-]
Why is this being downvoted? Have people always been this selfish?

I know that this website presents a corporate money-minded version of the Hacker ethos, but this level of psychopathy? Downvoting pleas for accessibility because you don't need it? Really?

dec0dedab0de 43 days ago [-]
I'm guessing because it is a reply to someone saying they use it as a personal server. That is sort of like telling someone not to buy a sports car because it can't fit a wheelchair.
h_anna_h 43 days ago [-]
My intent was to inform the people who were going to read that comment and decide to try gitea because of it about this issue.

Although I would say that my comment would also be relevant to that user as long as their personal server is public and they do not want to exclude people with vision issues from viewing the projects that they host.

dec0dedab0de 43 days ago [-]
I understood what you were saying. I was just trying to explain why some people would take offense to it. If you had started with "if you intend for others to access your server" then I can't imagine any reasonable person would have had a problem with it, and there is no sense worrying about unreasonable people.
microtherion 43 days ago [-]
I for one appreciate learning about the accessibility issues. It's something to keep in mind if I ever want to rely on my personal repo for a project shared with other people, and it's an aspect I cannot easily test myself - and generally don't think of testing until I've already committed to a particular solution.
eeZah7Ux 43 days ago [-]
> I know that this website presents a corporate money-minded version of the Hacker ethos

That would be an oxymoron. The https://en.wikipedia.org/wiki/Hacker_ethic is anti-corporate by definition.

> but this level of psychopathy?

Many times I've seen people downvoted to hell for pointing out accessibility issues around heavy websites.

fao_ 43 days ago [-]
> That would be an oxymoron. The https://en.wikipedia.org/wiki/Hacker_ethic is anti-corporate by definition.

Have you looked at the contents of the site and the driving force of the company behind it?

> Many times I've seen people downvoted to hell for pointing out accessibility issues around heavy websites.

Me too, it's a terrible shame.

eeZah7Ux 43 days ago [-]
> Have you looked at the contents of the site and the driving force of the company behind it?

Of course - hence my point about the conflict between hacker ethic and HN.

DrBazza 43 days ago [-]
Came here to say gitea. It's a single exe, the setup is trivial, and it just runs on my raspberry pi and backs up to my NAS.
m463 43 days ago [-]
My journey went like this:

I ran git in a container "like you're supposed to" with https access but I wanted better management than adding user access to apache.

So I decided to upgrade to gitea. It was a pretty involved installation, but I finally got it going. I set up a gitea container and was able to use it for a while. But one day it stopped working and I had to spend a bunch of time diagnosing why mariadb wasn't coming up. I can't recall exactly what I did (something with ib_logfile) but I finally got it going.

And a few days later when checking on it, I noticed it was using a bunch of CPU time. Apparently it uses resources just sitting there idle - around 5% CPU.

In the end, I made a git container again. I just used SSH access and one user: git

  # pacman -S git
  # useradd -m -G wheel -s /usr/bin/git-shell git
  # systemctl enable sshd
repos are in ~/git/foo.git and ~/git/bar.git; ~/.ssh/authorized_keys holds keys for the hosts I give automated access to, password auth for the others.

  $ git clone [email protected]:git/foo.git
I liked the idea of git+https. And running gitea seemed like it would be like having my own git infrastructure!! But in my experience, git is easy with a minimal setup.
dpedu 43 days ago [-]
A database server? Gitea is as "involved" as you make it - mariadb certainly is not a requirement for Gitea. I've been using Gitea with sqlite for a couple years and upgrades have never ever been more complex than replacing the image with a new one. If minimality is a concern then you really can't get any simpler than `./gitea`.
3np 43 days ago [-]
I'm of the contrary opinion here; in my experience, SQLite in these kinds of projects tends to become a bottleneck, a failure point, and/or corrupted eventually.

Once I pushed through setting up a decent Postgres cluster with failover, using it instead of local file systems whenever available has been great.

I groan a bit any time a thing requires persisting configuration to the local file system during the lifetime of the process.

Backups are a lot smoother too. The time invested does come back pretty quick.

dpedu 42 days ago [-]
That's pretty surprising to hear, considering that SQLite is one of the most widely used pieces of software in existence and has a solid reputation. And the insane amount of testing they do - https://www.sqlite.org/testing.html.
3np 41 days ago [-]
I want to stress the (unfairly loosely defined) "these kinds of projects" there.

There is nothing wrong with SQLite as such, and in many situations it is perfect for the task. For a backend service that may need to scale, has availability requirements, or is going to be run in a clustered or distributed environment, it is not suitable. For one, you suddenly need to couple the physical location of persisted data with its usage. Running it on networked filesystems is not supported.

Yes, there are some caveats there and ways to work around it but it's a square-peg-round-hole kind of situation.

For mobile or desktop apps, data jobs that can't be parallelized, local or light-weight analytics, embedded, it's great (though I do think many times it's used where something like leveldb or rocksdb would have been more appropriate but w/e).

I'm sure there are other use-cases in both the "great choice" and "terrible choice" buckets that I saw in the wild and can't recall.

For something like gitea - it depends on the hosting environment, requirements, and scale. Obviously for you it is not a pain point; for me it absolutely is.

---

I really appreciate when projects give the user the choice, like they do here!

ORMs are not the devil, and they can afford flexibility without having to spend time implementing for each supported backend specifically, as long as performance isn't critical enough that DB-specific optimizations are needed.

IMO for a project that is going to be self-hosted by a wide range of users, it's often premature optimization to make the v1 tied to a specific DB.

m463 43 days ago [-]
hmmm... I guess I learned something new from you.

I followed these instructions setting it up:

https://wiki.archlinux.org/index.php/gitea

I see it does mention sqlite, so that would have simplified things. I wonder if it would have prevented the container from using CPU.

Now I'll have to spin up the gitea container again and experiment :)

simias 43 days ago [-]
I recommend looking into gitolite. It's very simple and low maintenance, and it makes managing access a lot easier and with much more granularity than authorized_keys.

It's basically just a simple perl script and a couple of configuration files. It just works, in my experience.

If you end up scaling to the point where you have dozens of users and need to make regular changes to the config it becomes rather cumbersome, but for small groups of people (with a couple of build bots and the like that need read-only repo access) it's very well suited.
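
For a flavor of the config, a hypothetical gitolite.conf stanza (all names made up):

    @devs       = alice bob

    repo projectx
        RW+     = alice
        RW      = @devs
        R       = buildbot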

ausjke 43 days ago [-]
gitea like everything else won't make everyone happy but for me it's a godsend, it's my personal git server for the last two years and I use it daily.

however, if you have special requirements, like a large team or accessibility (which I didn't check), you can try something else. The point here is that it's much easier to use gitea than git+gitolite+nginx, unless you want to spend more time on sysadmin work instead of development itself.

a20eac1d 43 days ago [-]
How do you handle backups with your instance? I have a simple bash script that syncs the entire data directory to Backblaze, and I'd love to hear about easier or better ways to do this.
a012 42 days ago [-]
Yes, a bash script will do.
ckpowell 43 days ago [-]
It is funny that the author mentions the security dilemma. I fall into the category of people who never get anything done because they worry about exploits all the time.

If you also fall into that category, it is possible to provide read only ("dumb" in the official terminology) git access via http(s).

I think that it is important that the primary git repository of projects is not on GitHub, so that may be a first step for projects where the main development happens on mailing lists.

For CI, a GitHub mirror could be used that is updated with a cron job.
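
A sketch of that cron approach (paths and remote are placeholders, and it assumes a bare local repo):

    # mirror the canonical repo to GitHub every hour
    0 * * * * git -C /srv/git/project.git push --mirror git@github.com:user/project.git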

tgragnato 43 days ago [-]
If you do not want to manage a full-featured web server, you can also replace nginx with haproxy.

    global
        master-worker

    program fcgiwrap
        command /usr/sbin/fcgiwrap -c <preforks> -f -s tcp:127.0.0.1:8080
        user www-data
        group www-data

    frontend
        ...

    backend cgit
        use-fcgi-app cgit
        server fcgiwrap 127.0.0.1:8080 proto fcgi check

    fcgi-app cgit
        log-stderr global
        option keep-conn
        docroot <path>
        set-param SCRIPT_FILENAME /usr/lib/cgit/cgit.cgi
        set-param PATH_INFO %[path]
        set-param QUERY_STRING %[query]
        set-param HTTP_HOST %[hdr(host)]
jrwr 43 days ago [-]
How is HAProxy any easier than nginx when it comes to proxying fcgi work?
tgragnato 43 days ago [-]
I'd say that they have different sets of features: use the software that best suits your needs. A default installation of nginx on Ubuntu does many things, which you then need to administer and maintain. Haproxy is simpler because it's not a webserver and does fewer things.
xaduha 43 days ago [-]
All you need to run a 'personal git server' is sshd and a user. If you plan to have many others to use it, then it's not really personal, is it?

> Even though we've turned off password based authentication in a previous section, we will still receive a significant amount of bots wasting our compute cycles trying to login.

I think that's proven to be false.

nanidin 43 days ago [-]
I also came to chime in with this. I have been running a "personal git server" this way for almost a decade.

On the server:

    cd ~
    mkdir directoryname.git
    cd directoryname.git
    git init --bare
On the client(s):

    git clone [email protected]:directoryname.git
That's all there is to it!
shuntress 43 days ago [-]
>All you need to run a 'personal git server' is sshd

To expand on this slightly: making your "personal" git server accessible to collaborators (without needing to manage user accounts) is also very simple. You can statically serve your repository using any web server.

Collaboration is then based on each contributor pushing (ssh) to their personal server from which everyone else may pull (http).
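
A sketch of what the server side takes (paths are illustrative); the stock post-update sample hook already runs `git update-server-info`, which the "dumb" HTTP protocol needs:

    cd ~/git/project.git
    # enable the hook so clone metadata is refreshed on every push
    mv hooks/post-update.sample hooks/post-update
    git update-server-info
    # then point any static web server at ~/git, and collaborators can:
    git clone https://example.com/git/project.git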

anderspitman 43 days ago [-]
I dream of the day when the internet is decentralized again (either by ipv6 or tunnel proxies[0]) and we can just push/pull directly to each other.

[0]: https://github.com/anderspitman/awesome-tunneling

aigoota 43 days ago [-]
I used to use a personal git server but the possible security issues and/or making the entire server vulnerable scared me away from it. Instead, I simply use bare git repos on the server and the server is only accessible via ssh/private key. I can simply fetch/push my repos directly. There's really no need for a frontend...
alexhutcheson 43 days ago [-]
Another option is to run a web GUI like cgit, gitweb, Gitea, or Gogs without exposing it to the internet (so it's only accessible at http://localhost:8888 or similar).

When you want to view it from another machine you can just use SSH port forwarding, like:

    ssh -L 8888:localhost:8888 [email protected]
moehm 43 days ago [-]
You interact with Gitolite via ssh. What Gitolite makes easier is that it generates those repos for you (no need to log in, just push a config file) and it handles access via ssh keys.

I, for instance, have a separate key for my phone, so I can access the repo with my notes etc., but if you steal my phone you can't push any code.

brobdingnagians 43 days ago [-]
I do the same. There are plenty of tools for local analysis and visualization of git repos; I find it better to use those as appropriate and just use bare repos cloned from the server.
flowerlad 43 days ago [-]
My personal git server is just a shared folder on a Raspberry Pi running Ubuntu. This is on my intranet, so I don't worry about someone else accessing or checking in code. The server runs Samba, that's it!

Git does not require a server for simple scenarios such as mine. The "remote" can be a folder, and the folder can be on your local box, or it can be a mounted remote folder. I am even able to kick off a Jenkins job on the Raspberry Pi when I push.
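
A minimal sketch of that kind of setup (share name and paths are made up):

    # mount the Samba share, then use the path like any ordinary remote
    sudo mount -t cifs //raspberrypi/git /mnt/git -o username=me
    git clone /mnt/git/project.git
    # or attach it to an existing repo
    git remote add pi /mnt/git/project.git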

iveqy 43 days ago [-]
The big thing missing here is a good way to do code reviews and CI/CD. This might not be important for a single user, but if you're collaborating with someone it can be.

For CI/CD you can get quite a long way with git hooks, and for code review I would look into git appraise. That's the best one I've found, but I would really like to hear if someone else has a better idea here!
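
For instance, a minimal post-receive hook that pings a CI server on every push (the Jenkins URL and token are hypothetical; drop it in hooks/post-receive and make it executable):

    #!/bin/sh
    # post-receive runs once per push; stdin lists the updated refs
    while read oldrev newrev refname; do
        curl -fsS -X POST "https://jenkins.example.com/job/myproject/build?token=MYTOKEN"
    done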

arminiusreturns 43 days ago [-]
I can second git hooks as a vastly underrated approach to automation. Lots of people who only use github don't know about them. Copy-pasta from an old comment of mine:

"So, git hooks are basically executable scripts (of whatever language you have available, I use bash) placed in the .git/hooks dir, whcih are then executed at whichever event is designated (by the name of the script itself) For me, it's post-recieve. After initial setup, a push automatically triggers the post-recieve hook that does something like the following from my git dir:

    GIT_WORK_TREE=..../example.com git checkout -f
and voila, pushes are insta-live, but commits can be pulled and worked on in the meantime.

Learned from a few sources (shout out to Dreamhost for my original intro to this idea), but here are some relevant readings:

https://www.ibm.com/developerworks/library/wa-git/

https://opensource.com/life/16/8/how-construct-your-own-git-... "

Wilya 43 days ago [-]
I used the exact same stack (gitolite+cgit) in the early stages of a previous startup, and code reviews were the big missing part that made us move to something more full featured (for us, gitlab).

It's pretty easy to trigger CI runs via git hooks, and once you're used to it, checking their results in jenkins instead of in the git repository UI makes no difference. But code reviews really need a dedicated interface.

krageon 43 days ago [-]
I wouldn't enjoy writing personal code without CI/CD if it grew beyond a few hundred lines. It's a lot more convenient to have a setup like that, to the point where I'd say it's essential.
tgragnato 43 days ago [-]
The old-school way is to notify the CI/CD server using a post-receive hook. You can do it, but it's not integrated into cgit; it must be set up from the command line when you initialize the repo.
mnutt 43 days ago [-]
You can automatically add hooks to new repositories using gitolite.
aigoota 43 days ago [-]
I still like this method, and I take advantage of pre-push, pre-commit hooks for dev tools like linters and scanners.
a20eac1d 43 days ago [-]
The one thing that gives me hypertension when self-hosting a code repository is backups.

The hard drive of your server can fail at any time and when self hosting you are responsible for your backups.

This gives me night terrors, especially when it's on a cloud server and I don't have access to the hardware.

Currently, I'm running a cron task once per day executing a simple backup script that does the following:

Stop the Gitea container, copy the entire Gitea directory (including the docker-compose.yml and the data directory) to a backup folder, restart the container, sync that folder to a Backblaze bucket, delete the backup folder.
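
Roughly, the script looks like this (a sketch; paths and the bucket name are placeholders):

    #!/bin/sh
    set -e
    docker-compose -f /srv/gitea/docker-compose.yml stop
    # cold copy of the whole Gitea directory, including docker-compose.yml and data/
    cp -a /srv/gitea "/srv/backup/gitea-$(date +%F)"
    docker-compose -f /srv/gitea/docker-compose.yml start
    # off-site sync, then clean up the local staging copy
    rclone sync /srv/backup b2:my-gitea-backups
    rm -rf "/srv/backup/gitea-$(date +%F)"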

Restoring the backup is (should be) as easy as downloading the bucket from Backblaze and simply docker-composing it up.

I'm looking for other ideas or advice that will help me sleep at night. Thanks!

LinuxBender 43 days ago [-]
For what it's worth, another thing you might consider looking into is RSnapshot [0]. RSnapshot helps me sleep at night. It creates multiple directories with hard links to files that did not change, thus saving disk space and giving you multiple days or weeks of snapshots, in the event something was corrupted and you want to roll back at the filesystem level. Most questions one could come up with about rsnapshot are answered on Serverfault [1], and there are many how-to sites [2] with usage examples. You can create snapshots locally or remotely. On Mac you can brew install rsnapshot.

[0] - https://rsnapshot.org/

[1] - https://serverfault.com

[2] - https://linuxconfig.org/guide-to-rsnapshot-and-incremental-b...
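
A minimal rsnapshot.conf sketch (note that fields must be separated by actual tab characters; paths here are made up), driven by e.g. a nightly `rsnapshot daily` cron job:

    snapshot_root   /srv/snapshots/
    retain  daily   7
    retain  weekly  4
    # what to back up: a local directory, plus a remote host over ssh/rsync
    backup  /home/git/  localhost/
    backup  root@gitserver:/srv/gitea/  gitserver/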

hnlmorg 43 days ago [-]
Snapshots aren't a backup. However, they obviously have their merits too. Personally I'd recommend ZFS over RSnapshot, and then make use of raidz so you have redundancy at the hardware level as well as the snapshots.
LinuxBender 43 days ago [-]
Rsnapshot creates a backup and then creates filesystem diffs from that backup. The backup can be local and/or remote.
hnlmorg 40 days ago [-]
You can do that with ZFS as well, but that still doesn't make snapshots a backup. The backup is when you send those snapshots to a remote volume. If a backup isn't stored remotely it isn't a backup (and it shouldn't be mounted locally, either).
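In ZFS terms (pool, dataset, and host names are illustrative):

    # take today's snapshot, then ship it incrementally to another machine
    zfs snapshot tank/git@2021-02-07
    zfs send -i tank/git@2021-02-06 tank/git@2021-02-07 | \
        ssh backuphost zfs receive backup/git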
LinuxBender 39 days ago [-]
Yup, I'm aware of what ZFS can do. I was not going to suggest it since they are talking about a personal git server - ZFS likes memory, lots of it. Rsnapshot is a simple perl script that uses rsync. The backups and snapshots go wherever you point them: local, remote, both, 20 other locations if you are so inclined.
hnlmorg 38 days ago [-]
The stories about ZFS memory consumption are largely exaggerated. But I do take your point :)
GuB-42 43 days ago [-]
One of the advantages of systems like git is that you just need to clone and you have a complete copy of the original repository, which you can synchronize with push/pull. No need to mess with Backblaze and containers unless you want to back up your server configuration too. If you lose your server, just push your working copy to a new server. With enough people, losing data becomes almost impossible.

In fact you don't even need a server; just push/pull between your machines. A server just makes things more convenient.
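
For example, any box you can ssh to works as that convenience point (paths are made up):

    # on the other machine: a bare repo to act as the shared remote
    git init --bare ~/mirrors/project.git
    # on your machine: add it as a remote and sync everything
    git remote add mirror otherbox:mirrors/project.git
    git push --mirror mirror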

hellcow 43 days ago [-]
Do you not have all your code locally as well? It seems like if your drive failed for a personal git host, you could just re-push everything back up to it.
3np 43 days ago [-]
Have you considered a distributed filesystem?

For example:

Glusterfs Replicated 3 (1 arbiter/parity), put on top of zfs filesystems with checksumming.

Users/consuming services can Fuse mount over the network

Incremental backups of bricks (1 should be enough) to a mirror or spinning rust

Then all you need to worry about is offsite

If you really only need to solve for gitea, this is prob overkill but if you have more services putting stuff on disk it could be worth it. Works great for me.

mdtrooper 43 days ago [-]
I think https://fossil-scm.org plus a cloud such as Nextcloud or Google Drive is easier. Yes, it is not git, but it is not bad.
diego_moita 43 days ago [-]
Personally, I think fossil is far better than git. It offers you a lot more than git: wiki, ticket system, chat, ...
alexhutcheson 43 days ago [-]
If you just want to browse some repos via a read-only web interface, you might also want to check out gitweb[1] and the wrapper script git-instaweb[2], which are bundled with git.

[1] https://git-scm.com/docs/gitweb

[2] https://git-scm.com/docs/git-instaweb
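
git-instaweb in particular is a one-liner to try (it assumes its default backend, lighttpd, is installed):

    # serve a browsable view of the current repo
    git instaweb --port=1234
    # tear it down again
    git instaweb --stop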

nirui 43 days ago [-]
Since we are recommending personal Git servers at this point ... if you want a really, really lightweight Git repo server without web interface support, maybe check out gitdir (https://github.com/belak/gitdir)

There is another project called Sorcia (https://sorcia.org, https://news.ycombinator.com/item?id=22685914) which is also very lightweight and includes web interface.

On the security side, you can put them in Docker and apply all necessary measures there. PS: you can also run sshd and git in a Docker container.

codetrotter 43 days ago [-]
Sorcia website doesn’t load for me
nirui 42 days ago [-]
Yeah, that website is kind of flaky sometimes; maybe try again later ;)
ed25519FUUU 43 days ago [-]
Gogs is probably the most painless way to do this, especially if you’re setting up git on an existing host and don’t want to change or install a bunch of stuff.

https://gogs.io/

waffletower 43 days ago [-]
Keep running fail2ban, but I also recommend using a non-standard SSH port. When port 22 is closed, bots are stopped at the firewall. Git URLs can include custom ports, and you can also use bash aliases to manage the difference.
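
For reference, the scp-style `git@host:path` syntax can't carry a port, but the ssh:// form can, or you can pin it once in ~/.ssh/config (host and port here are examples):

    git clone ssh://git@example.com:2222/~/repos/project.git

    # or in ~/.ssh/config, so the plain git@example.com:... form keeps working:
    Host example.com
        Port 2222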
tyingq 43 days ago [-]
For anyone that has issues with fail2ban, I'm a fan of pam_shield. It's a pam module, so it's intercepting live requests rather than tailing logs, which seems cleaner to me. For debian, it was a relatively simple apt-get type install (libpam-shield). But, yeah, changing the default port surely kills off the noise.
Simplicitas 43 days ago [-]
Good article. But yeah, I too opted for a Dockerized Gitea setup (https://github.com/lencap/lux). Looking to secure it better.
warmwaffles 43 days ago [-]
Or host your own sourcehut https://sourcehut.org/
moehm 43 days ago [-]
I just want to note I once ran this stack on an old Raspberry Pi 2, and it was still really, really fast. It's also very lightweight. My current setup runs on a box with 256 MB of RAM, and for a lot of small git repos it's fine. (Larger ones get OOM-reaped.)
LeSaucy 43 days ago [-]
It's great to use old-school tools like this. What really rubs me the wrong way is how manual it is to set up, configure, and deploy. Things like this run great for years until they don't, and then you are left walking through the entire setup process again to debug things.
kenniskrag 43 days ago [-]
Is cgit read-only? Or do you have to restrict access (e.g. push)?