Find a repo (apt/PPA, dnf/EPEL, AUR, whatever works for you) and use it to manage your server software unless you truly have no choice but to hand-compile. Hand-compiling server daemons like this means they never get regular bugfix and CVE updates. This isn't 1995; we did this back then because we had to. It's not a good idea in 2021 unless you're going to hand-manage that cgit binary every day of your life going forward: this is an internet-facing endpoint that people will probe and attack (especially if you used a cloud server).
If nothing else, clone off the last tagged release by the upstream authors, not HEAD.
Honestly this is the number one problem with internet security today. Why are computers allowed to talk to strangers at all? They should just drop all data coming from unauthenticated connections. This would destroy the mass appeal of the modern internet, but it makes a lot of sense for computers hosting personal services that are meant to be used by one or a few users.
Technologies such as port knocking and single packet authorization make vulnerabilities irrelevant since there's no way to exploit them before authenticating oneself.
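To make the idea concrete, here is a rough (and deliberately simplified) sketch of port knocking using iptables' "recent" module. The knock ports (7000/8000) and SSH on 22 are made-up examples, and a real setup needs more care (clearing stale entries, rate limiting); single packet authorization is what dedicated tools like fwknop implement.

```
# Hitting 7000 then 8000 within 10s opens 22 for that source IP for 30s.
iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK1 --set -j DROP
iptables -A INPUT -p tcp --dport 8000 -m recent --name KNOCK1 --rcheck --seconds 10 \
         -m recent --name KNOCK2 --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK2 --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

To an unauthenticated scanner, every port (including 22) simply drops packets.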
The update-checking workflow can be as simple as putting the repo's commit or tag list, or its news feed, into your RSS reader and doing a quick check on whether an update is warranted whenever something new shows up in your feed. Many projects have an announce mailing list you can subscribe to; some have dedicated announce lists just for security issues.
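For instance, GitHub exposes Atom feeds you can drop straight into any feed reader (the URL pattern is real for GitHub; git/git is just an example repo, and other forges have their own equivalents):

```
# Tag and release feeds for a repository:
#   https://github.com/git/git/tags.atom
#   https://github.com/git/git/releases.atom
```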
And in case of cgit, if you're running a tagged release, how do you know that git 2.25.1 (some random release from a year ago) doesn't have any security vulnerabilities when used with cgit? Building from git and basing cgit on top of 2.30.0 seems a bit more reasonable to me in this case.
But, isn’t cgit basically read-only? If so, that doesn’t seem to be particularly risky. All of the main interactions with gitolite are through SSH, which I’d expect to be the main area of attack.
So are SELECT statements until little Bobby Tables shows up.
Which is to say that SQL injection can be used to turn read-only functionality into write functionality.
Secondly, if you care about who has access to the data, even read-only vulnerabilities can be damaging.
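The Bobby Tables point can be sketched in a few lines (the table and input are invented for the demo, and the sqlite3 CLI stands in for whatever database a web app uses): naive string concatenation lets a "read-only" SELECT carry a second, destructive statement.

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE repos(name TEXT); INSERT INTO repos VALUES ('cgit');"

# An attacker-controlled "search term" smuggles in a DROP:
input="x'; DROP TABLE repos; --"
sqlite3 "$db" "SELECT name FROM repos WHERE name = '$input';"

sqlite3 "$db" ".tables"   # prints nothing: the table is gone
```

The fix is parameter binding (prepared statements) in whatever language the service is written in, rather than pasting user input into the query string.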
Similarly though, it is a mature project without a lot of new features. I don't think there are killer features in master that would make me want to install from source over `apt install cgit`.
Edit: the reason that I posted this was not due to some hypothetical blind people but rather because I have been in a situation where I want to show a cool project to a friend with vision issues but I can't because the project is hosted on a gitea instance. I posted this because I was unaware of this issue until recently and presumed that there would be others here who are also not aware of this.
HN readers are downvoting you, but Gitea's problems are real. Enough of them are listed on Codeberg's issue pages that, even as a generic user with one week of experience, much less an admin, I felt the software/platform is very buggy and has a lot of security issues.
Do you have a bug report? I'm a gitea user and want to reproduce it.
Why not just point out its problem and let the users decide?
Is that not what the comment was saying?
> Please avoid gitea. Gitea [...] is known for having accessibility issues, see [...]
Which is not the same as saying:
> Gitea [...] is known for having accessibility issues, see [...]
The first one pairs a clear call to action with an explanatory fact, while the latter just states the fact and leaves the action item up to the reader.
I know that this website presents a corporate money-minded version of the Hacker ethos, but this level of psychopathy? Downvoting pleas for accessibility because you don't need it? Really?
Although I would say that my comment would also be relevant to that user as long as their personal server is public and they do not want to exclude people with vision issues from viewing the projects that they host.
That would be an oxymoron. The https://en.wikipedia.org/wiki/Hacker_ethic is anti-corporate by definition.
> but this level of psychopathy?
Many times I've seen people downvoted to hell for pointing out accessibility issues around heavy websites.
Have you looked at the contents of the site and the driving force of the company behind it?
> Many times I've seen people downvoted to hell for pointing out accessibility issues around heavy websites.
Me too, it's a terrible shame.
Of course - hence my point about the conflict between hacker ethic and HN.
I ran git in a container "like you're supposed to" with https access but I wanted better management than adding user access to apache.
So I decided to upgrade to gitea. It was a pretty involved installation, but I finally got it going. I set up a gitea container and was able to use it for a while. But one day it stopped working and I had to spend a bunch of time diagnosing why mariadb wasn't coming up. I can't recall exactly what I did (something with ib_logfile), but I finally got it going again.
And a few days later when checking on it, I noticed it was using a bunch of cpu time. Apparently it uses resources just sitting there idle, like 5% cpu.
In the end, I made a git container again. I just used SSH access and one user: git
repos are in ~/git/foo.git and ~/git/bar.git, ~/.ssh/authorized_keys for the hosts I give automated access to, password for others.
    # pacman -S git
    # useradd -m -G wheel -s /usr/bin/git-shell git
    # systemctl enable sshd
I liked the idea of git+https. And running gitea seemed like it would be like having my own git infrastructure!! But in my experience, git is easy with a minimal setup.
$ git clone [email protected]:git/foo.git
Once I pushed through the initial effort, having a decent Postgres cluster with failover that I use instead of local file systems whenever available has been great.
I groan a bit any time something requires persisting configuration to the local file system during the lifetime of the process.
Backups are a lot smoother too. The time invested does come back pretty quick.
There is nothing wrong with SQLite as such, and in many situations it is perfect for the task. For a backend service that may need to scale, has availability requirements, or is going to be run in a clustered or distributed environment, it is not suitable. For one, you suddenly need to couple the physical location of the persisted data with its usage. Running it on networked filesystems is not supported.
Yes, there are some caveats there and ways to work around it but it's a square-peg-round-hole kind of situation.
For mobile or desktop apps, data jobs that can't be parallelized, local or light-weight analytics, embedded, it's great (though I do think many times it's used where something like leveldb or rocksdb would have been more appropriate but w/e).
I'm sure there are other use-cases in both "great choice" and "terrible choice" I saw in the wild and can't recall.
For something like gitea, it depends on the hosting environment, requirements, and scale. Obviously for you it is not a pain point; for me it absolutely is.
I really appreciate it when projects give the user the choice, like they do here!
ORMs are not the devil: they can afford flexibility without requiring a separate implementation for each supported backend, as long as performance isn't critical enough to need DB-specific optimizations.
IMO for a project that is going to be self-hosted by a wide range of users, it's often premature optimization to make the v1 tied to a specific DB.
I followed these instructions setting it up:
I see it does mention sqlite so that would have simplified things. I wonder if it would have prevented the container from using cpu.
Now I'll have to spin up the gitea container again and experiment :)
It's basically just a simple perl script and a couple of configuration files. It just works, in my experience.
If you end up scaling to the point where you have dozens of users and need to make regular changes to the config it becomes rather cumbersome, but for small groups of people (with a couple of build bots and the like that need read-only repo access) it's very well suited.
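For illustration, a gitolite conf for that small-group case might look like this (the repo and user names here are invented; the access rules map users' public keys to read/write permissions):

```
repo website
    RW+     =   alice bob
    R       =   buildbot

repo dotfiles
    RW+     =   alice
```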
However, if you have special requirements like a large team, or accessibility (which I didn't check), you can try something else. The point here is that it's much easier to use gitea than git+gitolite+nginx, unless you want to spend more time on sysadmin work than on development itself.
If you also fall into that category, it is possible to provide read only ("dumb" in the official terminology) git access via http(s).
I think that it is important that the primary git repository of projects is not on GitHub, so that may be a first step for projects where the main development happens on mailing lists.
For CI, a GitHub mirror could be used that is updated with a cron job.
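Assuming the mirror is configured as an ordinary git remote (the repo path and the "github" remote name below are placeholders), the cron entry can be a one-liner:

```
# crontab fragment: push all refs to the GitHub mirror every 30 minutes
*/30 * * * * cd /srv/git/project.git && git push --quiet --mirror github
```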
    command /usr/sbin/fcgiwrap -c <preforks> -f -s tcp:127.0.0.1:8080
    user www-data
    group www-data

    use-fcgi-app cgit
    server fcgiwrap 127.0.0.1:8080 proto fcgi check

    log-stderr global
    option keep-conn
    docroot <path>
    set-param SCRIPT_FILENAME /usr/lib/cgit/cgit.cgi
    set-param PATH_INFO %[path]
    set-param QUERY_STRING %[query]
    set-param HTTP_HOST %[hdr(host)]
> Even though we've turned off password based authentication in a previous section, we will still receive a significant amount of bots wasting our compute cycles trying to login.
I think that's proven to be false.
On the server:

    cd ~
    mkdir directoryname.git
    cd directoryname.git
    git init --bare

On the client(s):

    git clone [email protected]:directoryname.git

That's all there is to it!
To expand on this slightly: making your "personal" git server accessible to collaborators (without needing to manage user accounts) is also very simple. You can statically serve your repository using any web server.
Collaboration is then based on each contributor pushing (ssh) to their personal server from which everyone else may pull (http).
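On the repo side, "statically serve" only requires keeping the plain-file indexes current, which `git update-server-info` does; git even ships a sample post-update hook that runs it on every push. A runnable sketch (using a throwaway bare repo here; in practice this directory sits under the web server's docroot):

```shell
repo=$(mktemp -d)/project.git
git init --bare "$repo"
cd "$repo"
# git's sample post-update hook just runs `git update-server-info`:
cp hooks/post-update.sample hooks/post-update
chmod +x hooks/post-update
git update-server-info      # writes info/refs etc. for dumb-HTTP clients
ls info/refs
```

Any static web server pointed at the parent directory then lets others `git clone https://…/project.git` over the dumb protocol.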
When you want to view it from another machine you can just use SSH port forwarding, like:
ssh -L 8888:localhost:8888 [email protected]
I for instance have an own key for my phone, so I can access the repo with my notes etc, but if you steal my phone you cannot push any code.
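One way to get that per-key read-only behavior with plain sshd (this wrapper and its path are my own sketch, not something git ships) is to force the phone's key through a command that only permits fetches:

```shell
# Write the wrapper (the path is arbitrary):
cat > /tmp/git-ro-shell <<'EOF'
#!/bin/sh
# Permit clone/fetch (upload-pack/upload-archive), refuse pushes.
case "$SSH_ORIGINAL_COMMAND" in
  git-upload-pack*|git-upload-archive*)
    exec git-shell -c "$SSH_ORIGINAL_COMMAND" ;;
  *)
    echo "this key is read-only" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/git-ro-shell
# Then in ~/.ssh/authorized_keys, prefix the phone's key with:
#   command="/tmp/git-ro-shell",no-pty,no-port-forwarding ssh-ed25519 AAAA... phone
```

sshd runs the forced command instead of the requested one and passes the original request in SSH_ORIGINAL_COMMAND, so a `git push` from that key is simply refused.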
Git does not require a server for simple scenarios such as mine. The "remote" can be a folder, and the folder can be on your local box, or it can be a mounted remote folder. I am even able to kick off a Jenkins job on the Raspberry Pi when I push.
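A runnable sketch of the folder-as-remote idea (throwaway mktemp paths here; in practice the bare directory could just as well live on a mounted network share):

```shell
work=$(mktemp -d) store=$(mktemp -d)
git init --bare "$store/project.git"     # the "remote" is just a directory

cd "$work"
git init .
git -c user.name=me -c user.email=me@example.com \
    commit --allow-empty -m "first"
git remote add mirror "$store/project.git"
git push mirror HEAD:refs/heads/main     # plain filesystem transport, no daemon
```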
For CI/CD you can get quite a long way with git hooks, and for code review I would look into git-appraise. That's the best one I've found, but I would really like to hear if someone else has a better idea here!
So, git hooks are basically executable scripts (in whatever language you have available; I use bash) placed in the .git/hooks dir, which are then executed at whichever event is designated (by the name of the script itself). For me, it's post-receive. After the initial setup, a push automatically triggers the post-receive hook, which does something like the following from my git dir:

    GIT_WORK_TREE=..../example.com git checkout -f

and voila, pushes are insta-live, but commits can be pulled and worked on in the meantime.
Learned from a few sources (shout out to Dreamhost for my original intro to this idea), but here are some relevant readings:
It's pretty easy to trigger ci runs via git hooks, and once you're used to it, checking their results in jenkins instead of in the git repository UI makes no difference. But code reviews really need a dedicated interface.
The hard drive of your server can fail at any time and when self hosting you are responsible for your backups.
This is giving me night terrors, especially when it's on a cloud server and I don't have access to the hardware.
Currently, I'm running a cron task once per day executing a simple backup script that does the following:
Stop the Gitea container, copy the entire Gitea directory (including the docker-compose.yml and the data directory) to a backup folder, restart the container, sync that folder to a Backblaze bucket, delete the backup folder.
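Rendered as a script, those steps might look like this. Everything here is a hypothetical sketch: the paths are placeholders, and the comment doesn't name a sync tool, so rclone with a Backblaze B2 remote stands in for that step.

```shell
cat > /tmp/gitea-backup.sh <<'EOF'
#!/bin/sh
set -eu
src=/srv/gitea                  # holds docker-compose.yml + the data directory
work=/srv/backup/gitea

cd "$src"
docker compose stop             # quiesce Gitea so the copy is consistent
mkdir -p "$work"
cp -a "$src" "$work/"
docker compose start
rclone sync "$work" b2:my-gitea-backups   # sync to the Backblaze bucket
rm -rf "$work"
EOF
chmod +x /tmp/gitea-backup.sh
```

A daily cron entry then just runs the script; stopping the container first avoids backing up a database mid-write.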
Restoring the backup is (or should be) as easy as downloading the bucket from Backblaze and simply docker-composing it up.
I'm looking for other ideas and advice that will help me sleep at night. Thanks!
 - https://rsnapshot.org/
 - https://serverfault.com
In fact you don't even need a server, just push/pull between your machines, a server just makes things more convenient.
Glusterfs Replicated 3 (1 arbiter/parity), put on top of zfs filesystems with checksumming.
Users/consuming services can Fuse mount over the network
Incremental backups of bricks (1 should be enough) to a mirror or spinning rust
Then all you need to worry about is offsite
If you really only need to solve for gitea, this is prob overkill but if you have more services putting stuff on disk it could be worth it. Works great for me.
On the security side, you can put them in Docker and apply all necessary measures there. PS: You can also run SSHd and git in a Docker container.