2) it's a pretty rare skill set with specialist tooling, and it doesn't really offer the itch-scratching that motivates most open source work
3) "correctly formulating the problem in a formal specification" is really hard, given the inability of most stack overflow posters to formulate questions rigorously
4) for almost all projects, we value mutability over eternity
a lot of software is deployed in the service of business goals. To serve that purpose it often isn't necessary for the software to be completely correct; if it is probably sort-of-correct, that's often good enough. Better to have something sort-of-correct deployed and running next week than something that costs more money and time to build, is a lot harder to hire people to maintain, relies on a formal analysis that is highly sensitive to changes in assumptions, and achieves a level of correctness that is unnecessary for the situation.
from this perspective, investing effort into formal methods will often give a poor return on investment compared to the alternative of hacking things together and running a few tests to catch obvious defects.
In some situations it seems to be worth investing in formal methods, e.g. relatively well-defined systems with very high costs of failure.
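To make 3) concrete: even "sort a list" is easy to under-specify. Here's a minimal Lean 4 sketch (it assumes Mathlib for List.Perm, and all the names are mine, not from any real database):

    import Mathlib

    -- "Sorted" = every adjacent pair is in order.
    def SortedL (xs : List Nat) : Prop := xs.Pairwise (· ≤ ·)

    -- An underspecified contract: it only asks for a sorted output.
    def WeakSpec (f : List Nat → List Nat) : Prop :=
      ∀ xs, SortedL (f xs)

    -- The spec you actually want: sorted AND a permutation of the input.
    def FullSpec (f : List Nat → List Nat) : Prop :=
      ∀ xs, SortedL (f xs) ∧ (f xs).Perm xs

    -- The weak spec is satisfied by throwing the input away entirely.
    theorem nil_meets_weak : WeakSpec (fun _ => []) :=
      fun _ => List.Pairwise.nil

Forgetting the permutation clause is exactly the kind of omission a loosely worded question would bake into the database.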
Problems like that would go away with something like this, no? (if it is possible and there isn't a fault in my argument)
Did you? It seems pretty obvious to me that Rockstar knew that this was slow as hell, and they were showing ads during that time window. Business interests are not software engineering interests, so I don't think problems like that would go away with something like this, no.
So all this will really do is put more load on their servers, now that the bottleneck has moved off of your hardware and back onto theirs.
Excerpt: "The incredible progress here is evident in the IronFleet project: by using advanced SMT solvers and a cutting-edge verification language, Microsoft was able to write 5,000 lines of verified Dafny code in only 3.7 person-years! That’s a blazing-fast rate of four whole lines a day."
5) Right now it's probably harder (in most cases) to write a correct formal spec than to actually implement that spec. Otherwise formal specs would be much more popular in general.
6) The proportion of software which is actually algorithmically interesting these days is tiny. I expect something like 99%-99.999% of lines of code are basically stuff that's been done a thousand times before.
I guess what that means is that, right now, when we're writing software, we're really only ever solving a small part of the whole problem. A lot of the problem space isn't even considered. (I'm saying this completely neutrally, btw; I don't think it's either good or bad. It's a trade-off, and probably what allows us to have all the software we have now.)
- Correctness proofs of even simple algorithms can be research-grade problems that take several person-years of effort from (hard-to-find) experts, so you're not likely to get casual Stack Overflow-style contributions, because of the order-of-magnitude difference in time and expertise required (see the sketch after this list).
- The time required to verify correct software will come at a major cost to other factors like features and speed. I don't think 100% correctness is a driving factor for many domains like web servers, browsers and IDEs, so your database of correct software isn't going to be widely used until it catches up on other factors.
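To give a feel for the scale problem in the first point: below is what a machine-checked proof of a textbook triviality looks like in Lean 4 (re-proving a fact the standard library already ships, just to show the flavor). Scaling this up to a real algorithm, without a lemma library that happens to fit your definitions, is where the person-years go.

    -- Hand-proving a library fact; `omega` discharges the arithmetic.
    theorem length_append' (xs ys : List Nat) :
        (xs ++ ys).length = xs.length + ys.length := by
      induction xs with
      | nil => simp          -- [] ++ ys = ys, and 0 + n = n
      | cons a t ih =>
        simp only [List.cons_append, List.length_cons, ih]
        omega                -- t.length + ys.length + 1 = t.length + 1 + ys.length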
Could you elaborate on the last (now first, after edit) point you made? Sorry if the question is naive; I don't know much about this topic. Why can't you do that?
Similarly with theorem provers: on top of just the syntax, they each have their own definitions for equality, data structures, etc. that don't have simple mappings, as well as differences in the basic proof axioms all proofs must be built on. Neither specifications nor their proofs will map easily between theorem provers, which you'll have to reconcile somehow.
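As a small illustration, even within a single prover there are distinct notions of "equal"; Lean 4 shown here, and Coq, Isabelle and Agda each carve this space up differently:

    -- Definitional equality: holds by pure computation.
    example : 2 + 2 = 4 := rfl

    -- Propositional equality: a proof object you manipulate explicitly.
    example (a b : Nat) (h : a = b) : b = a := h.symm

    -- Function extensionality is NOT computation: Lean derives `funext`
    -- from its quotient types, while in Coq it's an optional axiom.
    example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := funext h

A spec or proof that leans on one system's choices here won't transliterate cleanly into another's.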
In all seriousness, the idea is very interesting.
I haven’t given this much thought, and maybe I don’t fully understand the proposed idea, but how would you deal with improvements/advancements to already-solved things? Like when the Fenwick tree was invented.
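(For concreteness: a Fenwick tree answers the same prefix-sum queries as a precomputed prefix array, but a point update costs O(log n) instead of O(n). A rough Python sketch, not a candidate database entry:)

    class FenwickTree:
        """Prefix sums with O(log n) point updates (1-indexed internally)."""

        def __init__(self, n: int):
            self.tree = [0] * (n + 1)

        def update(self, i: int, delta: int) -> None:
            """Add delta to element i (0-indexed)."""
            i += 1
            while i < len(self.tree):
                self.tree[i] += delta
                i += i & -i  # jump to the next node responsible for i

        def prefix_sum(self, i: int) -> int:
            """Sum of elements 0..i inclusive."""
            i += 1
            total = 0
            while i > 0:
                total += self.tree[i]
                i -= i & -i  # strip the lowest set bit
            return total

    ft = FenwickTree(8)
    ft.update(3, 5)
    ft.update(5, 2)
    assert ft.prefix_sum(4) == 5
    assert ft.prefix_sum(7) == 7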
Or how would you deal with competing solutions to a problem, like quicksort vs. mergesort?
That's why it would be awesome if you also added performance/memory characteristics to each solution you posted - then you could easily see which one is better for your situation.
Actually, even better - the memory/performance constraints could/should arguably be part of the problem formulation in the first place.
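A sketch of what that could look like as data; every field name here is invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class ProblemSpec:
        """Hypothetical entry in the proposed database."""
        statement: str            # the formal specification itself
        time_bound: str           # e.g. "O(n log n) comparisons, worst case"
        space_bound: str          # e.g. "O(1) auxiliary memory"
        stability_required: bool  # must equal keys keep their order?

    # Under these constraints quicksort qualifies and mergesort doesn't...
    in_place_sort = ProblemSpec(
        statement="output is a sorted permutation of the input",
        time_bound="O(n log n) expected",
        space_bound="O(log n) auxiliary",
        stability_required=False,
    )

    # ...and under these it's the other way around.
    stable_sort = ProblemSpec(
        statement="output is a sorted permutation of the input",
        time_bound="O(n log n) worst case",
        space_bound="O(n) auxiliary",
        stability_required=True,
    )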
It's very hard to get everyone to agree on how code plus mathematics should be written. There's a lot of fragmentation, and ongoing attempts to unify.