I’m probably not the only one who thinks that rewrites in Rust may generally be a good idea, but that Rust’s compile times are unacceptable. I know there are efforts to improve that, but Rust’s compile times are so abysmally slow that it really affects me as a Gentoo user. Another point is that Rust is not standardized and is a one-implementation language, which is also what discourages me from looking deeper into Haskell and others. I’m not saying that I generally reject single-implementation languages, as that would disregard any new language, but a language implementation should be possible without too much work (say within two man-months). Neither Haskell nor Rust satisfies this condition, and contraptions like Cargo make it even worse, because implementing Rust would more or less also mean implementing the entire Cargo ecosystem.
In contrast, C compiles really fast, is an industry standard, and has dozens of implementations. Another thing we should note is that the original C codebase is a mature one. While Rust’s great ownership and type system may save you from general memory-handling and type errors, it won’t save you from intrinsic logic errors. However, I don’t weigh that point very heavily, because it is an argument that could be made against any new codebase.
What really matters to me is the increase in the diversity of git implementations, which is a really good thing.
but a language implementation should be possible without too much work (say within two man-months)
Why is that a requirement? I don’t understand your position. Should we not have complex, interesting, or experimental languages only because a single person couldn’t write an implementation by himself in two months? Should we discard all the advances Rust and Haskell provide because they require a complex compiler?
I’m not saying that we should discard those advances, because there is no mutual exclusion. I’m pretty certain one could work up a pure functional programming language based on linear type theory that provides the same benefits and is possible to implement in a reasonable amount of time.
A good comparison is the web: 10-15 years ago, it was possible for a person to implement a basic web browser in a reasonable amount of time. Nowadays, it is impossible to follow all the new web standards, and you need an army of developers to keep up, which is why more and more groups are giving up on this endeavour (look at Opera and Microsoft as the most recent examples). We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation domain. I’m glad Mozilla is holding up there, but who knows for how long?
The thing is the following: when you choose a language as a developer, you “invest” in its ecosystem, and if that ecosystem for some reason breaks apart, dies, or changes in a direction you don’t agree with, you are forced to put additional work in.
This additional work can be a lot if you’re talking about proprietary ecosystems; it more or less means you are forced to rewrite your programs. Rust satisfies the necessary condition of a qualified ecosystem, because it’s open source, but open-source systems can also shut you out when the ABI/API isn’t stable. The danger is especially acute with the “loose” crate system, which may provide high flexibility, but also means a lot of technical debt, because you have to continually push your code to the newest specs to be able to use your dependencies. However, this is again a question of the ecosystem, and I’d prefer to refer only to the Rust compiler here.
Anyway, I think the Rust community needs to address this and work up a standard for the Rust language. For my part, I won’t be investing my time into this ecosystem until this is addressed in some way. Anything else is just building a castle on sand.
There is a good argument by Drew DeVault that it is impossible to reimplement a web browser for the modern web.
We are now in a state where almost 90% of browsers are based on Webkit, which turns the web into a one-implementation domain.
I know Blink was forked from Webkit, but all these years later, don’t you think it’s a little reductive to treat them as the same? If I’m not mistaken, Blink sends nothing upstream to Webkit, and by now the codebases are fairly divergent.
I feel ya - on OpenBSD compile times are orders of magnitude slower than on Linux! For example ncspot takes ~2 minutes to build on Linux and 37 minutes on OpenBSD (with most features disabled)!!
For reals? This is terrifying.

Excuse my ignorance – mind pointing me to some kind of article/document explaining why this is the case?
There isn’t one. People (semarie@, who maintains the Rust port on OpenBSD, being one of them) have looked into it with things like the RUSTC_BOOTSTRAP=1 and RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' env vars. These point to most of the time being spent in LLVM, but no one has tracked down the issue fully, AFAIK.
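If you want to poke at it yourself, the invocation looks something like this (a rough sketch; the exact cargo command and the log bookkeeping are my assumption, the env vars are the ones mentioned above):

    # -Z flags are normally nightly-only; RUSTC_BOOTSTRAP=1 unlocks them
    # on a stable toolchain such as the one in ports.
    $ env RUSTC_BOOTSTRAP=1 \
          RUSTFLAGS='-Ztime-passes -Ztime-llvm-passes' \
          cargo build 2>&1 | tee timings.log
    # The per-pass timings in the log show how much wall time goes to LLVM.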
Another point is that Rust is not standardized and is a one-implementation language
This is something that gives me pause when considering Rust. If the core Rust team does something that makes it impossible for me to continue using Rust (e.g. changes licenses to something incompatible with what I’m using it for), I don’t have anywhere to go and at best am stuck on an older version.
One of the solutions to the above problem is a fork, but without a standard, the fork and the original can vary and no one is “right” and I lose the ability to write code portable between the two versions.
Obviously, this isn’t a problem unique to Rust - most languages aren’t standardized and having a plethora of implementations can cause its own problems too - but the fact that there are large parts of Rust that are undefined and unstandardized (the ABI, the aliasing rules, etc) gives me pause from using it in mission-critical stuff.
(I’m still learning Rust and I’m planning on using it for my next big thing if I get good enough at it in time, though given the time constraints it’s looking like I’ll be using C because my Rust won’t be good enough yet.)
The fact that the trademark is still owned by the Mozilla Foundation and not the to-be-created Rust Foundation is also likely chilling any attempts at an independent reimplementation.
As much as I understand your point about the slowness of Rust’s compile times, I think it is only a matter of time before we see them shrink.
On the standardization point, Haskell does have a standard: Haskell 2010. GHC is the only implementation now, but it has a lot of compiler extensions that are not in the standard. The new Haskell 2020 standard is on its way. Implementing standard Haskell (without all the GHC add-ons) is doable, but the language will be way simpler and with flaws.
The thing is, as you said: you can’t compile a lot of code by implementing Haskell 2010 (or 2020, for that matter) if you don’t also ship the “proprietary” extensions.
It is the same when you abuse GCC or Clang extensions in your codebase. The main difference with Haskell is that you almost exclusively have GHC available, so the community has put its efforts into it and created an ecosystem of extensions.
As for C, you could write standard-compliant code that a hypothetical other compiler may compile. I am pretty sure that if we had had only one main C compiler for as long as Haskell has had GHC, the situation would have been similar: lots of language extensions outside the standard, existing solely in that compiler.
But this is exactly the case: there’s lots and lots of code out there that uses GNU extensions (from gcc). For a very long time, gcc was the only real compiler around, and that led to this problem. Some extensions are so entrenched that clang had no choice but to implement them.
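To make that concrete, here is a toy example of one such extension, a statement expression (the file name and contents are made up):

    $ cat > ext.c <<'EOF'
    /* A statement expression: a GNU C extension, not ISO C. */
    int main(void) { return ({ int t = 0; t; }); }
    EOF
    $ gcc -c ext.c     # accepted: gcc compiles the GNU dialect by default
    $ clang -c ext.c   # accepted: clang adopted the extension for compatibility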
But did those extensions ever reach the standard? I ask candidly, as I don’t know much about the evolution of C, its compilers, and the standard.
There’s a list by GNU of all the extensions. I really hate that you can’t enable a warning flag (a hypothetical -Wextensions, say) that warns you about using GNU extensions.
Still, it is not as bad as bashisms (i.e. extensions in GNU bash over POSIX sh): many scripts declare a /bin/sh shebang at the top but are full of bashisms, because their authors incidentally have bash as their default shell. Most bashisms are just stupid, many people don’t know they are using them, and there’s no warning you can enable. Another bad offender is the set of GNU extensions to the POSIX core utilities, especially GNU make, where 99% of all makefiles are actually GNU-only and don’t work with POSIX make.
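A tiny made-up script that shows the pattern:

    #!/bin/sh
    # Bashism: '==' inside [ ] is not POSIX. bash accepts it, but a strict
    # /bin/sh (dash, for example) rejects it with an "unexpected operator" error.
    if [ "$1" == "foo" ]; then    # POSIX spelling: [ "$1" = "foo" ]
        echo "matched"
    fi

(The make equivalent would be GNU pattern rules like %.o: %.c, where POSIX make only knows .c.o suffix rules.)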
In general, this is one major reason I dislike GNU: they see themselves as the one and only choice for software (demanding that people call Linux “GNU/Linux”) while introducing tons of extensions that chain their users to their ecosystem.
Here are some of the GNU C extensions that ended up in the C standard.

If I remember correctly, 10 years ago hugs was still working, and maybe even nhc :)
Yep :) and yhc never landed after forking nhc. UHC and JHC seem dead. My main point is that the existence of a standard does not ensure a multiplicity of implementations, or portability between compilers/interpreters/JITs/etc. That is a simplification, and it really depends on the community around the language. Look at Common Lisp, with a set-in-stone standard and a lot of compilers, where you can easily pinpoint what is going to work or not. Or Scheme, with a fairly small standard, where you will quickly lose the ability to swap between interpreters if you rely on some specific features.
In the end, everyone has their own checklist of what a programming language must or must not provide before they will learn and use it.
I truly do not understand why people get upset about re-implementations of existing software. Why is it a problem for you that someone else wrote some software that works with or similarly to some existing software, in this particular language?
In the worst case, it’s like a fork: it can split the community and result in two programs, neither of which is as good as the original could have been without being sapped of resources. I think there’s also a feeling of, “with all the cool things waiting to be invented, why did you put your effort into instead reinventing a wheel that wasn’t broken?”
On the other hand, rewriting a venerable tool in a more sustainable way can bring big benefits: I think a lot of people were skeptical of Clang at first, but it’s been a big boon to IDEs (esp Xcode) and accelerated progress in C-family compilation.
Probably because it indicates to them that the person viewed something in the original as deficient. When they invested heavily in that something, they feel like they have been slighted. I think it’s a bit of a lizard brain thing that we all have to some extent.
Rewrite fatigue? (similar and somewhat related to js fatigue)
There was a trend not so long ago to rewrite X in Y for Z reason(s).
I think people just got tired of it.

See also: game of trees.
I don’t understand what the point of this project is. Is it just a git porcelain but BSD licensed and without colors? Everything on the examples page I regularly do with git. I wonder if the FAQ covers why I should use this software.
What’s the point of all this? Why not just use Git?
If you are wondering why Got even exists, you can just ignore it.
Sounds like a great idea.
It’s a complete reimplementation of git, with privilege separation and sandboxing of all the components in a way that will protect against confused deputy attacks of the sort that we keep seeing in git, a completely different UI, and out of the box support for patch based workflows.
Thank you for providing some background on this project.
…in a way that will protect against confused deputy attacks of the sort that we keep seeing in git
Can you provide some context for this? I did a cursory search for this sort of thing, but I couldn’t find anything on git being vulnerable to this type of attack.
a completely different UI
I touched on this in other comments, but I suppose this is a goal and not necessarily the current state of the project.
and out of the box support for patch based workflows.
IME git has excellent support for patch-based workflows. It wasn’t until I started using this style of workflow that I understood why git made many of its decisions.
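For context, this is the tooling git already ships for that style of work (a sketch; the patch file names are whatever format-patch happens to generate):

    $ git format-patch -1 HEAD         # export the last commit as an mbox-style patch
    $ git send-email 0001-*.patch      # mail it to a list (once send-email is configured)
    $ git am 0001-some-change.patch    # apply a received patch, preserving authorship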
Except you didn’t ignore it, you decided it was worth your time to read about it, post an annoyed comment about it on Lobste.rs, and then declare your intention to ignore it as if winning a moral victory. Why? Does it hurt you in some way?
Because as a person who’d genuinely like to have an answer to that question, all I got was a “go away, if you can’t tell why this is better we didn’t want you to use it anyway.”
The entire point is that git(1) is not a good end-user tool. It’s a thin layer over messy, raw internals. So make a better user-friendly tool that leverages the git on-disk format and capabilities but simplify the UX so there are significantly fewer footguns.
I suppose, but I don’t really see any UX improvements in their examples. For instance, histedit is just sugar for rebase -i, but without squash or reword (or a few others that I personally don’t use much). And do you really gain that much by renaming add to stage? I feel like the removal of color is a bigger usability loss than any advantage demonstrated in their examples.
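(For reference, squash and reword live in the rebase -i todo list; the commits below are invented:)

    $ git rebase -i HEAD~3
    # git opens a todo list like this in your editor:
    pick   1a2b3c4 Add frobnicator
    squash 5d6e7f8 Fix frobnicator typo   # meld into the previous commit
    reword 9a8b7c6 Update docs            # keep the commit, edit its message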
There’s a lot that can be done to fix the broken and inconsistent UX

That’s very fair, though I do have a couple of nits with these examples: git remote also lists all the remotes, it just doesn’t show their URLs. git branch returns a list of all local branches, and 90% of the time this contains what I want. Eh, just do git status. Yeah, this could use some improvement :) Though there is a bit of sense in that git remote has subcommands for everything else.

Defending any of these examples just reinforces that the open source community has no sense of UX design.
Yes, you’ve pointed out a few workarounds, but they’re not good. git status shows you a lot more info than just the current branch, which you know. You’re just telling people to look elsewhere for the information and making them mentally sort through the mess on the screen to find it.
Imagine trying to explain all of this to someone who is a competent developer but has never been exposed to git before. Good luck!
Yes, you’ve pointed out a few workarounds, but they’re not good.
These are not workarounds; they are simpler versions of examples you gave which usually have the same or slightly different information.
git status shows you a lot more info than just the current branch, which you know.
I mean, it’s the first line. No one uses the method you suggested unless they’re writing a script.
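(We can’t see which exact command was suggested upthread, but the usual script-friendly variants would be something like:)

    $ git symbolic-ref --short HEAD    # current branch, suitable for scripts
    $ git branch --show-current        # same thing, git 2.22 and newer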
You can certainly say “wouldn’t it be nice if git remote showed the repo urls by default?” But you shouldn’t try to pretend that you need to add an option to list remotes at all.
I am never interested in the short name of remotes, only the URL. Without displaying the URL, I don’t know whether I can trust that the remote is set up correctly.
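For what it’s worth, git does put that one flag away (the URLs below are made up):

    $ git remote -v
    origin    https://example.com/proj.git (fetch)
    origin    https://example.com/proj.git (push)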
It’s a permissively licensed git implementation that lends itself well to patch-based workflows. It’s intended to replace CVS for OpenBSD, fixing the problems the OpenBSD people have with git.
I think Git is an excellent candidate for a Rust rewrite. There’s nothing about it that really needs to be in C or benefits from being in C; it’s just one of the many userspace command-line tools that were written in C out of tradition. Rust is a better language than C and results in more maintainable and less bug-prone code. There’s no reason not to use Rust for this kind of program (or at least to avoid C).
Portability? Also, bootstrapping Rust is kind of hard :| Both issues are important for this kind of foundational software.
But I agree that there are many reasons to use Rust for this kind of program.

How portable is Git? How many C compilers can compile it, and for how many systems?

A quick search shows Git is available on AIX and SPARC systems; you can probably find more with better searches.

Git was created in 2005, Rust in 2010…
That said, it’s a good idea to have multiple implementations of the same functionality. It wouldn’t surprise me if GitHub was exploring alternatives.
I’d like to expand on my point in the first sentence - what would have been a viable language for Linus to use to implement Git in 2005 (other than C/C++)? Git was designed as a replacement for BitKeeper to handle the Linux source tree. Using C in that case was an easy choice.
Python was used to create Mercurial in 2005.

And Haskell was used to create darcs in 2003.

That’s a subjective opinion. Rust has some mechanisms that protect against certain classes of programming errors, but it has many, many other problems.

Even without those mechanisms, it would still be a better language.

By what metric?

By every one of them. Being better than C is hardly rocket science.

Isn’t git written in Perl? </joke> … sorta

I would suggest it was written in C as per the opinions of its author rather than out of tradition.

Rust did not exist at the time.

Sure, but C++ did. And we already know how Linus feels about C++:
C++ is a horrible language. It’s made more horrible by the fact that a lot
of substandard programmers use it, to the point where it’s much much
easier to generate total and utter crap with it. Quite frankly, even if
the choice of C were to do nothing but keep the C++ programmers out,
that in itself would be a huge reason to use C.
In other words: the choice of C is the only sane choice. I know Miles
Bader jokingly said “to piss you off”, but it’s actually true. I’ve come
to the conclusion that any programmer that would prefer the project to be
in C++ over C is likely a programmer that I really would prefer to piss
off, so that he doesn’t come and screw up any project I’m involved with.
C++ leads to really really bad design choices. You invariably start using
the “nice” library features of the language like STL and Boost and other
total and utter crap, that may “help” you program, but causes:
- infinite amounts of pain when they don’t work (and anybody who tells me
  that STL and especially Boost are stable and portable is just so full
  of BS that it’s not even funny)
- inefficient abstracted programming models where two years down the road
  you notice that some abstraction wasn’t very efficient, but now all
  your code depends on all the nice object models around it, and you
  cannot fix it without rewriting your app.
In other words, the only way to do good, efficient, and system-level and
portable C++ ends up to limit yourself to all the things that are
basically available in C. And limiting your project to C means that people
don’t screw that up, and also means that you get a lot of programmers that
do actually understand low-level issues and don’t screw things up with any
idiotic “object model” crap.
So I’m sorry, but for something like git, where efficiency was a primary
objective, the “advantages” of C++ is just a huge mistake. The fact that
we also piss off people who cannot see that is just a big additional
advantage.
If you want a VCS that is written in C++, go play with Monotone. Really.
They use a “real database”. They use “nice object-oriented libraries”.
They use “nice C++ abstractions”. And quite frankly, as a result of all
these design decisions that sound so appealing to some CS people, the end
result is a horrible and unmaintainable mess.
But I’m sure you’d like it more than git.
and
It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.
The fact is, C++ compilers are not trustworthy. They were even worse in
1992, but some fundamental facts haven’t changed:
- the whole C++ exception handling thing is fundamentally broken. It’s
  especially broken for kernels.
- any compiler or language that likes to hide things like memory
  allocations behind your back just isn’t a good choice for a kernel.
- you can write object-oriented code (useful for filesystems etc) in C,
  without the crap that is C++.
In general, I’d say that anybody who designs his kernel modules for C++ is
either
(a) looking for problems
(b) a C++ bigot that can’t see what he is writing is really just C anyway
(c) was given an assignment in CS class to do so.
Man, I love it when the #RIIR (“Rewrite It In Rust”) ~~virus~~ mania spreads like wildfire :)