I use the command line.
“glo” means “git log --oneline”, “gloa” adds “--decorate --all --graph”. “git log -S” can be very useful sometimes, as can “git {tag,branch} --contains”.
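For anyone wanting the same setup, these shortcuts might look like the following in a shell rc file (the names “glo”/“gloa” come from the comment; defining them as shell aliases rather than git aliases is my assumption):

```shell
# Possible shell aliases for the shortcuts described above.
alias glo='git log --oneline'
alias gloa='git log --oneline --decorate --all --graph'

# Related one-offs mentioned above:
#   git log -S 'needle'          # commits whose diffs add or remove 'needle'
#   git branch --contains <sha>  # branches that contain a given commit
#   git tag --contains <sha>     # tags that contain a given commit
```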
I otherwise use git blame and start from there. I made an alias for blaming specific lines given references of the form “path/to/file.ext:line”, which is the format most debugging tools emit (compilers, stack traces, diffs, etc.). The alias consumes this and feeds it to “git blame -L${line},+1 $ref -- $file”. This way I can very quickly see the ref that last touched a specific line, then the previous ref for the same line, and so on.
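A minimal sketch of such an alias as a shell function; the name “blameat” and the argument handling are my guesses, but it consumes the same “file:line” format described above:

```shell
# blameat: blame a single line given a "path/to/file.ext:LINE" reference,
# optionally at a given ref (defaults to HEAD). Name and interface assumed.
blameat() {
    local file="${1%:*}"    # everything before the last ':'
    local line="${1##*:}"   # everything after the last ':'
    local ref="${2:-HEAD}"
    git blame -L "${line},+1" "$ref" -- "$file"
}
```

Usage: `blameat src/main.c:42`, then `blameat src/main.c:42 <blamed-sha>^` to walk back through that line’s history one commit at a time.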
I don’t generally need to see a whole file in a specific version, I find the commits sufficient. If I really need to load a specific version, I will checkout the ref.
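As a lighter alternative to checking out the ref, git can also print a single file as it existed at a given revision. A small self-contained demo in a throwaway repo (file name and messages are made up):

```shell
# View an old version of a file without checking anything out.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo "v1" > notes.txt && git add notes.txt
git -c user.name=t -c user.email=t@t commit -qm "first"
echo "v2" > notes.txt
git -c user.name=t -c user.email=t@t commit -qam "second"
git show HEAD~1:notes.txt   # prints "v1"
```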
The author wants to use the tool for tasks it is not designed for, or does not know how to use the tool correctly.
Passing on “Distributed version control sucks for distributing software”, which is just nonsense.
Distributed version control sucks for distributed development
The problem shows up when I’m sitting in my hotel room and need to re-create the local repository over the poor connection. Now I’m not just downloading the one revision I want to work on; I’m downloading every revision ever.
Nothing forces you to do that in git: just download the one revision you want to work on. This point also reinforces the idea that the author is certainly not working in the industry (academic setting) and seems to have no idea of the more advanced features needed to develop complex systems collaboratively (with several products / features, and stable branches into which new features have to be backported).
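Concretely, a shallow single-branch clone downloads roughly one revision’s worth of data instead of the whole history. The remote URL and branch below are placeholders, followed by a self-contained local demo:

```shell
# Over the network, fetch only the latest revision of one branch:
#   git clone --depth 1 --single-branch --branch main https://example.com/repo.git
# History can be deepened later, on demand:
#   git fetch --deepen 50

# Self-contained demo against a local repository with two commits:
src=$(mktemp -d) && git -C "$src" init -q
git -C "$src" -c user.name=t -c user.email=t@t commit -q --allow-empty -m c1
git -C "$src" -c user.name=t -c user.email=t@t commit -q --allow-empty -m c2
dst=$(mktemp -d)
git clone -q --depth 1 "file://$src" "$dst"
git -C "$dst" log --oneline   # shows only the tip commit
```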
Distributed version control sucks for long-lived projects
The history continues to grow; a single version doesn’t. This may be a smooth progression rather than a sudden state change: over time it becomes more the case that the history grows faster than the current version. And so a system that forces every copy to contain all of history will eventually, inevitably, have bigger copies than a system that only stores current versions.
And then the author rants about DVCS sucking for archiving, and sees absolutely no contradiction between those two positions. If you throw away your history because you only keep current versions, you are not archiving anything. It becomes impossible to replicate a past version of a system, whether for historical purposes, for exploration, or just to help a user stuck with an old release.
Distributed version control sucks for archiving
Use a database.
The author is just closed off in his own environment and has a poor grasp of the tools he is using on top of it. This rant is useless, and his peers were right to shut him down.
I didn’t like it either, but you’re strawmanning one criticism. I think OP’s claim is that a centralized repo is easier to archive because everything stays on a primary copy, with people subsetting into secondary copies. So it’s clear what to back up. There’s no contradiction there.
This thing is barely readable. What the fuck?
If you build a collaborative website, show the threads, the authors (BEFORE a new paragraph, not after), who added what and when.
This reads like a dump of opinions.
That’s pretty ironic given the subject.
It’s worth pointing out that this is the original wiki. It’s recently been rewritten in JS, but they have stuck with their old conventions through many waves of stylistic changes that have influenced UX/web design. I’m sure they’d consider a pull request; the source code is on GitHub.
Yeah, c2 has always had a conversational rather than authoritative tone, with few formatting conventions, so it can read as a stream of consciousness at times.
Yeah, the original was easier to read. It just had a plain, one-point-after-another style. I loved reading all the debates on there about things like LISP.
Mmmmh, an anonymous domain registration, an unknown “CTS” security research firm publishing only one whitepaper for all vulnerabilities. Whitepaper published on a secondary website “safefirmware.com”, that is otherwise broken.
No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.
This smells like FUD. The PSP is probably broken and vulnerable, yes. But this crap seems aimed only at selling security services.
My phrasing was a bit misleading, but the whole “exploit being published, peer review, responsible disclosure” list was what I was getting at: ways to verify the findings. These publications have to be transparent, reproducible and verified by third parties to be taken seriously.
No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.
This is bullshit. Here’s peer review.
I’m astounded at just how strong the backlash against this is, and the backlash reeks of damage control propaganda.
AMD PSP is a hardware backdoor. Intel ME is a hardware backdoor. These things shouldn’t exist in the first place, and I wouldn’t put it past AMD and Intel to spend $$ sending armies of trolls trying to cover up the severity of what they’ve done.
Of course AMD PSP shouldn’t exist in the first place.
But the backlash against this is simply due to “it” being a ridiculous hit-job. I don’t care about damage to AMD.
This is bullshit. Here’s peer review.
Nice, they did not link it on their website. My first guess will always be that there is none unless shown otherwise.
Seems to be the consensus about this site on Reddit, HN, etc. Someone’s either trying to make a name for themselves or Intel paid someone who paid someone who paid someone who is good at marketing.
This guy is absolutely terrible at communicating his ideas.
I agree that it would be nice to have rich content formats. “Hypermedia”, as he puts it, might be OK: it’s a medium that spans several dimensions of expression, and can thus be read in several ways.
But his example is terrible, the execution is poor, his drawings are a joke, and even the argumentation is full of convolutions. It is obvious to me that this guy has no idea what he is talking about. He has no idea of the complexity of implementing what he describes (properly, I mean, not as a PoC).
When we look at history and what finally happened, I contend that economic or political factors could sway computer science one way or another, but only for trivial stuff, like choosing one encoding over another because, for example, designing an ASIC capable of doing it in hardware cost less, even if the other encoding is better in pretty much every other way. Looking at the big picture, what won was simplicity: the least (overall) effort.
This is, I think, what he means when he says that current content is in the format chosen by computer scientists. And this is true, because all technology evolves this way.
His ideas are infeasible, IMO. I would love to receive an article one day that I could open in Mathematica to look at the graphs and play with the data, read the content in another interface, navigate the cited papers, and get more context about the authors, for example. But all of this is already possible. There is no need for a data structure specifically designed for it; that is an implementation detail, an abstraction leak. ZigZag is just a bad idea. It should not exist (and I can’t help but be extremely sceptical of the number of trademarks this guy uses, as if his ideas were worth anything).
What I actually find weird is speaking about hypertext as if it were some kind of invention. The concept is so trivial and self-evident that no one invented it! What was invented was a proper grammar to describe the object and protocols to communicate it. But the concept is trivially simple. The same goes for hypermedia, except there the implementation is infeasible (in a standardized, content-agnostic way).
Fair.
This isn’t really accurate. Most people today don’t understand what hypertext is. (Just look at the comments on this thread!)
The idea of navigable connections between ideas through mechanism is trivial (assuming you’re familiar with the western cyclopedic tradition) & many people independently invented similar systems, but hypertext has a very specific set of rules that interact in a fairly nuanced way. (The web implements approximately one-half of one of these rules, which is the source of a lot of confusion.)
He has a pretty clear idea of the complexity of implementing what he’s talking about, because he’s been in close communication with different teams of serious professional developers actually implementing versions of it for many years.
It’s easier to implement a proper hypertext system than a modern web browser; but where browsers have hundreds of developers, all of the implementations of Xanadu ideas since the mid-80s have (as far as I am aware) had teams of at most three people.
They’ve been implemented. Implementations are being used internally.
The core ideas are pretty straightforward to implement. (I’ve written open source implementations of them in my free time, after leaving the project.)
The primary difficulty in implementing these things is poor public-facing documentation (because Ted wrote all the public-facing documentation, and he doesn’t separate technical ideas from rants & marketing material). This is why I wrote my own documentation.
Once the concepts are understood, most of them can be implemented in an hour or two. (I know, because I did exactly that many times.)
Take a look at any W3C standard and tell me, with a straight face, that simplicity won.
What won was organic growth. In other words: instead of thinking carefully and seriously about how things should be designed, they went with their gut and used the design that came to mind most quickly. This gives them an edge in terms of communication: a stupid idea is much easier to communicate than a simple idea, because it will be as obvious to the person who hears it as it is to the person who says it. However, it’s a nightmare when it comes to maintainability, because poorly-thought-out designs are inflexible.
In terms of the actual number of elements necessary & the actual amount of text required to explain it, hypertext is simpler than webtech. The effort in a hypertext or translit system is the fifteen minutes you spend thinking hard about how all the pieces fit together, while the effort in webtech is trying to figure out how to make a pile of mismatched pieces do something that shouldn’t be done in the first place a decade after you learned to use all of them.