1. 2

    We’ve done some work on prezto to improve performance as well. We ended up going with one of the tradeoffs mentioned with completion - only regenerate once a day.
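
    The usual shape of that tradeoff in a zshrc looks something like this (a sketch of the general pattern, not the exact prezto code): run the full compinit only when the dump file is more than a day old, and otherwise trust the cached dump.

      autoload -Uz compinit
      () {
        # Needs extended_glob for the (#q...) qualifier; N = no error if the
        # file is missing, mh+24 = modified more than 24 hours ago.
        setopt local_options extended_glob
        if [[ -n ${ZDOTDIR:-$HOME}/.zcompdump(#qN.mh+24) ]]; then
          compinit      # full run: rebuild and security-check the dump
        else
          compinit -C   # reuse the existing dump and skip the checks
        fi
      }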

    I’ve also got my own set of utils that I’m slowly porting from prezto (https://github.com/belak/zsh-utils). I’ve found that both OMZ and prezto tend to do a ton of things I don’t want (which obviously slows them down) - there are plenty of instances where I just want a more sane version of the default config.

    If you have suggestions or performance improvement ideas for either prezto or zsh-utils, definitely file an issue, submit a PR, or let me know. I’d be happy to take a look.

    1. 6

      Besides the negative points discussed above, Atom is effectively the same tool as Sublime Text except it runs slower.

      I disagree with that statement. Sublime Text is great, I love its speed, but it has a bunch of tiny awkward details that Atom doesn’t have, and Atom has some cool features that Sublime Text doesn’t.

      One of the things that bothers me most about ST is that it assumes I want to drag text that I’ve selected, which is false - I basically never want to drag text. This assumption means that I can’t select something and then re-select something inside that selection, because ST treats the drag as a text drag, not a new selection.

      Another thing I find Atom does great is splits - I love its approach. My favorite of any editor.

      Not that I use it a lot, but the Git support from Atom is great.

      I can’t figure out how to make ST’s subl command behave like I want. I want it to behave exactly like Atom’s:

      • subl . opens current dir in a window and nothing more
      • subl opens new window and nothing more
      • If a window is opened without a file, it just opens an empty window with no working dir

      Right now it also opens whatever old window I had open when I last closed ST, and I can’t find how to disable that.

      Also, to be fair, Atom has Teletype now. I haven’t used it, but it looks cool.

      I probably missed something, but I think I’ve done enough to show it’s not “the same”.

      1. 2

        The ‘drag selected text’ behavior continually confounds me. I can’t imagine anyone finding it useful. The other thing is Eclipse and other IDEs letting you drag/drop arbitrary objects in project/navigator views - “oops, where’d that folder go?” It’s maddening.

        1. 3

          One always cuts and pastes, right? Who drags around a block of text..

          1. 1

            Have you tried going to preferences -> settings and adding/changing "drag_text" to false?

          2. 2

            The dragging thing is probably OS-specific. I don’t see it on my Ubuntu.

            1. 1

              It looks like there’s an undocumented option remember_open_files in ST. That combined with alias subl="subl -n" in your shell should get pretty close to the behavior you’re looking for.
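
              Concretely, that would be something like the following (assuming remember_open_files behaves as described; hot_exit is the related documented setting):

                # Preferences -> Settings (user), as JSON:
                #   "hot_exit": false,
                #   "remember_open_files": false
                # plus, in your shell startup file:
                alias subl="subl -n"   # -n: always open a new window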

            1. 23

              GitHub URLs are pretty badly designed.

              For example, /contact is their contact page, and /contactt is a user profile.

              Apparently, there’s a hardcoded list of “reserved words” in the code, and when someone adds a new feature, they add the word/path segment there and check that it’s not taken by a user.

              So it could perhaps be the case that they’re adding some feature related to malware?

              1. 13

                That could very well be the case – and I’d be totally fine with that. I understand being coded into a corner, and wanting to fix things for the greater good at the expense of a few users.

                I just can’t figure out why, for the sake of “privacy and security”, they don’t want to tell me.

                1. 16

                  I think this is absurd behavior on GitHub’s part, and you’re right to be upset by it.

                  Since you do seem curious, I have a guess why they’re being so evasive, and it’s pretty simple: They’re a large organization. The person you’re talking to would probably need to get approval from both legal and PR teams to tell you about their product plan before it’s launched. I have no information on how busy GitHub’s lawyers and PR people are, but I would expect an approval like that to take a few weeks. Based on what they told you about the timeframe, it sounds like they want to launch their feature sooner than that.

                  What I’d really like to know is whether this is a one-off, or whether they’ve done it to other people before. It seems like their URL scheme will require it pretty frequently…

                  1. 7

                    The person you’re talking to would probably need to get approval from both legal and PR teams to tell you about their product plan before it’s launched.

                    Which is why I didn’t single out the support representative that contacted me; they clearly were not in the decision process for any of this, and I don’t want to cause them any undue grief/trouble past my first email reply asking for clarification.

                    To be clear: I don’t really care about the malware username, other than it’s a pretty cool name. I’m more interested in the reason behind the forced rename.

                    Lots of people (read: salty News of Hacker commenters) say it’s obvious (wanting to reserve the /malware top level URL) and call me dumb for even asking, but no one has given me any evidence other than theories and suppositions. Which is great! I love thinking and hypothesizing.

                    1. 5

                      I don’t have any documented evidence other than anecdotal, but when I worked at a similar company with an almost identical URL structure this was one of the hardest parts of launching a new top level feature. It turns out recognizable words make for good usernames… so it’s almost impossible to find one that’s still available when working on a new feature. The choice ends up being between picking a horrible URL and displacing one user to get a good one.

                      It’s also worth noting that GitHub has a habit of being very secretive about what they’re working on - it’s almost impossible to get information about known bugs which have been reported before, let alone information about a potential new feature.

                      I would be willing to bet that this is being done for something we’ll hear about in the next year or two.

                2. 11

                  We made a team that was just the unicode pi symbol and GitHub assigned us the url /team/team.

                  1. 4

                    That’s a great unicode hack.

                  2. 11

                    The curse of mounting user paths directly to /. When in doubt, always put a namespace route on it.

                    1. 6

                      That was my thought as well. I would imagine they want it as a landing page for some new feature or product.

                    1. 32

                      In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.

                      There’s another Lobsters thread right now about how distributions like Debian are obsolete. The idea being that people use stuff like npm now, instead of apt, because apt can’t keep up with modern software development.

                      Kubernetes’ official installer is some curl | sudo bash thing instead of providing any kind of package.

                      In the meantime I will keep using only FreeBSD/OpenBSD/RHEL packages and avoid all these nightmares. Sometimes the old ways are the right ways.

                      1. 7

                        “In the Hacker News thread about the new Go package manager people were angry about go, since the npm package manager was obviously superior. I can see the quality of that now.”

                        I think this misses the point. The relevant claim was that npm has a good general approach to packaging, not that npm is perfectly written. You can be solving the right problem, but writing terribly buggy code, and you can write bulletproof code that solves the wrong problem.

                        1. 5

                          npm has a good general approach to packaging

                          The thing is, their general approach isn’t good.

                          They only relatively recently decided locking down versions is the Correct Thing to Do. They then screwed this up more than once.

                          They only relatively recently decided that having a flattened module structure was a good idea (because presumably they never tested in production settings on Windows!).

                          They decided that letting people do weird things with their package registry is the Correct Thing to Do.

                          They took on VC funding without actually having a clear business plan (which is probably going to end in tears later, for the whole node community).

                          On and on and on…

                          1. 2

                            Go and the soon-to-be-official dep dependency management tool manage dependencies just fine.

                            The Go language has several compilers available. Traditional Linux distro packages together with gcc-go are also an acceptable solution.

                            1. 4

                              It seems the soon-to-be-official dep tool is going to be replaced by another approach (currently named vgo).

                            2. 1

                              I believe there’s a high correlation between the quality of the software and the quality of the solution. Others might disagree, but that’s been pretty accurate in my experience. I can’t say why, but I suspect it has to do with the same level of care put into both the implementation and in understanding the problem in the first place. I cannot prove any of this, this is just my heuristic.

                              1. 8

                                You’re not even responding to their argument.

                                1. 2

                                  There’s the npm registry/ecosystem and then there’s the npm cli tool. The npm registry/ecosystem can be used with clients other than the npm cli client, and when discussing npm in general people usually refer to the ecosystem rather than the specific implementation of the npm cli client.

                                  I think npm is good but I’m also skeptical about the npm cli tool. One doesn’t exclude the other. Good thing there’s yarn.

                                  1. 1

                                    I think you’re probably right that there is a correlation. But it would have to be an extremely strong correlation to justify what you’re saying.

                                    In addition, NPM isn’t the only package manager built on similar principles. Cargo takes heavy inspiration from NPM, and I haven’t heard about it having a history of show-stopping bugs. Perhaps I’ve missed the news.

                                2. 8

                                  The thing to keep in mind is that all of these were (hopefully) done with the best intentions. Pretty much all of these had a specific use case… there’s outrage, sure… but they all seem to have a reason for their tradeoffs.

                                  • People are angry about a proposed go package manager because it throws out a ton of the work that’s been done by the community over the past year… even though it’s fairly well thought out and aims to solve a lot of problems. It’s no secret that package management in go is lacking at best.
                                  • Distributions like Debian are outdated, at least for software dev, but their advantage is that they generally provide a rock solid base to build off of. I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.
                                  • While I don’t trust curl | sh, it is convenient… and it’s hard to argue that point. Providing packages should be better, but then you have to deal with bug reports where people didn’t install the package repositories correctly… and differences in builds between distros… and… and…

                                  It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place… there are plenty of good, solid options for development and we’re moving (however slowly) towards safer, more efficient build/dev environments.

                                  But maybe I’m just telling myself all this so I don’t go crazy… jury’s still out on that.

                                  1. 4

                                    Distributions like Debian are outdated, at least for software dev,

                                    That is the sentiment that seems to drive the programming-language-specific package managers. I think what is driving this is that software often has way too many unnecessary dependencies, which makes setting up the environment to build the software hard or time-consuming.

                                    I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                    Often it is possible, though, to install libraries in another location and point your software at them.

                                    It’s easy to look at the entire ecosystem and say “everything is terrible” but when you sit back, we’re still at a pretty good place…

                                    I’m not so sure. I foresee an environment where actually building software is a lost art. Where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                    I’m growing more disillusioned the more I read Hacker News and lobste.rs… Help me be happy. :)

                                    1. 1

                                      So like squeak/smalltalk images then? What’s old is new again I suppose.

                                      http://squeak.org

                                      1. 1

                                        I’m not so sure. I foresee an environment where actually building software is a lost art. Where people directly edit interpreted files in place inside a virtual machine image/flatpak/whatever because they no longer know how to build the software and set up the environment it needs. And then some language-specific package manager for distributing these images.

                                        You could say the same thing about Docker. I think package managers and tools like Docker are a net win for the community. They make it faster for experienced practitioners to set up environments and they make it easier for inexperienced ones as well. Sure, there is a lot you’ve gotta learn to use either responsibly. But I remember having to build redis every time I needed it because it wasn’t in Ubuntu’s official package repository when I started using it. And while I certainly appreciate that experience, I love that I can just install it with apt now.

                                      2. 2

                                        I don’t want to have to use a version of a python library from years ago because it’s the only version provided by the operating system.

                                        Speaking of Python specifically, it’s not a big problem there because everyone is expected to work within virtual environments and nobody runs pip install with sudo. And when libraries require building something binary, people do rely on system-provided stable toolchains (compilers and -dev packages for C libraries). And it all kinda works :-)
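
                                        For reference, the no-sudo workflow is tiny (paths here are just examples):

                                          python3 -m venv .venv             # per-project environment
                                          . .venv/bin/activate
                                          pip install -r requirements.txt   # lands in .venv, not /usr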

                                        1. 4

                                          I think virtual environments are a best practice that unfortunately isn’t followed everywhere. You definitely shouldn’t run pip install with sudo, but I know of a number of companies where part of their deployment is to build a VM image and sudo pip install the dependencies. However, it’s the same thing with npm. In theory you should just run as a normal user and have everything installed to node_modules, but this clearly isn’t the case, as shown by this issue.

                                          1. 5

                                            nobody runs pip install with sudo

                                            I’m pretty sure there are quite a few devs doing just that.

                                            1. 2

                                              Sure, I didn’t count :-) The important point is they have a viable option not to.

                                            2. 2

                                              npm works locally by default, without even doing anything to make a virtual environment. Bundler, Cargo, Stack etc. are similar.

                                              People just do sudo because Reasons™ :(
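
                                              The contrast, roughly (lodash is just an example package):

                                                npm install lodash          # lands in ./node_modules, no root needed
                                                sudo npm install -g lodash  # the habit that causes the trouble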

                                          2. 4

                                            It’s worth noting that many of the “curl | bash” installers actually add a package repository and then install the software package. They contain some glue code like automatic OS/distribution detection.
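
                                            A simplified sketch of that pattern (the URLs, key paths, and package name are placeholders):

                                              #!/bin/sh
                                              set -e
                                              . /etc/os-release    # the detection glue: sets $ID (ubuntu, fedora, ...)
                                              case "$ID" in
                                                debian|ubuntu)
                                                  curl -fsSL https://pkgs.example.com/key.gpg \
                                                    | gpg --dearmor -o /usr/share/keyrings/example.gpg
                                                  echo "deb [signed-by=/usr/share/keyrings/example.gpg]" \
                                                       "https://pkgs.example.com/apt stable main" \
                                                    > /etc/apt/sources.list.d/example.list
                                                  apt-get update && apt-get install -y example-tool
                                                  ;;
                                                fedora|rhel|centos)
                                                  curl -fsSL https://pkgs.example.com/example.repo \
                                                    -o /etc/yum.repos.d/example.repo
                                                  yum install -y example-tool
                                                  ;;
                                                *) echo "unsupported distro: $ID" >&2; exit 1 ;;
                                              esac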

                                            1. 2

                                              I’d never known true pain in software development until I tried to make my own .debs and .rpms. Consider that some of these newer packaging systems might have been built because Linux packaging is an ongoing tire fire.

                                              1. 3

                                                With fpm (https://github.com/jordansissel/fpm) it’s not that hard. But yes, using the Debian- or Red Hat-blessed way to package stuff and getting packages into the official repos is definitely painful.
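
                                                For example, packaging a directory as a .deb or .rpm is a single fpm invocation (name, version, and paths are placeholders):

                                                  fpm -s dir -t deb -n mytool -v 1.0.0 --prefix /usr/bin mytool
                                                  fpm -s dir -t rpm -n mytool -v 1.0.0 --prefix /usr/bin mytool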

                                                1. 1

                                                  I used the gradle plugins with success in the past, but yeah, writing spec files by hand is something else. I am surprised nobody has invented a more user-friendly DSL for that yet.

                                                  1. 1

                                                    A lot of the difficulties when doing Debian packages come from policy. For your own packages (not targeted to be uploaded into Debian), it’s far easier to build packages if you don’t follow the rules. I won’t pretend this is as easy as with fpm, but you get some bonuses from it (building in a clean chroot, automatic dependencies, service management like the other packages). I describe this in more detail here: https://vincent.bernat.im/en/blog/2016-pragmatic-debian-packaging

                                                  2. 2

                                                    It sucks that you come away from this thinking that all of these alternatives don’t provide benefits.

                                                    I know there’s a huge part of the community that just wants things to work. You don’t write npm for fun; you end up writing stuff like it because you can’t get current tools to work with your workflow.

                                                    I totally agree that there’s a lot of messiness in this newer stuff that people in older structures handle well. So… we can share knowledge and actually make tools on both ends of the spectrum better! Nothing about Kubernetes requires a curl’d installer, after all.

                                                  1. 2

                                                    Maybe a dumb question, but in semver what is the point of the third digit? A change is either backwards compatible, or it is not. To me that means only the first two digits do anything useful? What am I missing?

                                                    It seems like the openbsd libc is versioned as major.minor for the same reason.

                                                    1. 9

                                                      Minor version is backwards compatible. Patch level is both forwards and backwards compatible.

                                                      1. 2

                                                        Thanks! I somehow didn’t know this for years until I wrote a blog post airing my ignorance.

                                                      2. 1

                                                      “PATCH version when you make backwards-compatible bug fixes”

                                                      See: https://semver.org

                                                        1. 1

                                                          I still don’t understand what the purpose of the PATCH version is. If minor versions are backwards compatible, what is the point of adding a third version number?

                                                          1. 3

                                                            They want a difference between new functionality (that doesn’t break anything) and a bug fix.

                                                          I.e., if it were only X.Y, then when you add a new function but don’t break anything, do you change Y or do you change X? If you change X, then you are saying “I broke stuff”, so clearly changing X for a new feature is a bad idea. So you change Y - but then, looking at just a Y change, you don’t know if it was a bug fix or some new function/feature they added. You have to go read the changelog/release notes, etc. to find out.

                                                          With the 3 levels, you know whether a new feature was added or it was only a bug fix.

                                                          Strictly speaking, just X.Y would be enough. But the semver people wanted that differentiation: to be able to tell, by looking only at the version #, whether a new feature was added or not.
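
                                                          As a concrete example: starting from 1.4.2, a bug fix alone gives 1.4.3, a new backwards-compatible feature gives 1.5.0, and a breaking change gives 2.0.0.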

                                                            1. 1

                                                              To show that there was any change at all.

                                                            Imagine you don’t use sha1s or git; this would show that there was a new release.

                                                              1. 1

                                                              But why can’t you just increment the minor version in that case? A bug fix is also backwards compatible.

                                                                1. 5

                                                                  Imagine you have authored a library, and have released two versions of it, 1.2.0 and 1.3.0. You find out there’s a security vulnerability. What do you do?

                                                                  You could release 1.4.0 to fix it. But, maybe you haven’t finished what you planned to be in 1.4.0 yet. Maybe that’s acceptable, maybe not.

                                                                  Some users using 1.2.0 may want the security fix, but also do not want to upgrade to 1.3.0 yet for various reasons. Maybe they only upgrade so often. Maybe they have another library that requires 1.2.0 explicitly, through poor constraints or for some other reason.

                                                                  In this scenario, releasing a 1.2.1 and a 1.3.1, containing the fixes for each release, is an option.
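
                                                                As a sketch of the release lines in that scenario:

                                                                  1.2.0 -> 1.2.1   (backported fix)
                                                                  1.3.0 -> 1.3.1   (backported fix)
                                                                  1.4.0            (unreleased; includes the fix when it ships)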

                                                                  1. 2

                                                                  It sort of makes sense, but if minor versions were truly backwards compatible I can’t see a reason why you would ever want to hold back. Minor and patch seem to me to be the same concept; just one has a higher risk level.

                                                                    1. 4

                                                                    Perhaps a better definition is: library minor version changes may expose functionality to end users that you, as an application author, did not intend.

                                                                      1. 2

                                                                        I think it’s exactly a risk management decision. More change means more risk, even if it was intended to be benign.

                                                                        1. 2

                                                                          Without the patch version it makes it much harder to plan future versions and the features included in those versions. For example, if I define a milestone saying that 1.4.0 will have new feature X, but I have to put a bug fix release out for 1.3.0, it makes more sense that the bug fix is 1.3.1 rather than 1.4.0 so I can continue to refer to the planned version as 1.4.0 and don’t have to change everything which refers to that version.

                                                                2. 1

                                                                  I remember seeing a talk by Rich Hickey where he criticized the use of semantic versioning as fundamentally flawed. I don’t remember his exact arguments, but have sem ver proponents grappled effectively with them? Should the Go team be wary of adopting sem ver? Have they considered alternatives?

                                                                  1. 3

                                                                  I didn’t watch the talk yet, but my understanding of his argument was “never break backwards compatibility.” This is basically the same as new major versions, except it requires you to give a new name to each new major version. I don’t inherently disagree, but it doesn’t really seem like some grand deathblow to the idea of semver to me.

                                                                    1. 2

                                                                    IME, semver itself is fundamentally flawed because humans are the deciders of the new version number, and we are bad at it. I don’t know how many times I’ve gotten into a discussion with someone where they didn’t want to increase the major because they thought high majors looked bad. Maybe at some point it can be automated, but I’ve had plenty of minor version updates that were not backwards compatible, and the same for patch versions.

                                                                    Or, what’s happened to me in Rust multiple times: the minor version of a package is incremented but the new feature depends on a newer version of the compiler, so it is backwards-breaking in terms of compiling.

                                                                    I like the idea of a versioning scheme that lets you tell the chronology of versions, but I’ve found semver to work right up until it doesn’t, and then it’s always a pain. I advocate pinning all deps in a project.

                                                                      1. 2

                                                                        It’s impossible for computers to automate. For one, semver doesn’t define what “breaking” means. For two, the only way that a computer could fully understand if something is breaking or not would be to encode all behavior in the type system. Most languages aren’t equipped to do that.

                                                                      Elm has tools to do at least a minimal kind of check here. Rust has one too, though it’s not as widely used.

                                                                      I advocate pinning all deps in a project.

                                                                        That’s what lockfiles give you, without the downsides of doing it manually.

                                                              1. 8

                                                                Regarding Table of Contents generation, what do you think about hxtoc(1), which is part of the HTML/XML utilities by the W3C?

                                                                Also, I’ve had a similar experience with a joyful discovery of CommonMark recently, but instead of using the parser you mention, I’ve taken up lowdown as my client of choice. I guess this is something it has in common with most C implementations of markdown, but especially when compared to pandoc, it was fast: it took me a fraction of a second to generate a website, instead of a dozen seconds or more. So I guess I wanted to see what other clients you’ve looked into, for example discount, another popular implementation.
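
                                                                For reference, both tools are plain filters; usage is roughly the following (flags from memory, so check the man pages):

                                                                  lowdown -s post.md > post.html        # Markdown to standalone HTML
                                                                  hxtoc -l 2 -h 3 post.html > out.html  # build a ToC from h2-h3 headings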

                                                                1. 5

                                                                  Hm, I’ve actually never heard of hxtoc, lowdown, or discount!

                                                                  I haven’t been using Markdown regularly for very long. I started using it more when I started the Oil blog in late 2016. Before that, I wrote a few longer documents in plain HTML, and some in Markdown.

                                                                  I had heard of pandoc, but never used it. I guess cmark was a natural fit for me because I was using markdown.pl in a bunch of shell scripts. So cmark pretty much drops right in. I know a lot of people use framework-ish static site generators, which include markdown. But I really only need markdown, since all the other functionality on my site is written with custom scripts.
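
                                                                  The swap in such a script is a one-line change; a sketch:

                                                                    markdown.pl post.md > post.html   # before
                                                                    cmark post.md > post.html         # after: cmark emits HTML by default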

                                                                  So I didn’t really do much research! I just felt that markdown.pl was “old and smelly” and I didn’t want to be a hypocrite :-) A pile of shell scripts is pretty unhygienic and potentially buggy, but that is what I aim to fix with Oil :)

                                                                  That said, a lot of the tools you mention do look like they follow the Unix philosophy, which is nice. I would like to hear more about those types of tools, so feel free to post them to lobste.rs :) Maybe I don’t hear about them because I’m not a BSD user?

                                                                  1. 4

                                                                    I had heard of pandoc, but never used it.

                                                                    It’s a nice tool, and not only for working with Markdown, but tons of other formats too. But Markdown is kind of its focus… If you look at its manual, you’ll find that it can be very finely tuned to match one’s preferences (such as enabling or disabling raw HTML, syntax highlighting, math support, BibLaTeX citations, special list and table formats, etc.). It even has output options that make it resemble other implementations like PHP Markdown Extra, GitHub-Flavored Markdown, MultiMarkdown and also markdown.pl! Furthermore, it’s written by John MacFarlane, who is one of the guys behind CommonMark itself. In fact if you look at the cmark contributors, he seems to be the most active maintainer.

                                                                    I usually use pandoc to generate .epub files or to quickly generate a PDF document (version 2.0 supports multiple backends, besides LaTeX, such as troff/pdfroff and a few html2pdf engines). But as I’ve mentioned, it’s a bit slow, so I tend to not use it for simpler texts, like when I have to generate a static website.

                                                                    I know a lot of people use framework-ish static site generators, which include markdown.

                                                                    Yeah, personally I use zodiac, which uses AWK and a few shell script wrappers. You get to choose the converter, which takes some format in and pipes HTML out. It’s not ideal, but other than writing my own framework, it’s quite ok.

                                                                    Maybe I don’t hear about them because I’m not a BSD user?

                                                                    Nor am I, at least not most of the time. I learned about those HTML/XML utilities because someone mentioned them here on lobste.rs, and I was surprised to see how powerful they are, and yet how seemingly nobody knows about them. hxselect to query specific elements in a CSS-fashion, hxclean as an automatic HTML corrector, hxpipe/hxunpipe to convert (and reconvert) HTML/XML to a format that can be more easily parsed by AWK/perl scripts - certainly not useless or niche tools.
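
                                                                    A sketch of how they compose (again from memory; hxnormalize -x first makes the HTML well-formed so hxselect can parse it):

                                                                      curl -s https://example.com/ \
                                                                        | hxnormalize -x \
                                                                        | hxselect -s '\n' 'a'   # print every <a> element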

                                                                    But I do have to admit that a BSD user influenced me on adopting lowdown, and since it fits my use-case, I stick by it. Nevertheless, I might take a look at cmark, since it seems interesting.

                                                                  2. 2

                                                                    Unfortunately, it looks like lowdown is a fork of hoedown which is a fork of sundown which was originally based on the markdown.pl implementation (with some optional extensions), and is most likely not CommonMark compliant. Pandoc is nice because it can convert between different formats, but it also has quite a few inconsistencies.

                                                                    One of the biggest reasons I like CommonMark is because it aims to be an extremely solid, consistent standard that makes markdown more sane. It would be nice to see more websites move towards CommonMark, but that’s probably a long shot.

                                                                    Definitely check out babelmark if you get a chance; it lets you test different markdown inputs against a bunch of different parsers. There are a bunch of example divergences in the babelmark FAQ. The sheer variety of outputs for some simple inputs is precisely why CommonMark is useful as a standard.

                                                                    1. 3

                                                                      Lowdown isn’t CommonMark conformant, although it has some bits in place. The spec for CommonMark is huge.

                                                                      If you’re a C hacker, it’s easy to dig into the source to add conformance bit by bit. See the parser in document.c and look for LOWDOWN_COMMONMARK to see where bits are already in place. The original sundown/hoedown parser has been considerably simplified in lowdown, so it’s much easier to get involved. I’d be thrilled to have somebody contribute more there!

                                                                      In the immediate future, my biggest interest is in getting an LCS implementation into the lowdown-diff algorithm. Right now it’s pretty ad hoc.

                                                                      (Edit: I’m the author of lowdown.)

                                                                      1. 2

                                                                        One of the biggest reasons I like CommonMark is because it aims to be an extremely solid, consistent standard that makes markdown more sane. It would be nice to see more websites move towards CommonMark, but that’s probably a long shot.

                                                                      I guess I can agree with you when it comes to websites like Stack Overflow, GitHub and Lobsters having Markdown formatting for comments and other text inputs, but I really don’t see the priority when it comes to using a not-100%-CommonMark-compliant tool for your own static blog generator. I mean, it’s nice, no doubt, but as long as you don’t intentionally use ambiguous constructs and don’t over-format your texts to make them more complicated than they have to be, I guess that most markdown implementations are fine in this regard - speed, on the other hand, is a different question.

                                                                        1. 1

                                                                          Are you saying that CommonMark should be used for comments on websites, but not for your own blog?

                                                                          I would say the opposite. For short comments, the ambiguity in Markdown doesn’t seem to be a huge problem, and I am somewhat comfortable with just editing “until it works”. I don’t use very many constructs anyway – links, bold, bullet points, code, and block code are about it.

                                                                          But blogs are longer documents, and I think they have more lasting value than most Reddit comments. So although it probably wasn’t strictly necessary to switch to cmark, I like having my blog in a format with multiple implementations and a spec.

                                                                          1. 3

                                                                          At least in my opinion, it’s useful everywhere, but more so for comments, because it removes differences in implementations. Oftentimes the people using a static site generator are developers and can at least understand differences between implementations.

                                                                            That being said, I lost count of how many bugs at Bitbucket were filed against the markdown parser because the library used resolves differences by following what markdown.pl does. I still remember differences in bbcode parsing between different forums - moving to a better standard format like markdown has been a step in the right direction… I think CommonMark is the next step in the right direction.

                                                                            1. 1

                                                                            The point has already been brought up, but I just want to stress it again. You will probably have a feeling for how your markup parser works anyway, and you will write accordingly. If your parser is CommonMark compliant, that’s nice, but it really isn’t the crucial point.

                                                                            On the other hand, especially if one likes to write longer comments and uses a bit more than the basic markdown constructs on websites, having a standard to rely on does seem to me to offer an advantage, since you don’t necessarily know what parser is running in the background. And if you don’t really use markdown, it doesn’t harm you after all.

                                                                      1. 3

                                                                            The duotone themes for Atom do something similar to this. They aim to use the main colors for more important portions and muted colors for less important things, like builtins, etc.

                                                                            I’ve also experimented with this a little bit with https://github.com/belak/emacs-grayscale-theme. Unfortunately, theming in Emacs isn’t quite as flexible as in Atom, so it’s a bit limited.

                                                                        1. 1

                                                                          The workspaces idea is really interesting… it would be cool to see some sort of experimental interface built around it - use a single object store and somehow ensure that proper access controls are in place. I’ve played around with implementing a custom git-receive-pack and git-upload-pack using libgit2, but it’s a pretty large pain (or at least it was a year ago).

                                                                          One thing worth noting: at least for the storage portion, GitHub almost definitely does something similar to this. If you ever push a commit to a fork, you can access that commit through a URL in the main repo. My guess is that they specifically push the forking model so they can save on storage.

                                                                          As an example, you can take a look at these commits:

                                                                          1. 13

                                                                            Their main website (at least as far as I know) is still at https://geany.org/.

                                                                            It does seem odd to me that the .sexy domain is being used for any text editors.

                                                                            See also:

                                                                            1. 8

                                                                              Still cringey IMO

                                                                              1. 2

                                                                                    I always thought vim.sexy was more a parody/satire of that style of site by a fan of the editor than anything else.

                                                                              1. 3

                                                                                Bolt is a super useful, simple data store. I’ve written a small wrapper around it called nut to make it simple to store objects (by marshaling them to json, but it may make more sense to store them as gobs or protobufs in the future) rather than just bytes. It’s pretty experimental, but I’ve been using it in personal projects for a while.

                                                                                https://github.com/belak/nut/

                                                                                Also, bolt isn’t really being updated any more, but it’s still considered fairly stable. There’s a coreos fork called bbolt which aims to keep updating it.

                                                                                1. 1

                                                                                  It is also worth mentioning that https://github.com/asdine/storm exists, and does JSON, gob and protobuf encoding. It also gives you a KV string store that handles the copying of bytes.

                                                                                1. 3

                                                                                      But Zsh does have a visual mode! Don’t rebind v away from it; pick something else for edit-command-line: I use ‘^X^E’.

                                                                                      I’ve seen this advice to bind v to edit-command-line before, probably because oh-my-zsh does it. I can only guess that the existence of visual mode simply isn’t obvious because by default the region is highlighted in a manner that is indistinguishable from the cursor. My advice is to pick something more visible and set it in zle_highlight. Note that much of the zsh documentation talks about the “region”, which is emacs terminology.
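
                                                                                      Concretely, a sketch of that setup (the zle_highlight entry is what makes visual mode stand out):

                                                                                        autoload -Uz edit-command-line
                                                                                        zle -N edit-command-line
                                                                                        bindkey -M vicmd '^X^E' edit-command-line  # leave v alone
                                                                                        zle_highlight=(region:standout)            # visible region/visual mode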

                                                                                  1. 2

                                                                                    Does Zsh have a visual mode? If so it’s not on by default, or at least by default it’s not mapped to v in command mode. I also could not find any documentation on Zshell visual mode. Can you provide links to any documentation or articles on this? Closest thing I found was a Zshell plugin that implemented this behavior (https://github.com/b4b4r07/zsh-vimode-visual).

                                                                                    1. 2

                                                                                      Go to http://zsh.sourceforge.net/Doc/Release/Zsh-Line-Editor.html or type man zshzle and search for the word “visual”. There are several references. The feature was added three years ago. In general for vi-mode I would recommend using at least 5.0.8 and preferably 5.1 or newer as a lot of vi/vim related improvements were made around the time of those releases. To verify, run zsh -f and try bindkey -a v and you should find v is bound to visual-mode. There’s also visual-line-mode for V and a visual keymap.

                                                                                      1. 1

                                                                                        Wow! How did I miss that?! That’s really nice, and much faster than opening Vim. I will remove my custom mapping and update the blog post accordingly.

                                                                                      2. 1
                                                                                        1. 1

                                                                                          I actually decided against using that vi plugin for some other reasons, so at least in theory v should be mapped to the default command.

                                                                                    1. 1

                                                                                      We’ve got a few additional hacks in prezto you might be interested in: https://github.com/sorin-ionescu/prezto/blob/master/modules/editor/init.zsh

                                                                                      I don’t personally use it, but one of our regular contributors has submitted multiple improvements and if you have any other ideas, we’d love to hear them!

                                                                                      1. 1

                                                                                              Thanks! I’ll comb through it and see if I can find any other gems. I’m still pretty new to Vi mode, so this blog post pretty much sums up what I’ve learned so far.