I’ve been using Macs on the desktop for nearly a decade and switched to Linux a couple of months ago. The 2016 MacBook Pro finally drove me to try something different. Between macOS getting more bloated with each release, the defective keyboard, the terrible battery life, and the Touch Bar, I realized that at some point I stopped being the target demographic.
I switched to Manjaro and while there are a few rough edges as the article notes, overall there really isn’t that much difference in my opinion. I’m running Gnome and it does a decent enough job aping macOS. I went with Dell Precision 5520, and everything just worked out of the box. All the apps that I use are available or have equivalents, and I haven’t found myself missing anything so far. Meanwhile it’s really refreshing to be able to configure the system exactly the way I want.
Overall, I’d say that if you haven’t tried Linux in a while, then it’s definitely worth giving another shot even though YMMV.
I don’t know about Dell, but my 2016 MacBook Pro was hit pretty hard after the Spectre/Meltdown fix came out. I used to go 5 or 6 hours before I was down to 35-40%. Now I’m down to 20-25% after about 4 hours.
Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops. Quite the debacle.
In regards to the parent, I have actually been considering moving from an aged MacBook Pro 15” (last of the matte screen models – I have avoided all the bad keyboards so far), to a Mac /desktop/ (Mac Pro maybe). You can choose your own keyboard and screen, and still get good usability and high performance. Then moving to a Linux laptop for “on the road” type requirements. Being able to leave work “at my desk” might be nice too.
(note: I work remotely)
I honestly don’t understand the fetish for issuing people laptops, particularly for software development type jobs. The money is way better spent (IMHO) on a fast desktop and a great monitor/keyboard.
Might be the ability to work remotely. I’m with you, though, that laptops are a bizarre fetish, as is working from Anywhere You Want(!)
It’s an artifact of, among other things, the idea that you PURSUE YOUR PASSIONS and DO WHAT YOU LOVE*; I don’t want to “work anywhere” – I want to work from work, and leave that behind when I go home to my family. But hey, I’m an old, what do I know.
*: what you love must be writing web software for a venture funded startup in San Francisco
Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops.
I wouldn’t guess that. Apple’s ARM design was one of the few also affected by Meltdown. Using it for a laptop wouldn’t have helped.
Yeah I get 4-6 hours with the Dell, and I was literally getting about 2-3 hours on the Mac with the same usage patterns and apps running. I think the fact that you can be a lot more granular regarding what’s running on Linux really helps in that regard.
No problems! They’re very effective, and are just about the first package I install on a new setup.
I like the way of thinking these moments reveal.
Though, I prefer to rebase to bring changes from master into my feature branch.
- You can merge in both directions.
A rebase can force you to resolve many conflicts, each of which has a chance of introducing a bug. A merge can result in fewer conflict resolutions and be preferable for that reason.
True in general, but it depends on your commit workflow; I generate a clean history locally before I rebase, so there are rarely multiple conflicts for the same thing. Git rerere helps too.
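For anyone who hasn’t used rerere: once enabled, git records how you resolve each conflict and replays that resolution the next time the same conflict shows up, which makes repeated rebases much less painful. A sketch (branch names are just examples):

```shell
# Record and reuse conflict resolutions across rebases
git config --global rerere.enabled true

# Typical flow: rebase a feature branch onto the latest master
git fetch origin
git rebase origin/master
# If a previously seen conflict recurs, rerere re-applies your old resolution
```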
I have been doing remote work for 5 years and I think the “work room for work” and “don’t work in your pyjamas” rules are overrated. I am doing just fine typing this from my couch while waiting for a build to finish.
For my first two years working remotely I had a dedicated office in my house. I think that helped me to build the discipline and boundaries necessary.
6 years in, I can work effectively and with balance in about any situation.
Same here; I think the rules for “transitioning from office-based work to remote work” are very different from “effective remote work for someone who’s used to it”.
I found out that when my home office became my work office my new home office was the coffee shop after working hours.
I work from home about 2 days a week (at my last job it was 3 to 4). I often didn’t shower until the end of my work day and I’ve never been in a place large enough to have a separate work room.
I do run multiple X servers. Ctrl+Alt+F8 is my work X11 instance, and I have a different username for it. My git repos have my work/home laptops as each other’s remotes so I can push branches back and forth without touching origin. (I often squash some of those intermediate commits before creating a real origin pull request.)
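In case anyone wants to copy that setup: it’s just plain git over ssh, no special tooling. Hostnames and paths below are made up:

```shell
# On the home laptop, add the work laptop as a remote (example host/path)
git remote add work ssh://me@work-laptop.local/home/me/src/project

# Push a work-in-progress branch directly, without touching origin
git push work wip-feature

# On the other machine, fetch it and check it out
git fetch work
git checkout -b wip-feature work/wip-feature
```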
I often find my time at home is way more productive. Open work spaces suck, and even my fancy noise-cancelling headphones can’t drown out some of the chatter around me.
Lobste.rs didn’t warn me about duplicate URLs… oh, you mean another thread on this topic. Ah. Well, here is the official confirmation if anyone cares to read it.
I really hate browser notifications. I never click yes ever. It feels like preventing browsers from going down this hole is just yet another hack. The Spammers and the CAPTCHAers are fighting a continuous war, all because of the 2% of people who actually click on SPAM.
My firefox has that in the settings somewhere:
[X] Block new requests asking to allow notifications
This will prevent any websites not listed above from requesting permission to send notifications. Blocking notifications may break some website features.
help links here: https://support.mozilla.org/en-US/kb/push-notifications-firefox?as=u&utm_source=inproduct
Did anyone find the about:config setting for this, to put in one’s user.js? I am aware of dom.webnotifications.enabled, but I don’t want to disable it completely because there are three websites whose notifications I want.
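Not straight from the docs, but if I remember right the pref you want is permissions.default.desktop-notification rather than dom.webnotifications.enabled: setting it to 2 blocks new requests by default, while per-site “Allow” exceptions should still override it. Worth verifying against your Firefox version, but in user.js it would be:

```
// 0 = always ask (default), 1 = allow, 2 = block new requests
user_pref("permissions.default.desktop-notification", 2);
```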
There has always been one in Chrome and Safari, and since very recently there’s also one in Firefox. It’s the first thing I turn off whenever I configure a new browser. I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.
Sure, there are some web apps like Gmail, but even there I’d rather use a native app for this.
I can’t possibly think of anybody actually actively wanting notifications to be delivered to them.
Users of web-based chat software. I primarily use native apps for that, but occasionally I need to use a chat system that I don’t want to bother installing locally. And it’s nice to have a web backup for when the native app breaks. (I’m looking at you, HipChat for Windows.)
There is a default deny option in Chrome; it takes a little digging to find, though. But I agree that it’s crazy how widespread sites trying to use notifications are. There’s like 1 or 2 sites that I actually want them from, but it seems like every single news site and random blog wants to be able to send notifications. And they usually do it immediately upon loading the page, before you’ve even read the article, much less clicked something about wanting to be notified of future posts or something.
The only time I have clicked “yes” for notifications is for forums (Discourse only at this point) that offer notifications of replies and DMs. I don’t see a need for any other websites to need to notify me.
Terminal within vim now?
From the article:
The main new feature of Vim 8.1 is support for running a terminal in a Vim window. This builds on top of the asynchronous features added in Vim 8.0.
Pretty cool addition. :-)
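For anyone who wants to try it, the basics (assuming Vim 8.1 built with +terminal):

```
:terminal       " open a shell in a new horizontal split
:vert term      " same, but in a vertical split
CTRL-W N        " switch the terminal buffer to Normal mode (scroll, yank, etc.)
i               " go back to interacting with the shell
```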
I wonder if the new Vim terminal used any code from the NeoVim terminal. I know NeoVim was created in part because Bram rejected their patches for adding async and other features.
I have to say, I really don’t care to see this in a text editor. If anything it’d be nice to see vim modernize by trimming features rather than trying to compete with some everything-to-everybody upstart. We already had emacs for that role! I just hope 8.2 doesn’t come with a client library and a hard dependency on msgpack.
Edit: seems this was interpreted as being somewhat aggressive. To counterbalance that, I think it’s great NeoVim breathed new life into Vim, just saying that life shouldn’t be wasted trying to clone what’s already been nailed by another project.
Neovim isn’t an upstart.
You can claim that Vim doesn’t need asynchronous features, but the droves of people running like hell to more modern editors that have things like syntax aware completion would disagree.
Things either evolve or they die. IMO Vim has taken steps to ensure that people like you can continue to have your pristine unsullied classic Vim experience (timers are an optional feature) but that the rest of us who appreciate these changes can have them.
Just my $.02.
Things either evolve or they die.
Yeah, but adding features is only one way of evolving/improving. And a poor one imho, which results in an incoherent design. What dw is getting at is that one can improve by removing things, by finding ‘different foundations’ that enable more with less. One example of such a path to improvement is the vis editor.
Thanks, I can definitely appreciate that perspective. However speaking for myself I have always loved Vim. The thing that caused me to have a 5 year or so dalliance with emacs and then visual studio code is the fact that before timers, you really COULDN’T easily augment Vim to do syntax aware completion and the like, because of its lack of asynchronous features.
I know I am not alone in this - One of the big stated reasons for the Neovim fork to exist has been the simplification and streamlining of the platform, in part to enable the addition of asynchronous behavior to the platform.
So I very much agree with the idea that adding new features willy nilly is a questionable choice, THIS feature in particular was very sorely needed by a huge swath of the Vim user base.
It appears we were talking about two different things. I agree that async jobs are a useful feature. I thought the thread was about the Terminal feature, which is certainly ‘feature creep’ that violates VIM’s non-goals.
From VIM’s 7.4 :help design-not
VIM IS… NOT    *design-not*
- Vim is not a shell or an Operating System. You will not be able to run a shell inside Vim or use it to control a debugger. This should work the other way around: Use Vim as a component from a shell or in an IDE.
I think you’re right, and honestly I don’t see much point in the terminal myself, other than perhaps being able to apply things like macros to your terminal buffer without having to cut&paste into your editor…
Emacs is not as fast and streamlined as Neovim-QT, while, to my knowledge, not providing any features or plugins that don’t have an equivalent in the world of vim/nvim.
Be careful about saying things like this. The emacs ecosystem is V-A-S-T.
Has anyone written a bug tracking system in Vim yet? How about a MUD client? IRC client? Jabber client? Wordpress client, LiveJournal client? All of these things exist in elisp.
Why do we use paper in 2018? The children’s books from my childhood promised that stuff would be gone by the year 2000 :(
Anecdotally, I have heard this same sentiment from professional archivists as well. We’re pretty good at preserving paper over time.
If there was something I really wanted to survive past my lifetime, I would use a laser printer with acid-free paper, and have it laminated in plastic.
Digital is just not as quick and flexible as paper.
Think about it. You print something on paper, you want to highlight, just do it, you want to correct it, just write over it, you want to give someone your highlighted and corrected version, just photocopy/scan it.
You want to write something down, grab a pen, pencil, heck even something that’s pointy enough to make an indent on the paper, and just do it. No need to press a button to turn it on. No need to keep a battery around. etc.
Closest thing I’ve seen to paper using digital are these nice things called reMarkable, which are nowhere near affordable compared to paper (or a phone/laptop for that matter). Honestly I would consider buying one if they were $50 lol
Because not everyone does digital, not everyone does backups and the kids of today know more about how to take pics for Tinder than organize their data.
I’m just happy the need for paper has diminished. That’s plenty and we’ll never lose the need altogether anyway.
because industry has not provided a suitable alternative.
still waiting on an e-ink device which can run linux and has an sd card slot, replaceable battery, and usb port. an e-ink laptop or even just a monitor would be good too, but alas.
You can read, for instance, the Dead Sea Scrolls, or a copy of a Chinese text written on mulberry paper from 2,300 years ago. Does anybody really think that our descendants will be reading information off of Zip drives or Memory Sticks in 50 years, let alone 500?
Paper is an amazingly great technology and its uses have yet to be obsoleted by digital technologies.
Not just Zip drives and such. CD-R was supposed to be quite good, but I’ve lost my only copies of some nostalgic and personal data from only 20 years ago, because the discs simply degraded. Instead I do have drawings from when I was four.
Anyone know if the M-Disc is any good?
But even then, paper can’t really be replaced, though maybe digital copies could make decent backups if there is proper tech for it.
In 50 years, every piece of data our civilisation has ever recorded will be available from some sort of distributed cloud, with enough redundancy that nothing is ever lost. If something requires an ancient operating system to convert to a new format, every imaginable machine will be available as virtual machines. These things are almost reality today already, so why wouldn’t they be true in 50 years?
Unless of course we destroy our whole civilisation before that.
While paper use within (Western) offices has probably declined a lot, and there’s less demand for newspapers, there’s still a lot of printing going on – whether on advertising flyers, billboards, decals for vehicles, photographic prints on metal or glass… and paper is used as a substrate a lot.
Wood pulp and paper products is still one of the staple export industries of Sweden and Finland.
I have it on fairly good authority Nordic paper isn’t as good as bamboo paper, except that it’s here already.
Cardboard is the next big thing; it can’t be made from bamboo as well, and people order more and more online.
I believe you’re right; the little I’ve gleaned from my readings is that cardboard production (including the fancy kind used by electronics manufacturers in their packaging) is a big part of the industry here in Sweden now.
I love the world Perkeep (née Camlistore) is trying to create, but we’re still too far out, and I don’t see why businesses would want to adopt it.
agreed. What drove me nuts with this project is that I am a technically savvy user, and many of the services I evaluated are designed for typical consumers – yet the solution was still non-trivial. For many folks, when asked about these problems I just say “use Backblaze (the app), Dropbox and Google Photos.”
The problem with using vim to support such claims is that it doesn’t actually work: the HJKL keys were not researched in extreme depth and proven to be the best keys to use as arrow keys on QWERTY keyboards; they just happened to be the arrow keys on the terminal that the person who wrote vi was using.
Yeah, I don’t get why we keep preaching HJKL in vim. I use JKL; instead (down up left right respectively) and I don’t have to lift my fingers from home row and left / right are on the “weaker” fingers, because I don’t typically use them as often as up / down.
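If anyone wants to experiment with that layout, a minimal ~/.vimrc sketch (untested; mappings chosen to match the JKL; description above, and deliberately not touching insert mode):

```
" j/k already mean down/up; shift left/right one key to the right
noremap l h
noremap ; l
```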
I agree with the parent comment: HJKL was not researched, and might not be the absolute best ever. At this point it would be less useful for me to use something other than HJKL. Muscle memory is too ingrained.
And even if I took the time really learn another key combination there would be tons of programs that still assume HJKL.
The foundation might be wonky, but we are not going to tear down the house and rebuild it.
Typewriters were optimized for, well, typing.
Within significant technological constraints of metallurgy, plastics, and mechanics that now offer vastly different tradeoffs. Even the language is different: when’s the last time you saw a semicolon outside of code?
One of my goals in life is to be able to properly use a semicolon in the normal course of my writing; it’s not as hard as you would think.
The clauses aren’t independent. Actually that form is specifically used for dependent clauses. For example, “I use them all the time, more than before.” The base “I use them” applies to both parts: “I use them all the time” and “I use them more than before.” But “I use them much better than comma splices” doesn’t make any sense, so that’s not what’s happening here. Forty-Bot is omitting “they are”—typically handled with an emdash, or parentheses if the additional content has only minor significance.
For a semicolon to apply, “they are” must be included to create a second independent clause:
I use them all the time; they’re much better than comma splices.
Using a comma instead of an emdash is mildly incorrect, but widely accepted in conversational writing. Since Forty-Bot explicitly called out the comma, I only pointed out the comma would be more appropriate. Though an emdash would be most appropriate. Semicolons see little use because conversational writing favors such omissions.
Semicolons are also useful for: separating list elements, when they contain commas; showing off, often in language discussions :)
I have a Pok3r keyboard, and have configured it to use the Caps Lock key as a function modifier for HJKL so I can move around like I do with NeoVim. I enjoy it so much that I had trouble typing on a regular keyboard when I was away from my desk. So I used Karabiner Elements to do the same mapping, with Caps Lock acting as an FN key and HJKL as the arrow keys.
It is wonderful.
This is very convenient when working with code or text, thanks - except I set up IJKL to be the cursor keys.
The reason they chose HJKL is historical, but with good reason: you never take your fingers off the home row.
I get that, but I have 26 years of personal history with cursor keys in the shape of an inverted T. Besides, I only have to move one finger off the home row.
Actually, with HJKL you have to shift four fingers one key to the left from the touch typing position, whereas with IJKL it’s just one! Anyway, I’m being pointlessly pedantic here :)
I think the most important part of a resume is to not waste people’s time. They will scan the resume looking for things that are relevant to the open position, and if they find them, they will slow down to read more. To that end, longer text is best kept to the bottom of the resume, or at the end of a job listing.
I like how I have my resume (on my website under /resume ) because I believe it is easy to scan, yet has bolded keywords that will jump out at a viewer.
Also, from the article, these are really good points to be able to demonstrate; and I should improve my resume to show these off.
- Your ability to solve real, relevant projects with your skills.
- Your ability to work independently.
- Your ability to learn quickly.
- Your relevant domain knowledge, if any.
- Your ability to work with others.
The bullet points below your experiences are really nice, they show the reader the main aspects and give a quick overview/feel of what you worked on, without wasting time as you said. The blue tags you have are not really my thing, but I enjoy your resume very much.
Having looked over AMP and seeing all the different points it does not seem as bad as people make it out to be, at least from a technical point of view.
These are the two main points of contention.
- Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.
- When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.
The first does not bother me so much. It is a standardized way of limiting page-size growth and enforcing best practices for speed. It is a formal way to enforce getting a 90 on a webpage speed test. Google has used speed as a ranking factor for a while now.
The second point does raise some issues. Originally the address of the page was obscured when it was served from Google’s servers, and that, in my opinion, is not great. This has now been, somewhat, resolved. While it still connects the user and the publisher, Google is overstepping its middleman role.
I don’t like the idea of Google hosting cached versions of my content, but I can live with it.
Will someone expand on the AMP cache, and explain why it is a more horrible thing than I currently understand?
Re: the AMP cache, there might be other reasons, but I think it creates an unfair playing field. It optimizes for people who have teams to rewrite/add to their templating code to fit AMP when there very well could be better more relevant content that is just managed by 1 person or non-tech people who would need to outsource.
I see where that could be the case in some situations, but let me play devil’s advocate for a moment and say: AMP makes it easier for a single person to optimize a website. As a specific set of rules, it makes layouts easier to create, and the validation tools help guide one to a speedy website. Using the specific set of rules is more straightforward than having to know a ton of micro-optimizations and how they all need to fit together.
I just added AMP to my website. I based the AMP pages on my existing layout, and only had to change a few things. All together it took me about three days.
But I came here to share this:
TL;DR: We are making changes to how AMP works in platforms such as Google Search that will enable linked pages to appear under publishers’ URLs instead of the google.com/amp URL space while maintaining the performance and privacy benefits of AMP Cache serving.
https://amphtml.wordpress.com/2018/01/09/improving-urls-for-amp-pages/amp/
I gave it 5 full days
and
Focus on your single main.go file. Coming from higher level languages, it is very tempting to start thinking in “namespaces” and sub-directories but that is a fool’s errand.
I am loath to say anything unkind; however, this person does not begin to understand the concepts in Go well enough to decide whether it is a good tool for their project.
I used PHP a good deal in my previous gigs, and have found that Go works well enough as PHP would for my current projects.
I did a talk about this called ‘The Deep Synergy Between Testability And Good Design.’ The case I made was that difficulty in unit testing often indicates design problems. I listed about 10 cases where that appeared to be the case. The crux of my argument was: writing tests is writing a program to understand your code. If it’s hard to do that, it’s probably hard to understand the code also.
I like this paper because it shows some real empirical correlation. The places where it fails are very interesting, particularly the cases of long methods and complexity. I suspect that testing enables complexity because tests allow us to write code that is ‘correct’ but still not easy to understand at a glance, whereas something like parameter count for methods tends to go lower because it’s extra work to write tests for methods with more parameters.
This space where the ergonomics of practice ‘nudge’ design is very interesting.
I think I tried out the tests-first approach to unit testing probably 5 times until I finally understood it. Tests-first helped me to write better software, not only because the unit tests would have my back when refactoring, but because tests-first unit testing made me write software with a better structure.
Would you be willing to share your experiences? What mistakes did you make during the first four attempts? What made it finally click?
I think I had several misconceptions about unit tests. I was under the impression that unit tests must test at the implementation-detail level, whereas now I mostly work at a functional level. So in my first attempts I would think of a solution in my head, then write a test for that solution (one that contained assumptions about the implementation of the function), and then write the implementation, so in fact it wasn’t really tests-first. I would also write several tests at once in that style and then implement, instead of writing one test and making a minimal implementation that would just fulfill the test. I also tried to use mocks/fakes for tests a lot, and that made matters worse. Nowadays I use them very rarely. I still see to it that my test suite runs fast enough, though. I’m not involved much in writing client code for services, so the fact that I can work this way may be an aspect of the kind of programming I do.
Does that make sense?
It finally clicked when I got an introduction to TDD in a Kent Beck style red-green-refactor workflow. Now I am a TDD zealot :) Forever grateful for the colleague who showed me the good way.
Also https://www.youtube.com/watch?v=Xu5EhKVZdV8 taught me a lot and brought me on a better track to unit testing I think.
[Comment removed by author]
You’re saying that ST was great 4-5 years ago, but apart from the langserver, which one of your points didn’t apply back then as much as it does now? You say that “today there are better editors”, but surely vim is much older than 4-5 years and basically didn’t change.
[Comment removed by author]
The primary reason I stick with Sublime Text is that Atom and VSCode have unacceptably worse performance for very mundane editing tasks.
I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime.
I thought VSCode was supposed to be very quick. Haven’t experimented with it much myself, what mundane editing tasks make it grind to a halt? I am well aware Atom has performance issues.
Neither Atom nor VSCode grinds to a halt for me, but I can just tell the difference in how quickly text renders and how quickly input is handled.
I’m not usually one of those people who obsesses about app performance, but editors are an exception because I spend large chunks of my life using them.
I’ve tried to switch to both vim and Spacemacs (I’d love to use an open source editor), but it’s non-trivial to configure them to replicate functionality that I’ve become attached to in Sublime
This is the reason why I stay with vim: I’m unable to replicate vim functionality in other editors.
Yeah, fortunately NeoVintageous for Sublime does everything I need for vim-style movement and editing.
I think the really ground-breaking feature that ST introduced was multi-cursor editing. Now most editors have some version of that. Once you get used to it, it’s very convenient, and the cognitive overhead is low.
As for the mini-map, I suppose it’s a matter of taste, but I found it very helpful for scanning quickly through big files looking for structure. Visual pattern recognition is something human brains are ‘effortlessly’ good at, so why not put it to use? Of course, I was using bright syntax highlighting, which makes code patterns much more visible in miniature. Less benefit for the highlight-averse.
I’ve been using ST3 beta for a few years as my primary editor. I tried using Atom and (more recently) VS Code, but didn’t like them as much: the performance gap was quite noticeable at start-up and for oversized data files. The plug-in ecosystems might make the difference for some folks, but all I really used was git-gutter and some pretty standard linters. For spare-time fun projects I still enjoy Light Table, but it’s more of a novelty. I’m gradually moving away from the Mac and want a light-weight open-source editor that will run on any OS.
So now, as part of my effort to simplify and get better at unix tools, I’m using vis. I’m enjoying the climb up the learning curve, but I think that if I stick with it long enough, I’ll probably end up writing a mouse-mode plugin. And maybe git-gutter. Interactive structural regexps and multi-cursor editing seem like a winning combination, though.
You might enjoy exploring kakoune as well. http://kakoune.org | https://github.com/mawww/kakoune
I’ve never used Sublime Text, but I’ve used multiple-cursors in vis and Kakoune, and it beats the heck out of Vim’s macro feature, just because of the interactivity.
With Vim, I’d record a macro and bang on the “replay” button a bunch of times only to find that in three of seventeen cases it did the wrong thing and made a mess, so I’d have to undo and (blindly) try again, or go back and fix those three cases manually.
With multiple cursors, I can do the first few setup steps, then bang on the “cycle through cursors” button to check everything’s in sync. If there are any outliers, I can find them before I make changes and keep them in mind as I edit, instead of having my compiler (or whatever) spit out syntax errors afterward.
Also, multiple cursors are the most natural user interface for [url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url], and being able to slice-and-dice a CSV (or any non-recursive syntax) by defining regexes for fields and delimiters is incredibly powerful.
[url=http://doc.cat-v.org/bell_labs/structural_regexps/]structural regular expressions[/url]
This might be the first attempt at BBCode I’ve seen on Lobsters. Thanks for reminding me how much I hate it.
I agree with you. I use Vim, and was thinking about switching until I realized that a search and repeat (or a macro when it’s more complex) works just as well. Multiple cursors is a cute trick, but never seemed as useful as it first appeared.
I thought multiple cursors were awesome. Then I switched to Emacs, thanks to Spacemacs, which introduced me to iedit [0]. I think this is superior to multiple cursors. I am slowly learning Emacs through Spacemacs; I’m still far away from being any type of guru.
[0] https://github.com/syl20bnr/spacemacs/blob/master/doc/DOCUMENTATION.org#replacing-text-with-iedit
I’ve started using vim for work, and although I’ve become quite fast, I find myself missing ST’s multiple cursors.
I might try switching to a hackable editor like Yi. I’ve really enjoyed using xmonad recently for that reason.
I was toying with it yesterday after building from source. It’s cool in that it is a single binary and has sane defaults but the documentation is only fair. It’s not sufficiently compelling for me to be annoyed.
Caddy has a couple of nice features for when you want to run a simple website. It auto sets up, and renews, Let’s Encrypt certs, and has a simple config format for simple websites. It’s nice when you want to throw something together quickly.
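For reference, a sketch of what that simple config looks like. This is a 2018-era (v1) Caddyfile; the domain and paths are placeholders, and for a real domain the HTTPS certificate is obtained and renewed automatically:

```
example.com {
    root /var/www/example
    gzip
    log /var/log/caddy/access.log
}
```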
After this license change I will probably move back to nginx. It has better certbot integration for auto-renew now than when I last set up my server.
I was excited when I saw it several months ago, because the not-quite-zero-config web server in my dev directory trick is cool, but I already know my way around apache, nginx, etc. for anything more persistent. The auto TLS feature is great, I guess, if you don’t know or don’t want to know about the workings of it. I did figure out how to get it to use certbot-refreshed certs without it using .well-known or having dynamic DNS updates; it’s trivial to specify the pems, but it’s not well represented in the docs.
Weird. I just copy/pasted from the URL. The Lobsters preview did not catch it as a duplicate. @jcs maybe a bug.
I think part of it depends on what kind of work one wants to do. Go is really strong in network-connected server spaces. Rust is a good fit for low-level systems programming. C++ is widely used and has a lot of legacy codebases around. Don’t pick the tool; pick the industry/area and see what tools they are using.