1. 2

    The company I work at has semiregular talks for developers, and I’ve been meaning to write “How to Not Be Afraid of Large (and Small) Numbers” for a while. I want to explain back-of-the-envelope estimation “at the edges” and why it’s useful. “At the edges” means asking what might happen if you take some variable of a problem to its limit. A practical example for my team would be asking “how many servers could we ever possibly use?” and then seeing what it would cost to actually do that. Thinking along these lines reveals hidden bottlenecks and contours in the problem, and being comfortable doing these thought experiments lets you brainstorm and navigate scaling issues better.
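
    To make that concrete, here’s the kind of two-line arithmetic I have in mind. The numbers below are made-up assumptions for illustration, not figures from my team:

      # Back-of-the-envelope "at the edges" sketch; every number here is an assumption.
      servers = 10_000                 # hypothetical upper bound on servers we could ever use
      cost_per_server_month = 200      # assumed $/server/month, cloud-ish ballpark

      yearly_cost = servers * cost_per_server_month * 12
      print(f"~${yearly_cost:,} per year")  # ~$24,000,000 per year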

    1. 12

      Author here, happy to be thoroughly corrected on German or linguistics in general.

      1. 2

        Not a correction, but you may want to learn the reason why prepositions are so difficult: they aren’t Indo-European. Most of the nouns and verbs we use have some root in Indo-European, but the prepositions were mostly (entirely?) created after the great divisions, so there’s less reason for them to pair nicely with prepositions in other Indo-European languages.

        1. 2

          The claim that “prepositions aren’t Indo-European” is poorly defined, and incorrect in most reasonable more specific senses. Many prepositions in English, German, and other modern Indo-European languages are straightforwardly traceable to Proto-Indo-European roots. The English prepositions off and of and their German cognate ab, for instance, are reflexes of the reconstructed PIE root *apo, which also yields Greek απο and Latin ab (and then Spanish a). The common English preposition in, which is cognate with the similarly-pronounced German in and Latin in (and then Spanish en), is a reflex of a PIE root *en meaning, more or less, “in”.

          It’s true that not every single preposition in English or any other modern Indo-European language is traceable to a PIE root, and that some roots that yield prepositions in modern IE languages were not necessarily prepositions in PIE (if PIE even had a distinct syntactic category of prepositions), and that some prepositions in English or German are cognate with morphemes in other Indo-European languages that are not necessarily prepositions (German um for instance is cognate with the Latinate prefix ambi-, which is not a preposition in Latin). But I don’t think any of these facts are inconsistent with the claim that prepositions in modern Indo-European languages by and large are shared Indo-European vocabulary, traceable to the proto-language.

          1. 1

            I took a random set of prepositions just now (the German accusative prepositions durch für gegen ohne um, for no particular reason other than having the Kluge dictionary on a shelf in front of me) and looked them up. They are all traceable a little over a thousand years back; one has much older roots and another may have, but those much older roots don’t seem to be Indo-European prepositions.

            When you write “not every… IE root”, are you suggesting that most prepositions are traceable to an IE preposition?

          2. 1

            Wow! Super interesting, thank you for the info.

            1. 2

              I saw this really cool diagram once somewhere with prepositions in different languages, including Finnish which doesn’t have prepositions. The idea was, iirc, to demonstrate conceptualization.

              Couldn’t find it now, but this one on dativ/akkusativ is pretty neat too ;)

        1. 3

          I’ve heard Homebrew described by some as an awful package manager compared to, say, apt. Is this true, and if so, why?

          1. 11

            It’s fucking ridiculous how bad this thing is, and the issues around how it’s run are almost as bad as the technical ones.

            For years it was a source-only tool - it did all compilation locally. Then they caught up to 1998 and realised that the vast majority of people want binary distribution. So they added in pre-compiled binaries, but never bothered to adapt the dependency management system to take that into account.

            So for instance, if you had two packages that provide the same binary - e.g. mysql-server and percona-server (not sure if those are their exact names in Homebrew) - and then wanted to install, say, “percona-toolkit” as well, which has a source requirement of “something that provides mysql libs/client”, the actual dependency in the binary package would be whatever had been installed on the machine it was built on. This manifested itself in an issue where you couldn’t install both percona-server and percona-toolkit from binaries.

            When issues like this were raised - even by employees of the vendor (e.g. as in https://github.com/Homebrew/homebrew-core/issues/8717) - the official response was “not our problem buddy”.

            No fucks given, just keep hyping the archaic design to the cool kids.

            I haven’t even got into the issue of permissions (what could go wrong installing global tools with user permissions) or the ridiculous way package data is handled on end-user machines (git is good for some things, this is not one of them)

            If you get too vocal about the problems the tool has, someone (in my case, the founder of the project) will ask you to stop contacting them about the issues (in my case these were public tweets that referenced the Homebrew account).

            1. 4

              It’s good, easy to use, and has a big community with a well-maintained list of packages. It’s the main package manager for macOS. It’s been around for a long time in the macOS ecosystem and is much better and easier to use than the older solutions we had, such as MacPorts. A cool thing is that it has a distinction between command-line tools and libraries vs desktop applications (called casks).

              For example, you can install wget with brew install wget, but you’d install Firefox with brew cask install firefox.

              On Linux I would stick to the system’s default package manager, but maybe it’s worth giving Homebrew a try, I guess.

              1. 3

                A cool thing is that it has a distinction between command-line tools and libraries vs desktop applications (called casks)

                Why is that cool? It seems pretty pointless to me.

                1. 2

                  Yeah, the distinction between them at install time isn’t that cool, but the fact that it supports installing desktop apps is nice. No need for separate tooling like snap. And you get to know where things will be installed according to the command used: desktop apps are usually located in /Applications on macOS, and CLI tools are linked in /usr/local/bin.

              2. 4

                Pro:

                • Has every package imaginable (on Mac)
                • Writing your own formulae is stupidly easy

                Con:

                • You can only get the latest version of packages due to how the software repo works.
                • It’s slower than other package managers

                Meh:

                • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).
                • Might be too easy to add formulae. Everyone’s small projects are in homebrew.
                • The entire system is built on Ruby and Git, so it inherits any problems from them (esp Git).
                1. 1

                  Someone told me that it doesn’t do dependency tracking, does that tie in with:

                  Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                  Also, I’m not very knowledgeable on package managers, but not being able to get older versions of a package and basing everything on Git seems kind of a questionable choice to me. Also, I don’t like Ruby, but that’s a personal matter. Any reason they chose this?

                  1. 1
                    • You can only get the latest version of packages due to how the software repo works.
                    • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                    This is very similar to how Arch Linux’s pacman behaves. Personally, I would put both of these under the “pro” header.

                  2. 4

                    The author of Homebrew has repeatedly said this himself (e.g. in this Quora answer). He usually says the dependency resolution in Homebrew is substantially less sophisticated than apt’s.

                    Homebrew became successful because it didn’t try to be a Linux package manager. Instead it generally tries to build on top of MacOS rather than build a parallel ecosystem. The MacOS base distribution is pretty big, so its dependency resolution doesn’t need to be that sophisticated. On my system I have 78 Homebrew packages; of those, 43 have no dependencies and 13 have just one.

                    Homebrew Cask also supports MacOS native Application / installer formats like .app, .pkg, and .dmg, rather than insisting on repackaging them. It then extends normal workflows by adding tooling around those formats.

                    So, yes, Homebrew isn’t as good at package management as apt, because it didn’t originally try to solve all the same problems as apt. It’s more of a developer app store than a full system package manager.

                    Linuxbrew still doesn’t try to solve the same problems. It focuses on the latest versions of packages and on home-dir installations. It doesn’t try to package an entire operating system, just individual programs. I doubt you could build a Linux distribution around the Linuxbrew packages, because it doesn’t concern itself with bootstrapping an operating system. Yes, it only depends on glibc and gcc on Linux, but that doesn’t mean the packages in Linuxbrew are set up to work together like they are on an actual Linux distribution.

                    1. 2

                      I don’t want to rag on the Homebrew maintainers too much (it’s free software that’s important enough to me that it’s probably the second thing I install on a new Mac), but I do have one big UX complaint: every time I run a Homebrew command, I have no idea how long it will take. Even a simple brew update can take minutes because it’s syncing an entire git repo instead of updating just the list of packages.

                      brew install might take 30 seconds or it might take two hours. I have no intuition for how long it will take before I run it, and I’m afraid to ctrl-c in the middle of a run. I’ll do something like brew install mosh and suddenly I’m compiling GNU coreutils. Huh?

                      While I wish they’d fix this variance head-on, at minimum I’d appreciate it if it did something like apt and warned you when you’re about to do a major operation. Admittedly apt only does this with disk size, but Homebrew could store rough compile times somewhere and ask if I’d like to continue.

                      1. -3

                        I think it’s awful because it’s written in Ruby and uses Github as a CDN.

                        1. 0

                          This comment isn’t helpful. Please be more constructive.

                          1. 0

                            Who are you to judge? He wanted opinions, I gave mine.

                            The Ruby VM is awfully slow and using Github as a CDN is so bad it requires no elaboration.

                            1. 3

                              Saying it’s slow is much more helpful than what you said above.

                      1. 2

                        This is my first FOSDEM, so I don’t really know what I’m in for. Planning on hanging out in the Go and Rust rooms.

                        1. 2

                          Have a backup plan, those rooms will be super full all the time.

                          1. 1

                            It will be my first too

                          1. 1

                            Related, from 2008: https://lwn.net/Articles/299483/

                            1. 3

                              This is quite old (2003) – today emoji would definitely figure into the “absolute minimum”.

                              1. 5

                                I’ve read this article a few times through the years…

                                I think the most important thing Spolsky is trying to do is get programmers out of the ASCII mindset - one byte, one character. Once you’ve made sure your app can handle Unicode, emoji just comes along as a bonus.
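
                                A quick Python illustration of the gap between bytes, code points, and what a user sees as one character (the exact counts depend on which characters you pick, so treat this as a sketch):

                                  # "One byte, one character" stops being true once Unicode shows up.
                                  s = "héllo 👍"
                                  print(len(s))                      # 7 code points
                                  print(len(s.encode("utf-8")))      # 11 bytes in UTF-8
                                  print(len(s.encode("utf-16-le")))  # 16 bytes in UTF-16 (the emoji needs a surrogate pair)

                                  thumbs = "👍🏽"                      # thumbs-up plus a skin-tone modifier
                                  print(len(thumbs))                 # 2 code points, but it renders as a single "character"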

                                As an aside… the Unicode consortium has expressed some dismay about all the attention and money lavished on the “trivial” emoji space, but in the bigger picture I think the impression that “Unicode is great, it gives us emoji!” is a net positive for the organisation.

                                1. 4

                                  Agreed – if emoji can get Amerocentric¹ programmers to care at all about non-ASCII support it’s a win for everyone. It doesn’t solve things like right-to-left problems, but it goes a long way toward making software accessible. Unfortunately, proper Unicode still seems like a chore instead of a basic feature. Even Rust, which goes as far as discouraging you from iterating over “chars” in the standard library, admits that “[g]etting grapheme clusters from strings is complex, so this functionality is not provided by the standard library.”

                                  I look forward to a time when handling these things is the default and “iterating over chars” is difficult. Maybe it’s not possible given the varied features of human language and orthography, but I think there is still a long way we can go with the technology we currently have.

                                  ¹ I know “Amerocentric” technically applies to the entire continents of North and South America, but I couldn’t find a better word to capture the sense of “programmers who only consider en_US when designing and testing software.”

                                  1. 4

                                    I’ve recently been doing some work on Unicode stuff for some commandline tools I’ve been writing, and I found the Unicode specs to be fairly hard to read, and being spread out over multiple documents isn’t helping either. You also need some background knowledge about different writing systems of the world.

                                    None of it is insurmountably hard as such – k8s is probably more complex – but it takes some time to grok and quite some effort to get right. Perhaps we should treat Unicode as cryptography: “don’t implement it yourself when it can be avoided”. I could add RTL support, but without actual knowledge of how an Arabic-speaking person uses a computer I’ll probably make some stupid mistake; for example, as I understand it you write from right-to-left in Arabic, except for numbers, which are written left-to-right.

                                    I haven’t even gotten to vertical text yet. I have no idea how to deal with that (yet).

                                    I know “Amerocentric” technically applies to the entire continents of North and South America, but I couldn’t find a better word to capture the sense of “programmers who only consider en_US when designing and testing software.”

                                    Anglocentric? As it’s a problem that extends beyond just the United States (CA, UK, AU, NZ, many African countries). Many non-English programmers do a lot of their work in English and have similar biases. Especially in Europe, where most scripts are covered by extended ASCII/ISO-8859/cp1252.

                                    1. 4

                                      In Arabic, everything’s written right-to-left, even the numbers.

                                      When Arabic numerals were imported into Europe, they were physically written left-to-right, but to this day every school child does calculations from right-to-left (like addition, or multiplication) because that’s just how Arabic numerals work.

                                      The story continues with computers, too: many computers were designed in European-culture countries that were comfortable with numbers working in the opposite direction from everything else, and so they used big-endian byte ordering. Some smaller, cheaper computers couldn’t justify the cost of making the computer follow the designer’s conventions, so they went with the simpler, more straightforward implementation and came up with little-endian byte ordering, taking Arabic numerals back to their roots.
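
                                      If you want to see the two byte orders side by side, Python’s int.to_bytes makes the comparison easy (a toy example, nothing more):

                                        # The same 32-bit value laid out in both byte orders.
                                        n = 0x12345678
                                        print(n.to_bytes(4, "big").hex())     # 12345678 (most significant byte first)
                                        print(n.to_bytes(4, "little").hex())  # 78563412 (least significant byte first)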

                                    2. 2

                                      Yeah, crusty academics can witter on harmlessly about cuneiform and Linear B, but if tweens can’t send poop emojis to each other you can bet that bug request will be actioned.

                                    3. 1

                                      Once you’ve made sure your app can handle Unicode, emoji just comes along as a bonus.

                                      Unless you’re MySQL ;-)

                                  1. 5

                                    I’d be interested to see a side-by-side comparison of kitty and alacritty. In particular, I’ve been using alacritty at work for a while, and while it’s barebones at the moment, it’s exceptionally fast (which is probably the feature I care about most in a terminal emulator). That said, kitty looks like a fine emulator.

                                    1. 6

                                      Honest question: what need do you have for a fast terminal emulator?

                                      1. 7

                                        I have a minor obsession with input latency and scroll jank. It seems to creep in everywhere and is hard to stamp out (Sublime Text is a shining counterexample). I noticed some weird input latency when using Terminal.app (purely anecdotal), and haven’t seen the same thing since switching to alacritty. So that’s the need I have for a fast emulator: it enables a smooth input and output experience.

                                        1. 3

                                          I am sensitive to the same.

                                          This is what kept me on Sublime Text for years, despite open source alternatives (Atom, VS Code and friends). I gave them all at least a week, but in the end the minor latency hiccups were a major distraction. A friend with similar sensitivity has told me that VS Code has gotten better lately; I would give it another go if I weren’t transitioning to Emacs instead.

                                          I sometimes use the Gmail web client and, for some period of time, I would experience an odd buffering of my keystrokes and it would sometimes completely derail my train of thought. It’s the digital equivalent of a painful muscle spasm. Sometimes you ignore it and move on, but sometimes you stop and think “Did I do something wrong here? Is there something more generally broken, and I should fear or investigate it?”

                                          1. 1

                                            Web-based applications are particularly bad, because often they don’t just buffer, but completely reorder my keystrokes. So I can’t just keep typing and wait for the page to catch up; I have to stop, otherwise I’m going to have to do an edit anyway.

                                        2. 3

                                          I have to admit, I thought for certain this was going to be Yet Another JavaScript Terminal but it turns out it’s written in Python. Interesting.

                                          Anyway I would have a hard time believing it’s faster than xfce4-terminal, xterm, or rxvt. It’s been a long time since I last benchmarked terminal emulators, maybe I smell a weekend project coming on.

                                          1. 6

                                            kitty is written in about half C, half Python; Alacritty is written in Rust.

                                            There were some benchmarks done for the recent Alacritty release that added scrollback, which include kitty, urxvt, termite, and st. https://jwilm.io/blog/alacritty-lands-scrollback/#benchmarks

                                            1. 2

                                              I just did a few rough-and-ready benchmarks on my system. Compared to my daily driver (xfce4-terminal), kitty is a little under twice as fast, alacritty and rxvt are about three times as fast. If raw speed was my only concern, I would probably reach for rxvt-unicode since it’s a more mature project.

                                              Alacritty is too bare-bones for me but I could be sold on kitty if I took the time to make it work/behave like xfce4-terminal.

                                              1. 1

                                                I like xfce4-terminal, but it renders fonts completely wrong for me. It’s most noticeable when I run tmux and the solid lines are drawn with dashes. If I pick a font where the lines are solid, then certain letters look off. It’s a shame, because other vte-based terminals (e.g. gnome-terminal) tend to be much slower.

                                          2. 2

                                            For me it’s the simple stuff that gets annoying when it’s slow. Tailing high-volume logs. less-ing/cat-ing large files. Long scrollbacks. Makes a difference to my day by just not being slow.

                                            1. 2

                                              I don’t care that much about the speed it takes to cat a big file, but low latency is very nice and kitty is quite good at that. I cannot use libvte terminals anymore, they just seem so sluggish.

                                              1. 2

                                                For one thing, my workflow involves cutting and pasting large blocks of text. If the terminal emulator can’t keep up, blocks of text can come through out of order etc, which can be a bad time for everyone involved.

                                              2. 3

                                                I’m on macOS.

                                                I used alacritty for a while, then switched to kitty as I’d get these long page redraws when switching tmux windows—so kitty is at least better for me in that regard. Both have similar ease of configuration. I use tmux within both, so I don’t use kitty’s scrolling or tabs. The way I was using them, they were more or less the same.

                                                I’m going to try alacritty again to see if it’s improved. I’d honestly use the default Terminal app if I could easily provide custom shortcuts (I bind keys to switching tmux panes, etc).

                                                1. 4

                                                  I came back to Alacritty on MacOS just the other day, after last trying it maybe 6 months ago and finding it “not ready” in my head. It’s been significantly updated: there’s a DMG installer (and it’s in brew), it’s a lot more polished overall, and it works really well and really fast. No redraws on tmux switches. There’s a weird redraw artifact while resizing the main window, but it snaps back as soon as you stop, and doesn’t bother me much. Using it as a full-time Terminal replacement right now, liking it so far, will see how it goes!

                                                  1. 1

                                                    Good to know! I’ve installed it via brew now and double-checked my old config. My font (not the default Menlo; I’m using a patched Roboto Mono) looks a bit too bold, so I just gotta figure out what’s wrong there.

                                                    1. 2

                                                      They’ve updated config files with additional info about aliasing and rendering fonts on Mac. So take a look at that if you are using your old config. It’s not a bad idea to start from scratch.

                                                      1. 1

                                                        Thanks for the tip! I did start from scratch and moved over changes bit by bit, but I’ll have to check the new macOS specific lines.

                                                  2. 3

                                                    Cool, thanks for your input! I also use tmux, and I haven’t seen anything like what you described (I also don’t really use tmux panes, only tabs). I know there has been a longstanding vim + tmux + osx bug as well, but I haven’t used vim proper in a while.

                                                    1. 2

                                                      I think that’s my exact problem (turns out I’m even subscribed to the issue haha). I use neovim so I think it is/was applicable to both

                                                  3. 1

                                                    Do any of those really measure up when benchmarked?

                                                    I remember doing some writing to stdout, and alacritty turned out to be slower than, say, gnome-terminal or whatever.

                                                    Might’ve been that there was a bug with my Intel graphics card though, don’t remember too well.

                                                  1. 3

                                                    Docker can fail to load the container, bundler can fail while installing some dependencies, and so can git fetch. All of those failures can be retried

                                                    If you are retrying an action connected to an external service (whether it’s something you run or something on the internet), please, please implement exponential backoff (here is a personal example). I will never forget the phrase “you are threatening to destabilize Git hosting at Google.”
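
                                                    The usual shape of it looks something like the sketch below: exponentially growing delays capped at some maximum, with full jitter so a crowd of retrying clients doesn’t stampede in lockstep. The names and numbers are mine, not from any particular library; tune the base, cap, and attempt count for whatever you’re calling.

                                                      import random
                                                      import time

                                                      def retry_with_backoff(operation, max_attempts=5, base=1.0, cap=60.0):
                                                          """Retry `operation`, sleeping exponentially longer (with jitter) between attempts."""
                                                          for attempt in range(max_attempts):
                                                              try:
                                                                  return operation()
                                                              except Exception:
                                                                  if attempt == max_attempts - 1:
                                                                      raise
                                                                  # Full jitter: sleep between 0 and min(cap, base * 2^attempt) seconds.
                                                                  time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

                                                    Wrapping a flaky git fetch or bundle install in something like that is usually enough to keep the retry loop itself from becoming the outage.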

                                                    1. 5

                                                      I honestly tried several times to get into it, but I don’t know, it just hasn’t clicked yet.

                                                      I find Go much nicer to work with when I need to do what Rust promises to be best at.

                                                      I won’t give up on it yet but that’s just my feeling today.

                                                      1. 12

                                                        I’ll likely take some heat for this but my mental model has been:

                                                        • Go is the Python of 2019
                                                        • Rust is the C++ of 2019

                                                        Go has found its niche in small-to-medium web services and CLI tools where “server-side scripting languages” were the previous favorite. Rust has found its niche in large (or performance-sensitive) interactive applications where using GC is untenable. These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.

                                                        1. 9

                                                          I agree with the mental model, although I usually think of Go as the Java for (current year).

                                                          1. 5

                                                            The tooling is light years behind though

                                                            1. 3

                                                              “go fmt” offers a standard way to format code, which removes noise from diffs and makes code other people have written more readable.

                                                              “go build” compiles code faster than javac.

                                                              The editor support is excellent.

                                                              In which way is the Java tooling better than Go, especially for development or deployment?

                                                              1. 8

                                                                How is the debugger these days?

                                                                When I was doing go a few years ago, the answer was “it doesn’t work”, whereas java had time-travel debugging.

                                                                1. 1

                                                                  Delve is a pretty great debugger. VSCode, Atom etc all have good debugging support for Go, through the use of Delve. Delve does not have time-travel, but it works.

                                                                  Packaging Java applications for Arch Linux is often a nightmare (with ant downloading dependencies in the build process), while with Go, packaging does not feel like an afterthought (but it does require setting the right environment variables, especially when using the module system that was introduced in Go 1.11).

                                                                  Go has some flaws, for example it’s a horrible language to write something like the equivalent of a 3D Vector class in Java or C++, due to the lack of operator overloading and multiple dispatch.

                                                                  If I had to point out two big advantages of Go compared to other languages, they would be the tooling (go fmt, godoc, go vet, go build -race (built-in race detector), go test etc.) and the fast compilation times.

                                                                  In my opinion, the tooling of Go is not “light years behind” Java, but ahead (with the exception of time-travel when debugging).

                                                                2. 2

                                                                  My three favourite features when developing Java:

                                                                  • The refactoring support and IDE experience for plain Java is outstanding. Method extraction (with duplicate detection), code inlining, and rearranging classes (extract classes, move methods, extract/collapse hierarchies) makes it very easy to re-structure code.
                                                                  • Java Flight Recorder is an outstandingly good tool for insight into performance and behaviour, at virtually no overhead.
                                                                  • Being able to change code live, drop frames, and restart the system in the debugger is a life-saver when debugging hard-to-reach issues. That the process being debugged can be essentially anywhere is a wonderful bonus.

                                                                  Sure, it would be nice if there were a single Java style, but pick one and use it, and IDEs generally reformat well. Also, the compile times can be somewhat long, but for plain Java they are usually OK.

                                                                  Note that I have never had to work in a Spring/Hibernate/… project with lots of XML-configurations, dependency injections, and annotation processing. The experience then might be very different.

                                                                  1. 1

                                                                    Just the other day I connected the debugger in my IDE to a process running in a datacenter across the ocean and I could step through everything, interactively explore variables etc. etc. There is nothing like it for golang.

                                                              2. 5

                                                                “I’ll likely take some heat for this but my mental model has been:”

                                                                All kinds of people say that. Especially on HN. So, not likely. :)

                                                                “These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.”

                                                                Yup. I would like to see more exploration of the middle ground in Rust. As in, the people who couldn’t get past the borrow checker just try to use reference counting or something. They get the other benefits of Rust with the performance characteristics of a low-latency GC. They still get borrow-checker benefits in other people’s code which does borrow-check. They can even study it to learn how it’s organized. Things click gradually over time while they still reap some benefits of the language.

                                                                This might not only be for new folks. Others who know Rust might occasionally do this for non-performance-sensitive code that’s not borrow-checking for some reason. They just skip it because the performance-critical part is an imported library that does borrow-check. They can decide to fight with the borrow checker later if it’s not worth their time within their constraints. Most people say they get used to avoiding the problems, though, so I don’t know if this scenario would play out in regular use of the language.

                                                                1. 6

                                                                  I agree: for 99% of people, the level of strictness in Rust is the wrong default. We need an ownership system where you can get by with a lot fewer errors for non-performance-sensitive code.

                                                                  The approach I am pursuing in my current language is to essentially default to “compile time reference counting”, i.e. it does implement a borrow checker, but where Rust would error out, it inserts a refcount increase. This is able to check pretty much all code which previously used runtime reference counting (but with 10x or so less runtime overhead), so it doesn’t need any lifetime annotations to work.

                                                                  Then, you can optionally annotate types or variables as “unique”, which will then selectively get you something more like Rust, with errors you have to work around. Doing this ensures that a) you don’t need space for a refcount in those objects, and b) you will not get unwanted refcount increase ops in your hot loop.

                                                                  1. 2

                                                                    Ha ha, just by reading this comment I was thinking ‘this guy sounds a bit like Wouter van Oortmerssen’, funny that it turns out to be true :-) Welcome to lobste.rs!

                                                                    Interesting comment, I assume you are exploring this idea in the Lobster programming language? I would love to hear more about it.

                                                                    1. 2

                                                                      Wow, I’m that predictable eh? :P

                                                                      Yup this is in Lobster (how appropriate on this site :)

                                                                      I am actually implementing this as we speak. Finished the analysis phase, now working on the runtime part. The core algorithm is pretty simple; the hard part is getting all the details right (each language feature, and each builtin function, has to correctly declare to its children and its parent whether it is borrowing or owning the values involved, and then keep those promises at runtime). But I’m getting there, should have something to show for it before too long. I should definitely do a write-up on the algorithm when I finish.

                                                                      If it all works, the value should be that you can get most of the benefit of Rust while programmers mostly don’t need to understand the details.

                                                                      Meanwhile, happy to answer any more specific questions :)

                                                                2. 4

                                                                  I don’t understand, and have never understood, the comparisons between Go and Python. Ditto for former Pythonistas who are now Gophers. There is no equivalent of itertools in Go, and there can’t be due to the lack of generics. Are all these ex-Python programmers writing for loops for everything? If so, why???
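
                                                                  For a concrete sense of what gets lost, here’s the kind of throwaway itertools one-liner that turns into a hand-written loop without generics (an illustrative example of my own, nothing from the thread):

                                                                    from itertools import count, islice

                                                                    # First five squares of even numbers, as a lazy pipeline.
                                                                    squares = list(islice((n * n for n in count() if n % 2 == 0), 5))
                                                                    print(squares)  # [0, 4, 16, 36, 64]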

                                                                  1. 1

                                                                    Not Go but Julia is more likely the Python of 2019.

                                                                1. 8

                                                                   Due to an unintended effect of interacting with the software, the text of a story I posted (that should have been a comment here) got wiped. @pushcx was kind enough to resurrect the markdown so I could repost it here. This should make @friendlysock’s reply make more sense.

                                                                  ===

                                                                   I’ve seen a sudden uptick in sentiment against new users, as if another Eternal September looms on the horizon. The usual approach to pushing September away is to just educate new users and guide them toward being productive Crustaceans, but a lot of the discussion lately is downright negative (to the point of a proposal to ban new users altogether).

                                                                  I don’t get paid to be on here. I come here because it’s interesting and fun, and I imagine that’s the kind of vibe we want to encourage on Lobsters. People are attracted to our stories, discussions, and general emphasis on technical or artistic topics (instead of, say, business success). Let’s keep that spirit in mind when interacting with new people so we can show them the right way instead of scaring them off. If the only nudges you get are negative ones, you may expect that’s all you’re going to get. If @friendlysock can make a conscious decision to be friendlier, so can you.

                                                                   I’d like to clarify that by “negative” I mean negative reinforcement – “you’re posting wrong” instead of “you’re posting right,” or “this is the kind of content we don’t want” instead of “this is the kind of content we like.” There will always be a place for technical criticism, and I don’t want to say that we should always be upbeat happy people. I just think that we should always mention positive examples when demonstrating negative ones.

                                                                  For a long time I’ve supported a “new poster’s guide” to bolster the rather technical about page. We tried to do this a while ago but that effort unfortunately lost steam. If we don’t have anything written down, we rely on people trying to post what they think is right while navigating the gauntlet of negative feedback until they “get it.” This is an extremely discouraging process. It’s also not really interesting or fun, both for the new posters and the people who have been here a while. If it’s not interesting or fun, why are we even here?

                                                                  I’ll leave you with a _why quote that really guides my thinking on this: “when you don’t create things, you become defined by your tastes rather than ability. your tastes only narrow & exclude people. so create.” Let’s create an encouraging environment for new users, and find ways to guide them toward sharing some really awesome stuff. Thanks.

                                                                  1. 6

                                                                    No. If we think new members are a problem (I don’t), then we should require more referrals (two or more people need to invite you) or just charge buxx. A blanket closing off of the site is how you kill it.

                                                                    1. 7

                                                                      There are a lot of words on this page, so I’ll try to keep it short (and recommend you do as well!):

                                                                      • I agree that the general tone of the entire discussion was not great (although I think that top-level post was pretty respectably written)
                                                                      • Viewing the site as “technology without the humanity, unless that humanity is related to building more technology” is just denying reality. There’s a reason we’re not using ReiserFS anymore, and I think that particular reason is just as relevant to Lobsters as its metadata journaling.

                                                                      As someone who’s participated in more than a few “ethical slapfights” on this site, I think the major problem is runaway threads, not the content that starts the threads themselves. One of the many reasons I like Lobsters is its slower pace compared to other sites (only a few new stories come up every day), so maybe a response cooldown really is the answer.

                                                                      1. 2

                                                                        Using Alpine for Docker containers is great, and many language runtimes have an Alpine variant. As the author points out, Python has python:3.6-alpine. Java, Groovy, Node, Ruby, Erlang, Clojure and Swift also have -alpine variants. Be aware that Alpine uses musl instead of glibc, which theoretically might affect production applications, but I haven’t noticed any difference in the workloads I’ve been running.

                                                                        1. 4

                                                                           If you don’t need anything as complicated as a multi-stage build, Alpine’s apk tool has a virtual-package feature designed to make using temporary build tools in Docker easier:

                                                                          RUN apk --no-cache add --virtual build-dependencies python-dev build-base wget \
                                                                            && pip install -r requirements.txt \
                                                                            && python setup.py install \
                                                                            && apk del build-dependencies
                                                                          

                                                                          Docker layers are only created after an entire RUN command is run, meaning those packages never go into a layer.

                                                                          1. 4

                                                                            Sorry for your loss, and I hope someone can help you out.

                                                                            I like the idea of having a more community-oriented side to lobste.rs, maybe kept separate from the main page to keep the original spirit of the site unmodified.

                                                                            1. 1

                                                                              After reading this I looked around for an OSM app that would be a relatively decent replacement for Google Maps; it looks like OsmAnd Maps might do the trick. One thing that’s missing for me is public transit, but Citymapper does a good job of that. It’s just nice to flip between car/transit/bike/walk quickly to get a sense of how long each method will take.

                                                                              1. 1

                                                                                For me Maps.me hits the spot. Have you tried it?

                                                                                1. 1

                                                                                  Not yet! It was also high on the list of OSM map apps, but I’m a bit worried that it was acquired by a Russian internet company.

                                                                              1. 2

                                                                                This sounds like a description my dad gave me of using Fortran on a mainframe in the 70s.

                                                                                1. 3

                                                                                  Note: this appears to be completely separate from Kx Systems’ q.