1. [Comment removed by author]

    1. [Comment removed by author]

      1. 2

        Who are you talking to? I see nothing to indicate that the submitter of this post is the author of the tool. Go file a github issue, or better yet a pull request.

Sorry, I hadn’t checked whether the OP is the author; I’ll create an issue on GitHub.

    1. 12

      I don’t like tools that make using awk or cut harder.

The output could be improved by dropping the parentheses around the byte count, or by adding a flag to disable them.

      1. 5

        Tools should have optional JSON output so I can use jq queries instead of awk or cut :P
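
        A sketch of the difference (the tool output and field names here are made up, and python3 stands in for jq so the snippet is self-contained):

```shell
# Hypothetical text output: awk must strip the decoration around the count.
echo 'report.txt (1024 bytes)' | awk '{gsub(/[()]/, "", $2); print $2}'
# Hypothetical JSON output: the field is addressable by name, e.g. `jq .size`
# (python3 used here as a stand-in for jq):
echo '{"name": "report.txt", "size": 1024}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["size"])'
```

        Both commands print 1024, but the JSON query survives changes to the text layout.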

        1. 4

          I really like jq too :)

          1. 2

            https://github.com/Juniper/libxo

            This is integrated in most FreeBSD utilities.

            1. 3

We should all switch to PowerShell, where values have types and structure, instead of switching to a different text format.

          1. 7

Clickbaity title; ‘Stop Building Single Page Apps’ would be simpler and more informative.

            1. 1

And the dude needs to dial the font size down a LOT. I had to view it at 70% of the original size just to make it readable.

              1. 4

                Then others have to increase the font size again. For me the font size is pretty reasonable (27” WQHD). I often use Firefox’s reader mode when a page is hard to read, especially for forums and the like.

                EDIT: typos, formatting

                1. 1

                  80% for me, but yeah kinda painfully big.

              1. 1

                @hwayne I just discovered your 2017 talk at StrangeLoop about TLA+ and really enjoyed it! The talk sparked my interest so much that I decided to buy your book Practical TLA+ right away.

The examples you presented in the talk were mostly about verifying models before their implementation, which is without question a proper use case, but would it be practical to specify, e.g., methods in legacy software that don’t have tests, in order to refactor them safely?

                Update: added a question

                1. 2

Glad you enjoyed! I know a lot of people have successfully used it for refactoring legacy systems; I don’t know of anyone who’s used it at the method level, which might be a little too low-level, but it’s worth a shot IMO.

                1. 5

                  First of all - congratulations on the release! This looks cool and I’ll definitely try it out.

                  So to ask an audio developer anything: How did you (and how can I) get into DSP/audio programming? I’m thinking mostly resources to learn both the concepts and math of DSP, as well as the tricks of the trade in writing fast DSP code. It seems like if you want to learn ML, or compilers, or OS-design, etc, there are piles of good books, tutorials and videos available – but I’m having trouble finding good resources to learn audio stuff. Do you have any tips?

                  1. 11

I had my introduction to signal processing in a course at university. I can at least recommend some books for you:

PS: I think lobste.rs needs a dsp tag.

                    Edit: typos.

                    1. 1

There are at least two tags we need that would cover lots of articles people might filter out if we get too specific about application area, or about arcane internals that users of black-box tools don’t need to know. Relevant here is “parallel”: techniques for parallel programming. It’s already a huge field that HPC draws on, and it can cover DSP, SIMD, multicore, NUMA, ways of stringing hardware together, parallel languages, parallelizing protocols, macros/libraries that parallelize, and so on. I plan to ask the community about the other one after I collect relevant data; I’m mentioning this one here since you already brought it up.

                    2. 4

                      Hey, thanks! Do feel free to reach out and let me know what you think.

                      With regards to DSP literature – klingtnet has provided some great resources already, so I’ll just talk a little about my path. My background has always just been in development, and my math has always been weak. Hence, the best resources for me were studying other people’s code (for which github is a particularly great resource) and figuring out enough math to implement research papers in code.

Audio DSP has this weird thing going on still where companies in the space are generally incredibly guarded about their algorithms and approaches, but there are a few places where they’ll talk a little more openly. For me, those have been the music-dsp mailing list and the KVR audio DSP forum. The KVR forum in particular has some deep knowledge corralled away; I always search through there when I start implementing something to see how others have done it.

And, one final little tidbit about DSP: in real time, determinism is key. An algorithm that is brutally fast but occasionally very slow can be less useful than a slower one with more consistent performance. Always assume you’re going to hit the pessimal case right when it’s most damaging, and in this industry those moments are when a user is playing to a crowd of tens of thousands.

                      That being said, I’d encourage just jumping in! Having a good ear and taste in sound will get you further than a perfect math background will.

                      1. 4

                        https://jackschaedler.github.io/circles-sines-signals/index.html is a really well done interactive intro to the basics (note that the top part is the table of contents, you’ll have to click there to navigate).

                        1. 1

                          Thanks a lot for this, I just finished and felt like I finally got some basic things that eluded me in the past. Good intro!

                      1. 3

                        Opus 1.3 includes a brand new speech/music detector. It is based on a relatively new type of recurrent neuron: the Gated Recurrent Unit (GRU). Unlike simple feedforward units, the GRU has a memory. It not only learns how to use its input and memory at each time, but it can also learn how and when to update its memory. That makes it able to remember information for a long period of time but also discard some of that information when appropriate.

                        The quality improvement between Opus 1.0 and 1.3 for a 9kbps bitrate speech sample is impressive.

                        1. 3

I had these exact same headphones as a warranty replacement for the previous pair, which failed after 2 years. They refused to replace them this time. A $14 pair of AUKEY headphones is nearly as good to me, although I’ll admit years of being an audio engineer have probably affected my hearing somewhat; I still appear to have quite good ears according to hearing tests. I have some nice studio headphones for when I really need to hear clearly, and it turns out my use case for earbuds overrides the need for superior fidelity. Are the $14 buds as nice? Of course not, but sometimes good enough is good enough.

                          Jaybird will never get another cent from me, and their parent company Logitech is now worthy of my scrutiny.

                          1. 2

                            $14 bluetooth buds?

                            1. 5

There is no correlation between headphone frequency response and retail price; the consumer and especially the audiophile HiFi market is full of marketing voodoo. The main difference between cheap and expensive headphones is the material the case is made of, but the built-in drivers are usually pretty cheap, and the construction of good headphones is no rocket science, even though the audio industry wants you to think so. I also own a cheap pair of Bluetooth in-ear headphones for commuting that cost me 20€, are pretty reliable, and sound pretty okay. I once forgot them in a pocket of my jeans and they even survived the washing machine. Another anecdote on the relation between price and reproduction quality: I was looking for headphones for my home recording studio this year and tested different models ranging from the Samson SR850 for 27€ to the Beyerdynamic DT-880 for around 200€. In the end I went with the Samsons because they sound fantastic and I can live with an imperfect case finish; heck, you can even get a pair of them for 39€.

                              1. 1

I don’t disagree about marketing voodoo in the HiFi space; there is astonishingly good cheap gear, the KSC75s possibly being the most striking example. That said, adding a mic and BT to them will cost you about $14 by itself ($11 BT chip, $3 mic, straight from China), with a so-so BT chip that doesn’t license the high-quality audio codecs and will randomly fail to pair.

The SR850s are exceptional, like the KSC75s, the Zero Audio Tenores, and a handful of other great drivers: so-so build quality, but no core build defects.

                                EDIT: Update from the OP, it was $27, which makes A LOT more sense to me.

                                1. 1

I’ve been exactly there, and I ended up with the BD DT-770s, which sound great and are comfortable to wear for extended periods without clamping my head or causing inner-ear pain. Were I tracking a kit I’d probably use the Sennheiser HD-280 Pros due to their superior bleed isolation, but man, those things kill my ears after a couple of hours. What that tells me is that inside, the tech provides modest differences; it’s all about comfort and durability.

                                  At the end of the day, much of the music most people consume is rammed through lossy compression and mixed to maximize volume, then rammed through a cheap DAC - so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

                                  1. 4

                                    At the end of the day, much of the music most people consume is rammed through lossy compression

                                    Most modern static compression is well beyond good enough even for high end gear. Note: static compression, not on the fly compression like BT does.

                                    and mixed to maximize volume

The loudness wars left a lot of damaged music, but they are all but over at this point. Everyone from indie artists to professional mastering engineers has stopped as a matter of course, and it is now the exception. Mick Guzauski, Bob Ludwig, and Ian Shepherd really pushed against it from the mid-2000s on, changing the industry. iTunes Radio cemented it by automatically turning down overly loud music, meaning that if the copy they get from you is a product of the loudness wars, it is going to sound objectively horrible.

                                    then rammed through a cheap DAC -

$3 DACs are all but perfect at this point; a lot of the difference between a $3 and a $30 DAC is the bit rates used for professional mastering, and the shielding. Finding an awful DAC these days takes real effort.

                                    so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

Really depends on the headphones: some very much show the flaws, others are just expensive and fun. Also, there is something sort of special about finding new depth in recordings through high-end gear: the tapping of a foot, the side-mic exhale, etc. I would say the TH900s are a nice pure-fun high-end headphone: 25 ohm, V-shaped, pretty to look at.

                                    1. 1

Thanks for the clarity; I agree that changing the listening environment can renew appreciation for old favorites. Having donned the primo Grados at a high-end mastering house, I am a believer.

                                    2. 3

                                      …mixed to maximize volume…

                                      This is more a matter of taste than a sound quality problem, and yes, the loudness war caused popular music to be less dynamic because loud = good.

                                      …rammed through a cheap DAC…

I will not deny that there are differences between a good DAC used in a professional audio interface and those used in cheap laptops, but even the latter are good now (except the Raspberry Pi’s), and the distortion caused by a cheap DAC is orders of magnitude lower than that of any loudspeaker. The mechanical part of reproduction is still the weak point, by far.

Monty Montgomery from xiph.org (the Ogg Vorbis guys) made an enlightening video about D/A and A/D conversion, which I can highly recommend to anyone.

                                      1. 2

All the HD-280s I have ever seen or owned died the same sad death – headband death. Either the metal strains against the plastic and breaks it, or the strain goes to the metal joint and it snaps; either way, they’re hard to repair.

                                        Also, they make a great set of earmuffs.

                                        1. 1

                                          At the end of the day, much of the music most people consume is rammed through lossy compression and mixed to maximize volume, then rammed through a cheap DAC - so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

                                          Dunno about that. I really enjoy my AKG K812 even if plugged straight in to a laptop (most of the time) or phone (sometimes). I also enjoy my Sennheiser HD 800 even if the amp that feeds them gets analog input straight from the motherboard. Yes, it can get a little noisy when the GPU is busy. I enjoy them both, generally more than my Sennheiser HD 650, even if I’m streaming lossy music from Youtube. Or music I compressed myself at a bitrate I know is transparent (or damn well close enough) from the ABXing I’ve done in the past. If anything, I feel like the AKG K701 (cheapest cans I have right now) are more revealing in terms of recording flaws.

I really don’t think the DAC and compression are a big deal, even though I do also have a collection of lossy music and an external headphone amp.

                                          1. 1

                                            I think it comes down to design intent for the cans in question - e.g. listening vs. mixing, and I do agree that technology has vastly improved since I last posted a diatribe about this. I think there’s also a matter of ear training here that affects me, as it’s not just headphone use where I hear every razzafrazzin sound in the room. I spent years developing critical listening skills and I can’t just turn them off.

                                        2. 1

                                          While I won’t argue the point of sound quality right now (because it’s all over the map), I certainly will argue about build quality.

                                          I’d be willing to bet that a much larger percentage of gear priced at $200 and above will be around in 15 years, vs. lower priced gear.

                                          The higher-end gear might not always be technically and sonically superior but it is usually built to a higher standard of quality.

                                        3. 1

                                          Oops, I fibbed, they were $26.99: https://www.amazon.com/gp/product/B06ZZSQQTD/

                                          I’m one of those “excessive research” headphone chaps and I am generally highly critical of any headphones but these right here, they are a winner for me.

                                          I should note my use cases are: using outdoor power equipment where bigger hearing protection doesn’t fit, using power tools in the shop, and blocking out noise on planes. The one place they fail, which is entirely due to the size, is for sleeping. Plus, more often than not I’m listening to podcasts, audio books, or lo-fi rock & roll where high fidelity or critical listening isn’t a factor.

                                      1. 4

                                        Help team members when they’re stuck.

                                        Plan your projects’ work.

                                        Create new projects.

For me it’s unclear how this is specific to a senior engineer; I would expect this from anyone on my team.

                                        One thing I left out is “make estimates”. Making estimates is something I’m still not very good at…

Being a senior developer is all about experience, and making estimates benefits from exactly that: work experience. I have to admit that estimating how long a project will take is hard for me as well, but this should definitely be on the list of things to expect from a senior engineer.

                                        Make sure work is allocated in a fair way.

                                        Make sure folks are working well together.

Those responsibilities fall on everyone on the team. If I see someone struggling with the amount of work they have, I try to help them. The same goes for working together as a team: if there are conflicts, one should not wait for the manager to solve them (which is hopefully not what the article assumed), because that is how a kindergarten works.

                                        In general I like jvns’ articles, but this time I can’t take much from it. Those are only my 2¢.

                                        edit: formatting

                                        1. 8

                                          For me it’s unclear how this is specific to a senior engineer

I don’t want Jr engineers planning projects, nor creating new ones. They can try to help other team members, but who knows how successful that will be; it might just be tossing hours down a hole.

                                          If I see someone struggling with the amount of work they have then I try to help them.

                                          Again, I am not sure I want Jr engineers even attempting this.

                                          if there are conflicts in the team then one should not wait on the manager to solve them

Please consider waiting for your manager or team lead – you probably don’t have all the information. Many attempts to “fix” stuff on a team of engineers make it worse. Waiting for people with more information than you to give some external feedback isn’t the mark of kindergarten; it is the mark of maturity.

                                          1. 3

                                            Agreed.

A further observation: I recently got promoted to manager, and a coworker I worked with at a previous employer got promoted to lead engineer at the same time. Eventually, we both realized that many of the responsibilities we had been considering “senior engineer” for years were actually what most folks would call lead engineer.

The gradient of skill levels can seem compressed when you spend a big slice of your career in a very high-pressure, high-performing arena. (Given the lack of diverse skill sets, you might even call such arenas dysfunctional.) Consequently, our ideas of which responsibilities are appropriate get skewed.

                                            1. 1

                                              I don’t want Jr engineers…

                                              Again, I am not sure I want Jr engineers…

Ok, I said everyone on my team, but that doesn’t mean the team has only juniors and seniors; I was referring mostly to developers with some experience who are in between the two levels.

                                              I don’t want Jr engineers planning projects nor creating new ones.

For me this is okay for smaller internal projects, and they should be involved in the process for larger ones.

                                              1. 1

                                                Ok, I said everyone in my team, but this doesn’t mean that there are only juniors and seniors

                                                Mentally replace “Jr” with “non-senior” if that makes it more clear to you.

                                                For me this is okay for internal projects of smaller scale and they should be involved in the process for larger ones.

Involved in the process, for sure; that is how they learn. Creating or planning, no. Smaller projects and internal projects both tend to grow, and most of the cost of a project is maintenance, so it should never be entered into lightly.

                                          1. 37

                                            What about dependencies? If you use python or ruby you’re going to have to install them on the server.

                                            How much of the appeal of containerization can be boiled directly down to Python/Ruby being catastrophically bad at handling deploying an application and all its dependencies together?

                                            1. 6

                                              I feel like this is an underrated point: compiling something down to a static binary and just plopping it on a server seems pretty straightforward. The arguments about upgrades and security and whatnot fail for source-based packages anyway (looking at you, npm).

                                              1. 10

                                                It doesn’t really need to be a static binary; if you have a self-contained tarball the extra step of tar xzf really isn’t so bad. It just needs to not be the mess of bundler/virtualenv/whatever.

                                                1. 1

                                                  mess of bundler/virtualenv/whatever

                                                  virtualenv though is all about producing a self-contained directory that you can make a tarball of??

                                                  1. 4

                                                    Kind of. It has to be untarred to a directory with precisely the same name or it won’t work. And hilariously enough, the --relocatable flag just plain doesn’t work.
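
                                                      The path-dependence is easy to demonstrate (paths here are illustrative): a venv bakes its creation path into its scripts, so renaming the directory breaks it.

```shell
# venvs are tied to their creation path (paths illustrative):
python3 -m venv /tmp/demo-env
grep '^VIRTUAL_ENV=' /tmp/demo-env/bin/activate   # absolute path baked in
# Console scripts (pip etc.) hardcode the interpreter path in their
# shebang line too, so after a rename they stop working:
mv /tmp/demo-env /tmp/renamed-env
/tmp/renamed-env/bin/pip --version || echo 'stale shebang'
```

                                                      The last command fails because the shebang still points at /tmp/demo-env/bin/python, which no longer exists.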

                                                    1. 2

                                                      The thing that trips me up is that it requires a shell to work. I end up fighting with systemd to “activate” the VirtualEnv because I can’t make source bin/activate work inside a bash -c invocation, or I can’t figure out if it’s in the right working directory, or something seemingly mundane like that.

                                                      And god forbid I should ever forget to activate it and Pip spews stuff all over my system. Then I have no idea what I can clean up and what’s depended on by something else/managed by dpkg/etc.

                                                      1. 4

No, you don’t need to activate the environment; this is a misconception I had as well. Instead, you can simply call venv/bin/python script.py or venv/bin/pip install foo, which is what I’m doing now.
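
                                                        A minimal sketch of that (package names are hypothetical): activation only adjusts PATH and the shell prompt, so invoking the venv’s own executables by path has the same effect and works fine from a systemd unit.

```shell
# Create a venv and use it without `source venv/bin/activate`:
python3 -m venv venv
venv/bin/python -c 'import sys; print(sys.prefix)'   # prints the venv's path
# likewise: venv/bin/pip install somepackage   (somepackage is hypothetical)
# in a unit file: ExecStart=/srv/app/venv/bin/python app.py
```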

                                                      2. 1

This is only half of the story, because you still need a recent/compatible Python interpreter on the target server.

                                                    2. 8

                                                      This is 90% of what I like about working with golang.

                                                      1. 1

                                                        Sorry, I’m a little lost on what you’re saying about source-based packages. Can you expand?

                                                        1. 2

The arguments I’ve seen against static linking are things like getting security updates through shared dynamic libs, or that the size will be gigantic because you’re including all your dependencies in the binary, but with node_modules or bundler etc. you end up with the exact same thing anyway.

                                                          Not digging on that mode, just that it has the same downsides of static linking, without the ease of deployment upsides.

                                                          EDIT: full disclosure I’m a devops newb, and would much prefer software never left my development machine :D

                                                          1. 3

                                                            and would much prefer software never left my development machine

                                                            Oh god that would be great.

                                                      2. 2

                                                        It was most of the reason we started using containers at work a couple of years back.

                                                        1. 2

Working with large C++ services (for example in image processing with OpenCV/FFmpeg/…) is also a pain in the ass because of dynamic library dependencies. You start fighting with package versions, and each time you want to upgrade anything you’re in a constant struggle.

                                                          1. 1

                                                            FFmpeg

                                                            And if you’re unlucky and your distro is affected by the libav fiasco, good luck.

                                                          2. 2

Yeah, dependency locking wasn’t a (popular) thing in the Python world until pipenv, but honestly I’ve never had any problems with… any language package manager.

                                                            I guess some of the appeal can be boiled down to depending on system-level libraries like imagemagick and whatnot.

                                                            1. 3

Dependency locking really isn’t a sufficient solution. First, you almost certainly don’t want your production machines all going out and grabbing their dependencies from the internet. Second, as soon as you use e.g. a Python module with a C extension, you need to pull in all sorts of development tooling that can’t even be expressed in the Pipfile or whatever it is.

                                                            2. 1

You can add Node.js to that list.

                                                              1. 1

                                                                A Node.js app, including node_modules, can be tarred up locally, transferred to a server, and untarred, and it will generally work fine no matter where you put it (assuming the Node version on the server is close enough to what you’re using locally). Node/npm does what VirtualEnv does, but by default. (Note if you have native modules you’ll need to npm rebuild but that’s pretty easy too… usually.)
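
                                                                  A local sketch of that round trip (the app, file names, and target paths are made up; on a real deploy the tarball would be copied to the server first):

```shell
# Stand-in for a real app; normally `npm ci` would populate node_modules
# from the lockfile before packaging:
mkdir -p app/node_modules
echo 'console.log("hello")' > app/index.js
tar czf app.tgz app                       # node_modules travels in the tarball
mkdir -p /tmp/server && tar xzf app.tgz -C /tmp/server
ls /tmp/server/app                        # index.js and node_modules arrived
# On a real target: copy app.tgz over, untar it, and for native modules
# run `npm rebuild` in the untarred directory.
```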

                                                                I will freely admit that npm has other problems, but I think this aspect is actually a strength. Personally I just npm install -g my deployments which is also pretty nice, everything is self-contained except for a symlink in /usr/bin. I can certainly understand not wanting to do that in a more formal production environment but for just my personal server it usually works great.

                                                              2. 1

                                                                Absolutely but it’s not just Ruby/Python. Custom RPM/DEB packages are ridiculously obtuse and difficult to build and distribute. fpm is the only tool that makes it possible. Dockerfiles and images are a breeze by comparison.

                                                              1. 2

                                                                Flatpak is a definite no for me as long as they think it’s acceptable to dump things into $HOME. It’s 2018. No new application should do this.

                                                                1. 3

                                                                  Can you elaborate on this? What do they dump in $HOME and where exactly? You can’t change it?

                                                                  1. 0

                                                                    Flatpak creates its own .var directory in $HOME.

                                                                    1. 3

                                                                      What’s wrong with that?

                                                                      1. 0

                                                                        It’s my home directory, not the application’s.

                                                                  2. 3

                                                                    I have the same question as @andyc. Do you think of applications that create files, like rc-files, or folders on $HOME directory level or does this even include subfolders of the XDG base directories, e.g. XDG_CONFIG_HOME (~/.config/<application>)?

                                                                    Update:

I just installed an application via Flatpak and checked which folders were created or modified; it showed that Flatpak does not respect the XDG base directory specification. Instead, the application was installed into .var/app/. I assume that this is what you’re referring to?

                                                                    1. 3

                                                                      Yes, I was referring to the .var directory.

                                                                      But according to the Flatpak developers Flatpak adheres to the XDG spec and .var is “nothing to see here”: https://github.com/flatpak/flatpak.github.io/issues/191

                                                                    2. 2

While I agree with you that they should have used a directory that adheres to the XDG spec - like ~/.local/var - instead of ~/.var, they aren’t dumping configuration or any other files besides that directory into $HOME. I would still like to see an explanation of why the ~/.var directory was necessary. Apparently they decided to go that route after a discussion that included XDG devs.

                                                                      1. 3

                                                                        It has been a common issue of application developers to believe that their app is special and should be exempt from the rules. I have seen it many times, but the Flatpak devs invented a whole new level of entitlement.

                                                                        1. 2

                                                                          What’s supposed to be the correct way to do this?

                                                                    1. 4

Surely I’m not the only one expecting a comparison here with Go’s GC. I’m not really well versed in garbage collection, but this appears to mirror Go’s quite heavily.

                                                                      1. 12

                                                                        It’s compacting and generational, so that’s a pair of very large differences.

                                                                        1. 1

My understanding, and I can’t find a link handy, is that the Go team is on a long-term path to change their internals to allow for a compacting and generational GC. There was something about the Azul guys advising them a year+ ago, iirc.

Edit: I’m not sure what the current status is, as I haven’t been following, but see this thread from 2012 and look for Gil Tene’s comments:

                                                                          https://groups.google.com/forum/#!topic/golang-dev/GvA0DaCI2BU

                                                                          1. 4

This presentation from this July suggests they’re averse to taking almost any regressions now, even if they get good GC throughput out of it. rlh tried freeing garbage at thread (goroutine) exit if the memory wasn’t reachable from another thread at any point, which seemed promising to me but didn’t pan out. aclements did some very clever experiments with fast cryptographic hashing of pointers to allow new tradeoffs, but rlh seemed doubtful about even that approach’s prospects in the long term.

                                                                            Compacting is a yet harder sell because they don’t want a read barrier and objects moving might make life harder for cgo users.

                                                                            Does seem likely we’ll see more work on more reliably meeting folks’ current expectations, like by fixing situations where it’s hard to stop a thread in a tight loop, and we’ll probably see work on reducing garbage through escape analysis, either directly or by doing better at other stuff like inlining. I said more in my long comment, but I suspect Java and Go have gone on sufficiently different paths they might not come back that close together. I could be wrong; things are interesting that way!

                                                                            1. 1

                                                                              Might be. I’m just going on what I know about the collector’s current state.

                                                                          2. 10

                                                                            Other comments get at it, but the two are very different internally. Java GCs have been generational, meaning they can collect common short-lived garbage without looking at every live pointer in the heap, and compacting, meaning they pack together live data, which helps them achieve quick allocation and locality that can help processor caches work effectively.

                                                                            ZGC is trying to maintain all of that and not pause the app much. Concurrent compacting GCs are hard because you can’t normally atomically update all the pointers to an object at once. To deal with that you need a read barrier or load barrier, something that happens when the app reads a pointer to make sure that it ends up reading the object from the right place. Sometimes (like in Azul C4 I think) this is done with memory-mapping tricks; in ZGC it looks like they do it by checking a few bits in each pointer they read. Anyway, keeping an app running while you move its data out from under it, without slowing it down a lot, is no easier than it sounds. (To the side, generational collectors don’t have to be compacting, but most are. WebKit’s Riptide is an interesting example of the tradeoffs of non-compacting generational.)
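A minimal sketch of the colored-pointer idea described above (the bit layout, masks, and function names here are invented for illustration, not ZGC’s actual scheme): a few metadata bits ride along in each 64-bit pointer, and the load barrier only takes a slow path when those bits don’t match the current GC phase.

```go
package main

import "fmt"

// Illustrative layout: low 48 bits hold the real address, a few high
// bits hold the GC "color". Real collectors choose these differently.
const (
	addressMask  uint64 = 0x0000FFFFFFFFFFFF
	metadataMask uint64 = ^addressMask
)

// goodColor is what the collector would flip at each phase change.
var goodColor uint64 = 0x0001000000000000

// loadBarrier runs on every pointer load. The fast path is just a mask
// and compare; a real slow path would remap a stale pointer to the
// object's new location and heal the field it was loaded from.
func loadBarrier(colored uint64) (addr uint64, slowPath bool) {
	if colored&metadataMask == goodColor {
		return colored & addressMask, false // fast path
	}
	// Slow path (simulated here): remap, then return the address.
	return colored & addressMask, true
}

func main() {
	fresh := goodColor | 0x1000         // colored in the current phase
	stale := uint64(0x0002000000001000) // colored in an earlier phase
	a, slow := loadBarrier(fresh)
	fmt.Println(a, slow) // 4096 false
	b, slow2 := loadBarrier(stale)
	fmt.Println(b, slow2) // 4096 true
}
```

The point of keeping the fast path this cheap is that it runs on every pointer read the application makes.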

                                                                            In Go all collections are full collections (not generational) and no heap compaction happens. So Go’s average GC cycle will do more work than a typical Java collector’s average cycle would in an app that allocates equally heavily and has short-lived garbage. Go is by all accounts good at keeping that work in the background. While not tackling generational, they’ve reduced the GC pauses to more or less synchronization points, under 1ms if all the threads of your app can be paused promptly (and they’re interested in making it possible to pause currently-uncooperative threads).

                                                                            What Go does have going for it throughput-wise is that the language and tooling make it easier to allocate less, similar to what Coda’s comment said. Java is heavy on references to heap-allocated objects, and it uses indirect calls (virtual method calls) all over the place that make cross-function escape analysis hard (though JVMs still manage to do some, because the JIT can watch the app running and notice that an indirect call’s destination is predictable). Go’s defaults are flipped from that, and existing perf-sensitive Go code is already written with the assumption that allocations are kind of expensive. The presentation ngrilly linked to from one of the Go GC people suggests at a minimum the Go team really doesn’t want to accept any regressions for low-garbage code to get generational-type throughput improvements. I suspect the languages and communities have gone down sufficiently divergent paths about memory and GC that they’re not that likely to come together now, but I could be surprised.
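To make the “easier to allocate less” point concrete, a small hypothetical example: returning a struct by value lets escape analysis keep it on the stack, while returning a pointer that outlives the call forces a heap allocation the GC must later collect. `go build -gcflags='-m'` prints the compiler’s escape-analysis decisions.

```go
package main

import "fmt"

type point struct{ x, y int }

// midValue returns by value: escape analysis keeps everything on the
// stack, so this call creates no garbage at all.
func midValue(a, b point) point {
	return point{(a.x + b.x) / 2, (a.y + b.y) / 2}
}

// midPtr returns a pointer that outlives the call frame, so the struct
// escapes to the heap and becomes work for the garbage collector
// (`go build -gcflags='-m'` reports the literal as escaping).
func midPtr(a, b point) *point {
	return &point{(a.x + b.x) / 2, (a.y + b.y) / 2}
}

func main() {
	p := midValue(point{0, 0}, point{4, 6})
	q := midPtr(point{0, 0}, point{4, 6})
	fmt.Println(p.x, p.y, q.x, q.y) // 2 3 2 3
}
```

Performance-sensitive Go code tends to be written in the first style, which is part of why the GC has less to do in the first place.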

                                                                            1. 1

                                                                              One question that I don’t have a good feeling for is: could Go offer something like what the JVM has, where there are several distinct garbage collectors with different performance characteristics (high throughput vs. low latency)? I know simplicity has been a selling point, but like Coda said, the abundance of options is fine if you have a really solid default.

                                                                              1. 1

                                                                                Doubtful they’ll have the user choose; they talk pretty proudly about not offering many knobs.

                                                                                One thing Rick Hudson noted in the presentation (worth reading if you’re this deep in) is that if Austin’s clever pointer-hashing-at-GC-time trick works for some programs, the runtime could choose between using it or not based on how well it’s working out on the current workload. (Which it couldn’t easily do if, like, changing GCs meant compiling in different barrier code.) He doesn’t exactly suggest that they’re going to do it, just notes they could.

                                                                              2. 1

                                                                                This is fantastic! Exactly what I was hoping for!

                                                                              3. 4

                                                                                There are decades of research and engineering efforts that put Go’s GC and Hotspot apart.

                                                                                Go’s GC is a nice introductory project, Hotspot is the real deal.

                                                                                1. 4

                                                                                  Go’s GC designers are not newbies either and have decades of experience: https://blog.golang.org/ismmkeynote

                                                                                  1. 2

Google seems to be the nursing home of many people who had one lucky idea 20 years ago and are content with riding on their fame until retirement, so “famous person X works on it” doesn’t mean much when associated with Google.

The Train GC was quite interesting in its time, but the “invention” of stack maps is just like the “invention” of UTF-8: if it hadn’t been “invented” by random person A, it would have been invented by random person B a few weeks or months later.

                                                                                    Taking everything together, I’m rather unconvinced that Go’s GC will even remotely approach G1, ZGC’s, Shenandoah’s level of sophistication any time soon.

                                                                                  2. 3

For me it is kind of amusing that huge amounts of research and development went into the HotSpot GCs, but on the other hand there seem to be no sensible defaults, because there is often the need to hand-tune their parameters. In Go I don’t have to jump through those hoops, and I’m not advised to, but I still get very good performance characteristics, at least comparable to (in my humble opinion even better than) those of a lot of Java applications.

                                                                                    1. 13

                                                                                      On the contrary, most Java applications don’t need to be tuned and the default GC ergonomics are just fine. For the G1 collector (introduced in 2009 a few months before Go and made the default a year ago), setting the JVM’s heap size is enough for pretty much all workloads except for those which have always been challenging for garbage collected languages—large, dense reference graphs.

                                                                                      The advantages Go has for those workloads are non-scalar value types and excellent tooling for optimizing memory allocation, not a magic garbage collector.

                                                                                      (Also, to clarify — HotSpot is generally used to refer to Oracle’s JIT VM, not its garbage collection architecture.)

                                                                                      1. 1

                                                                                        Thank you for the clarification.

                                                                                  3. 2

                                                                                    I had the same impression while reading the article, although I also don’t know that much about GC.

                                                                                  1. 7

I just bought my first desktop synthesizer, a Behringer Neutron, plus a bunch of patch cables, and updated Bitwig Studio to the latest beta. This will be a Friday evening full of slowly evolving soundscapes :)

                                                                                    1. 2

                                                                                      I’m getting automatically redirected to the HTTPS version in Firefox but their certificate expired. I don’t have https-everywhere installed.

                                                                                      1. 26

Given how many times over the years I’ve had journald completely hose itself and freeze apps running on production systems [1], I don’t find his arguments exceptionally compelling. I had far more problems with journald/journalctl than I ever did with various syslog implementations. Yes, you can still install syslog, but journald still gets the logs first and then forwards/duplicates the data to syslog.

                                                                                        Maybe journald is better now? Been a couple of years since I had to deal with it on high volume log systems. At the time we ended up using a program wrapper (something similar to logexec) that sent the logs directly to syslog, and avoided systemd/journald log handling entirely.

                                                                                        [1]: app outputting some log data, journald stops accepting app output, app stdout buffer fills, app freezes blocking on write to stdout

                                                                                        1. 7

                                                                                          I see. Well nothing beats real world experience, so thank you very much for sharing that!

                                                                                          1. 5

                                                                                            For me it’s quite the opposite, I never had any issues with journald, neither in production nor in development environments.

                                                                                            1. 4

Seconded. I actually quite like that I can see all my logs the same way without setting anything up on my side. With syslog I’d have to tell every program where to log, and the systemd combo just takes away that manual burden.

                                                                                              1. 3

                                                                                                “works for me”

                                                                                              2. 4

                                                                                                I had this experience too, but that was because journald was hanging due to my disks being slow as molasses (I had deeper problems). I’m honestly not sure whether to blame journald for that.

                                                                                              1. 4

I have also been a happy Arch Linux user for about three years, without any reason to switch, even though I tried almost every major Linux distribution in the years before.

                                                                                                The things I like most about it are:

• its rolling release model, so I get small incremental changes rather than one huge and fragile system update.
                                                                                                • that I get almost vanilla upstream packages.
                                                                                                • that I can get almost everything from the Arch User Repositories.
                                                                                                • that Arch’s package format is easy to learn.

                                                                                                It is not surprising to me that Arch is developer friendly because I would assume that almost no one without programming knowledge is using it.

                                                                                                1. 2

                                                                                                  Not really something I’m working on but I am excited to attend GopherCon Iceland. So, if anybody wants to meet please leave me a message!

                                                                                                  1. 2

                                                                                                    My only realistic hope this year is local conferences, feels like, so I might attend ApacheCon in September. I initially wanted to do Gophercon Iceland, but that’s not happening (my going there I mean)

                                                                                                    1. 1

I am happy to hear that there is a European GopherCon in Iceland!

                                                                                                    1. 4

Not a very helpful non-answer incoming, more like a “me too” ;)

                                                                                                      My go-to conference every year (since 2013, with a break) is FOSDEM, so I’m planning this for 2019.

                                                                                                      I went to PolyConf in 2017 and loved it, but apparently it’s not happening this year :( (still hoping, maybe in the late fall?)

So I’m also a bit unsure if there’s something interesting for me; I’m not doing PHP anymore (the Unconference in Hamburg was always excellent) and I’m not doing Go anymore…

                                                                                                      Chaos Communication Congress would be nice, but I’ve no time for it this year.

                                                                                                      1. 1

I totally forgot about FOSDEM, what a great conference for open source software. But I was not too happy with the technical infrastructure of the ULB in Brussels; often the microphones didn’t work or the projector resolution was too small to show meaningful demos. This is the reason I did not go there this year, but I will give it a second chance in 2019.

                                                                                                        1. 1

                                                                                                          I enjoyed PolyConf 17 as well. Especially as it was the beginning of a Paris vacation for me.

                                                                                                        1. 17

                                                                                                          Knowing german, I read this way differently than the title is trying to convey initially.

                                                                                                          1. 8

                                                                                                            For the non-german folks here, du is the german personal-pronoun for you so the title reads: Like you but more intuitive :)

                                                                                                            1. 2

                                                                                                              It’s a bit more complex than that: German retained the T-V distinction, which means it has two forms of singular second person pronoun, one for people you’re close to and one for people you’re not close to. Sie is the pronoun for people you’re not close to, du is the one for people you are close to. It also has two forms of the second-person plural pronoun, ihr for people you’re close to and, again, Sie for people you’re not close to.

                                                                                                              1. 2

Still, it translates to the same thing, and I don’t know of any way to preserve that intent in English.

I always thought the du/Sie distinction makes German very formal, but it also seems very ingrained in the culture. The distinction also existed in Swedish, but it disappeared there, and it is so rare in Denmark that I can’t remember when I last saw it. That is something I couldn’t imagine happening in Germany.

                                                                                                                1. 3

                                                                                                                  Sweden is such a small country that a reform of this type, made in the heady days of the 60s, got traction very easily.

                                                                                                                  As a bank cashier in the late 80s I’d sometimes refer to customers using “ni” and occasionally get pushback from people of the “68 generation”.

                                                                                                            2. 5

                                                                                                              Also “dust” means dork or idiot in Norwegian.

                                                                                                              1. 1

This app was previously called Kuler, and I used it often in the past, when I still had time to do some graphic design.

                                                                                                                1. 1

                                                                                                                  I wrote my own, though not as fancy