1. 2

    My server is hosted at netcup.de, never had a problem and service is pretty quick, can recommend!

    There are benchmarks, as well!

    1. 1

      Where’s the alternative? The only thing I can find is an empty repository and a blog post with half baked ideas.

      1. 10

        After using Erlang and Go I don’t know why people keep choosing Go. Channels flip everything upside down whereas messages and inboxes are a much more natural way to manage concurrency. Coupled with Erlang’s first-class support for introspecting processes, it isn’t even a competition.

        A few jobs back I was at a Go shop, and to put it mildly, it was mostly a mess. One example that comes to mind was a library that had no way to stop a goroutine it launched, because its channel protocol simply made no provision for it. So if you used this library and had timeout constraints, whatever goroutine was launched would leak: you wouldn’t get the result in time and wouldn’t be able to tell it to stop. In Erlang this would not have been a problem because we could have asked the runtime to kill the runaway/broken process. If you’re wondering why the library wasn’t fixed it’s because that would have required changing all the call sites to pass in extra arguments to pass in the pieces for telling the goroutine to stop and it was just too much hassle to bother. In the end for that specific use case someone rolled a custom solution that bypassed the library (exactly as the Go designers intended I guess).

        Go is an almost adequate language built in the Unix tradition. There are better tools but we just keep going for the hacky and simplistic solutions.

        1. 3

          My intro to Erlang was awful. The library I happened to be using would return [x,y] or (x,y) or [x, (y)] or what have you, with no rhyme or reason. Of course there are no compile errors, because it’s all dynamic. And bonus fun: it doesn’t actually crash where you expect, because various list and tuple patterns can unpack each other. So you plow forward and only die later, when it turns out that y was really [y].

          1. 4

            I think that’s a generic problem for all dynamically typed languages. Fortunately Erlang has dialyzer: http://erlang.org/doc/man/dialyzer.html.

            1. 3

              This is one of the few areas where you get a feel for it over time. “Modern” Erlang uses type specifications (see the Dialyzer link) and has compile-time checking; sadly the error messages are not up to Elm or Rust standards, and it’s not full HM like OCaml etc., but it’s pretty useful. It sounds like the library you’re using is not very functional in design. I generally like to keep these as functional as possible, possibly with a sum-type return if inline error handling is needed.

              That aside, Erlang’s strong points are superb concurrency and robustness under load. If that’s what you need, then it’s really good. If you need something else, choose something else. Single binaries? Inline assembler? Use another tool, and let Erlang do the coordination or network work. Pragmatic.

            2. 2

              If you’re wondering why the library wasn’t fixed it’s because that would have required changing all the call sites to pass in extra arguments to pass in the pieces for telling the goroutine to stop and it was just too much hassle to bother. In the end for that specific use case someone rolled a custom solution that bypassed the library (exactly as the Go designers intended I guess).

              So you’re implying that Go is a bad language because you worked at a place where seemingly nobody cared about proper software design? Your argument works for every language; simply replace Go with, for example, Java.

              1. 3

                I’m implying Go is a simplistic language that paints programmers into corners and other languages don’t have the same issues because they don’t treat programmers like children. Surprisingly, Go is an instance where the usual social problems play second fiddle to its technical issues.

                A quote from the horse’s mouth:

                The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent

                1. 5

                  The contempt he has for his own engineers is striking. Does Google hire the best, or doesn’t it?

                  1. 3

                    Google is a machine for printing money. You tell me what good engineers would do at Google.

                    But to answer your question directly I don’t think Google top brass cares about the skill of their engineers. I think Google now is mostly a resume signaling mechanism and my plan is to get hired and quit so I can claim to be ex-Google.

                    For anyone at Google that is totally not my plan so you should refer me and I promise not to quit within the first week.

              2. 2

                Channels flip everything upside down whereas messages and inboxes are a much more natural way to manage concurrency.

                Please could you explain that a bit more? I have written a bit of Go but no Erlang. Until I read your comment, I thought both handled concurrency in a similar way, i.e. CSP-style message passing. What’s the difference? How do channels “flip everything upside down” compared to Erlang’s message passing? Thank you!

              1. 23

                Put me squarely in the don’t understand the webcam stickers camp. What’s on my screen is 99% more likely to be interesting than what’s in front of it. Like, why try to extort me with a video of me picking my nose when you can just remote drive my browser and empty my bank account. And then there’s the whole microphone thing. It’s hard to imagine a threat model where webcam stickers are relevant.

                1. 48

                  I was squarely in the same camp… until WebEx started my video on a call when I didn’t want it to, and a nice view of me (and my wife!) in bed wearing pyjamas (I was dialling in from 6 timezones ahead to listen to a town hall meeting) was projected on the wall for everyone to enjoy.

                  I’m not worried about evil malware, I’m worried about WebEx ;-)

                  1. 19

                    I had that happen with google hangouts while I was listening to a call on the toilet. That was a bad moment. With “continuous deployment” this is bound to happen unpredictably.

                    1. 9

                      Yup, IMO badly-written conference calling software is a much more realistic and everyday threat than teh evil hackerz. WebEx and Hangouts and other systems seem to be constantly changing their UI and behavior, yet always seem to really want to broadcast video. And then sometimes pop up some other modal dialog blocking the buttons to stop it. It’s worth it IMO to definitely never ever send out video unless I’ve explicitly okayed it first, no matter what some marketing manager thinks would help them increase their engagement by 1%.

                      1. 6

                        I was fortunate enough to be dressed when it happened to me. Conference software has the worst defaults.

                      2. 42

                        My threat model isn’t malicious attackers as much as incompetence. I use webcam covers in case (1) a program that I trust has some mindblowing lapse in competence and turns on my webcam unexpectedly or (2) I fat-finger a video call button without noticing.

                        1. 4

                          Ah, I hadn’t thought about that too much since I rarely use such software. Also, I think this thread is the first time I’ve seen someone mention that. It’s always the evil hackers that get blamed instead.

                          1. 2

                            Do you use a webcam cover on your smartphone too?

                            1. 5

                              I removed the front camera in my phone. It was useless for me, and I didn’t like the idea of never knowing if some app was using it.

                            2. 1

                              But what’s the big difference compared to disabling the webcam in your BIOS settings?

                              1. 18

                                With a piece of electrical tape over the camera I can “re-enable” it in seconds without rebooting for the times when I do actually need it. Disabling it in the BIOS is a good option if you know you’ll really never need it though.

                                1. 1

                                  Fair enough, but for someone who never needs it, this doesn’t really change a lot…

                                2. 14

                                  Stickers/covers are simple in every aspect of their operation.

                                  1. 4

                                    Exactly! Most people’s understanding of stickers/covers allows them a fairly high degree of confidence that it’s working. You can hold the sticker up to the light to confirm that it’s opaque to visible light and you can see that it covers the lens. You can also run a camera application to see what it can see. By comparison, it is incredibly difficult to confirm that a BIOS setting does what it says it does.

                              2. 14

                                It’s hard to imagine a threat model where webcam stickers are relevant.

                                Porn, and whacking off to it. I believe one Black Mirror episode was centered on that. I think blackmail based on such footage is a credible threat even if you’re not into kinky/illegal stuff. And even apart from anything as sleazy as that, there’s something quite disturbing about a random person essentially being inside your house, looking around, with you having no clue about it.

                                1. 4

                                  The real threat seems to be people worried about the threat, given all the “I caught you visiting a naughty site, you know which one, pay me bitcoins” spam I get.

                                  1. 4

                                    You do understand that there’s a pretty big difference between the two situations, right? Someone leaking that you visited a naughty site isn’t really comparable to someone leaking pictures or video of you.

                                    1. 0

                                      The scam threat obviously includes “I hacked your webcam” blah blah. Sorry for not posting the entire spam here.

                                      1. 1

                                        Right, that makes sense. I’ve never actually read such a spam e-mail; if I get any, they just end up caught in the spam filter.

                                        You would presumably take the threat more seriously if someone contacted you with some actual proof, such as showing an actual image of you naked taken from your webcam?

                                        1. 1

                                          I’ve had this email a few times, and they spoof the sender address to make it look like it came from your own email address. This at least gives the illusion of them having hacked you specifically.

                                    2. 2

                                      In a lot of the country, getting caught viewing porn can hurt your career or your ability to run for office. It’s hypocritical, given that lots of people in those same areas watch porn. It’s a reality, though. This is also true for lots of other habits or alternative lifestyles cameras might reveal.

                                      1. 4

                                        In some countries, any consumption of anything deemed immoral can have even more devastating consequences. I know a guy from a small Persian Gulf country — a son of a late imam too — who was scammed for a few thousand euros recently by a con-artist he found on Grindr.

                                        Losing a few thousand euros is not the harsh consequence in this scenario.

                                  2. 10

                                    I mostly agree, but I don’t think you should need to choose. I’d prefer hardware switches for the microphone, webcam, and wireless, and allowing only whitelisted HID device instances to be active.

                                    As I see it, Microsoft and (even more so) Apple have started to realize that there is user demand for more privacy. The next Windows update will notify you when there is an active microphone recording going on, for example. I think this is not a bad direction, but too little, too late for my taste.

                                    I also think it is a design flaw that in current Windows versions it is still so simple to globally register every keystroke, and that Windows UWP and Android bundle so many capabilities into groups, yet still make you grant an app a capability either in advance or once and forever…

                                    I don’t have much experience with Apple products.

                                    Edit: regarding webcams:

                                    You need to take into account that the line between digital and physical life is getting thinner and blurrier. I often leave my machine running when I leave home, as it is energy efficient and I might need to log in remotely, or a download is running in the background. A malicious actor could get information about my physical whereabouts, or about an opportunity for home invasion, should they deem it profitable.

                                    1. 1

                                      started to realize

                                      This is hardly new. Apple’s 2003 external webcam model, the iSight, included a manual iris shutter/switch that rotated to both disable the device and physically obstruct the camera. Fashions change.

                                    2. 4

                                      There’s a mix of bad things people are doing right now and some things they could do with it that they’ll figure out eventually. I’m not writing about the latter since I prefer them to be delayed.

                                      For now, I’m for being able to totally disable inputs, specific wireless, etc. for a simple reason: no access by default until it’s needed (POLA). No power by default until it’s needed, if available. I could try to guess every bad thing that can happen with risky peripherals, or I can just shut them down when not using them. Covering my webcam is an easy way to shut down its vision. My old laptop had a wireless switch, too. My old speakers didn’t act up when I had to turn something down quickly, since the knobs actually worked: the last turn killed the power.

                                      On a related note, I also buy old, dumb appliances without smart anything. They also last longer, are cheaper, and have no smart anything for people to hack. If there’s a risk from hackers, just eliminate it where it’s easy. Then, don’t think about it again.

                                      1. 3

                                        Funny, I had never even thought of tape over the webcam as a security measure.

                                        For me it’s entirely there to make sure I’m not on camera when I join meetings unless I explicitly want to be.

                                        1. 1

                                          If you buy a new laptop, there is no choice between one with or without a webcam. I don’t need it and never use it, ergo I put a sticker on the camera: a simple and pragmatic solution.

                                          1. 1

                                            Well someone can take over your bank account and take your photo.

                                          1. 2

                                             It will be my second FOSDEM and I’m anticipating the Go and Rust tracks with a lot of interesting talks; luckily they’re held separately, on the first and second day.

                                            1. 3

                                               This looks a bit overengineered to me; I rely solely on GNU make, patch and git² to install and update my dotfiles and don’t miss anything. OK, I have to admit that I don’t synchronize secrets, since I keep them in KeePass containers and only use the dotfiles across various Linux distributions¹. Also, I don’t think I want to mix secrets with my dotfile environment.

                                              ¹ My simple solution should work on BSDs too.

                                               ² I expect vim and docker to be present, but the latter is only required if you want to preview the environment. Update: I forgot to mention that I also require Rust at the moment, since my shell prompt is generated by a small Rust application. This has the advantage that I don’t have to care what the escape sequences for different shell prompts are. This dependency could be removed by cross-compiling binaries for rusty-prompt.

                                              Edit: Add second footnote.

                                              1. 2

                                                I know how you feel but it does seem to add a lot of value.

                                                For instance - the templating feature. I tried using Homesick for years but eventually gave up on it because I had dot files in different locations evolving in different ways.

                                              1. 17

                                                 I’ll stick to GitLab: new exciting features come out every month, I can self-host, and I love GitLab’s built-in CI system as well. I can now do chatops easily with Mattermost.

                                                1. 16

                                                  An important reason I use GitHub is for the “network effect”. Everyone and their dog is on GitHub, so it’s just practical.

                                                  If I had my say, I’d still be using mercurial and BitBucket. But at some point someone “forked” my repo by converting it to git and uploading it to GitHub with their fix (I found out accidentally a year later), so then I decided to just migrate my stuff to GitHub.

                                                  Especially for my personal stuff, the differences between GitHub, GitLab, BitBucket, etc. aren’t large enough to warrant losing sleep over, so I just opt for the most practical solution.

                                                  1. 5

                                                     I use GitLab at work, where it was kind of a hassle to deal with in the beginning, but over the last 1.5 years it has become a great product that I really like using. As you already said, the integrated CI system is great, if not the best I’ve ever used. We also have some Mattermost integrations for chatops which were straightforward to set up.

                                                    1. 4

                                                      Has its performance gotten better? Every time I’ve tried gitlab its performance and hardware resource requirements have been a blocker.

                                                      1. 4

                                                        With limited hardware resources I would recommend something like gitea. Gitlab is running on a beefy machine at work so performance is not a problem.

                                                        1. 1

                                                          I’d love to see a gitea for mercurial.

                                                          1. 2

                                                             For Mercurial, both Kallithea and RhodeCode can run fine on a box with 1 CPU and 1 GB of RAM, without even needing a database installed, as they can run on SQLite.

                                                    2. 3

                                                       I also use GitLab and I’m content with its CI, or I was until recently. I find it confusing, and more than version control and CI, it aims to become an all-in-one solution, much like Azure DevOps (formerly Microsoft Team Foundation Services).

                                                       I understand that this is a strategic goal for them and that they develop in that direction; I’m okay with it.

                                                       As a user who hasn’t logged in for a few months, I log in, create a new project to get a CI pipeline, and I’m overwhelmed with UI changes, something something Kubernetes… complexity and unneeded feature-related changes and options everywhere. I’m not their target audience anymore, and they may lose me very soon, as I don’t have the resources to re-learn their platform every few months just to disable unneeded features.

                                                      1. 3

                                                         I can understand your frustration; they do throw a lot of new features at you. Personally, I’m a sysadmin who also writes a lot of code, and my sysadmin side gets excited about all the new features built into one nicely contained solution for us.

                                                    1. 14

                                                       Can someone explain to me why this article is getting so many upvotes, since it basically just tells you to think before introducing Kubernetes? The article links to another of her articles, Building Container Images Securely on Kubernetes, which is far more interesting in my opinion.

                                                      Edit: formatting

                                                      1. 7

                                                        The author is respected for their knowledge in this area and has done a lot for the community.

                                                        1. 24

                                                          Basically, having one of the highest authorities on containers say that people are overthinking how they approach containers is a nice thing to hear, particularly as people have to fight overgrown infrastructure.

                                                          1. 3

                                                            ^ This

                                                          2. 4

                                                             Although I’m all for separation architectures, I did like her write-up listing many of the security practices container tech uses, with a nice little chart. I saved it in case I ever want to follow up on them as extra layers, do work improving them from a high-security perspective, or (most likely) share them with anyone using containers who could benefit. Just an example supporting your point.

                                                        1. 12

                                                          I don’t like tools that make using awk or cut harder.

                                                             The output could be improved by dropping the parentheses around the bytes, or by adding a flag.

                                                          1. 5

                                                            Tools should have optional JSON output so I can use jq queries instead of awk or cut :P

                                                            1. 4

                                                              I really like jq too :)

                                                              1. 2

                                                                https://github.com/Juniper/libxo

                                                                This is integrated in most FreeBSD utilities.

                                                                1. 3

                                                                  We should all switch to Powershell, where values have types and structure, instead of switching to a different text format.

                                                              1. 7

                                                                 clickbaity title; ‘Stop Building Single Page Apps’ would be simpler and more informative

                                                                1. 1

                                                                   And the dude needs to dial the font size down a LOT. I had to view it at 70% of the original size just to make it readable.

                                                                  1. 4

                                                                    Then others have to increase the font size again. For me the font size is pretty reasonable (27” WQHD). I often use Firefox’s reader mode when a page is hard to read, especially for forums and the like.

                                                                    EDIT: typos, formatting

                                                                    1. 1

                                                                      80% for me, but yeah kinda painfully big.

                                                                  1. 1

                                                                    @hwayne I just discovered your 2017 talk at StrangeLoop about TLA+ and really enjoyed it! The talk sparked my interest so much that I decided to buy your book Practical TLA+ right away.

                                                                     The examples you presented in the talk were mostly about verifying models before their implementation, which is without question a proper use case, but would it be practical to specify, e.g., methods in legacy software that don’t have tests, in order to refactor them safely?

                                                                    Update: added a question

                                                                    1. 2

                                                                       Glad you enjoyed it! I know a lot of people have successfully used it for refactoring legacy systems. I don’t know of anyone who’s used it at the method level, which might be a little too low, but it’s worth a shot IMO.

                                                                    1. 5

                                                                      First of all - congratulations on the release! This looks cool and I’ll definitely try it out.

                                                                      So to ask an audio developer anything: How did you (and how can I) get into DSP/audio programming? I’m thinking mostly resources to learn both the concepts and math of DSP, as well as the tricks of the trade in writing fast DSP code. It seems like if you want to learn ML, or compilers, or OS-design, etc, there are piles of good books, tutorials and videos available – but I’m having trouble finding good resources to learn audio stuff. Do you have any tips?

                                                                      1. 11

                                                                        I had my introduction to signal processing with a course at the university. At least I can recommend some books for you:

                                                                         PS: I think lobste.rs needs a dsp tag.

                                                                        Edit: typos.

                                                                        1. 1

                                                                          There’s at least two tags we need that will cover lots of articles that people might filter if we get too specific on application area or arcane internals users of black-box tools don’t need to know. Relevant here is “parallel:” techniques for parallel programming. It’s already a huge field that HPC draws on. It can cover DSP, SIMD, multicore, NUMA, ways of stringing hardware together, parallel languages, parallelizing protocols, macros/libraries that parallelize, and so on. I plan to ask the community about the other one after I collect relevant data. Mentioning this one here since you already brought it up.

                                                                        2. 4

                                                                          Hey, thanks! Do feel free to reach out and let me know what you think.

                                                                          With regards to DSP literature – klingtnet has provided some great resources already, so I’ll just talk a little about my path. My background has always just been in development, and my math has always been weak. Hence, the best resources for me were studying other people’s code (for which github is a particularly great resource) and figuring out enough math to implement research papers in code.

                                                                           Audio DSP still has this weird thing going on where companies in the space are generally incredibly guarded about their algorithms and approaches, but there are a few places where they’ll talk a little more openly. For me, those have been the music-dsp mailing list and the KVR audio DSP forum. The KVR forum in particular has some deep knowledge corralled away; I always search through there when I start implementing something to see how others have done it.

                                                                           And one final little tidbit about DSP: in real time, determinism is key. An algorithm that is brutally fast on average but occasionally very slow can be less useful than a slower but more consistent one. Always assume you’ll hit the pessimal case right when it’s most damaging; in this industry, those moments are when a user is playing to a crowd of tens of thousands.

                                                                          That being said, I’d encourage just jumping in! Having a good ear and taste in sound will get you further than a perfect math background will.

                                                                          1. 4

                                                                            https://jackschaedler.github.io/circles-sines-signals/index.html is a really well done interactive intro to the basics (note that the top part is the table of contents, you’ll have to click there to navigate).

                                                                            1. 1

                                                                              Thanks a lot for this, I just finished and felt like I finally got some basic things that eluded me in the past. Good intro!

                                                                          1. 3

                                                                            Opus 1.3 includes a brand new speech/music detector. It is based on a relatively new type of recurrent neuron: the Gated Recurrent Unit (GRU). Unlike simple feedforward units, the GRU has a memory. It not only learns how to use its input and memory at each time, but it can also learn how and when to update its memory. That makes it able to remember information for a long period of time but also discard some of that information when appropriate.

                                                                            The quality improvement between Opus 1.0 and 1.3 for a 9kbps bitrate speech sample is impressive.

                                                                            1. 3

                                                                              I had these exact same headphones as a warranty replacement for the previous pair that failed after 2 years. They refused to replace them this time. A $14 pair of AUkey headphones is nearly as good to me, although I’ll admit years of being an audio engineer have probably affected my hearing somewhat; I still appear to have quite good ears according to hearing tests. I have some nice studio headphones for when I really need to hear clearly, and it turns out my use case for earbuds overrides the need for superior fidelity. Are the $14 buds as nice? Of course not, but sometimes good enough is good enough.

                                                                              Jaybird will never get another cent from me, and their parent company Logitech is now worthy of my scrutiny.

                                                                              1. 2

                                                                                $14 bluetooth buds?

                                                                                1. 5

                                                                                  There is no correlation between headphone frequency response and retail price; the consumer and especially the audiophile HiFi market is full of marketing voodoo. The main difference between cheap and expensive headphones is the material the case is made of, but the built-in drivers are usually pretty cheap, and the construction of good headphones is no rocket science, even though the audio industry wants you to think it is. I also own a cheap pair of Bluetooth in-ear headphones for commuting that cost me 20€; they are pretty reliable and sound pretty okay. I once forgot them in a pocket of my jeans and they even survived the washing machine. Another anecdote regarding the relation between price and audio reproduction quality: I was looking for headphones for my home recording studio this year and tested different models ranging from the Samson SR850 for 27€ to the Beyerdynamic DT-880 for around 200€. In the end I went with the Samsons because they sound fantastic and I can live with a less-than-perfect case finish; heck, you can even get a pair of them for 39€.

                                                                                  1. 1

                                                                                    I don’t disagree about marketing voodoo in the HiFi space; there is astonishingly good cheap gear, the KSC75s possibly being the most striking example. That said, adding a mic and BT to them will cost you about $14 ($11 BT chip, $3 mic, straight from China) by itself, with a so-so BT chip which doesn’t license the high quality audio stream stuff and will randomly fail to pair.

                                                                                    The SR850s are exceptional: like the KSC75s, the Zero Audio Tenores and a handful of other great drivers, so-so build quality but no core build defects.

                                                                                    EDIT: Update from the OP, it was $27, which makes A LOT more sense to me.

                                                                                    1. 1

                                                                                      I’ve been exactly there and I ended up with BD DT-770’s, which sound great and are comfortable to wear for extended periods without clamping my head or causing inner-ear pain. Were I tracking a kit I’d probably use the Sennheiser HD-280 Pros due to the superior bleed isolation, but man, those things kill my ears after a couple hours. What that tells me is that inside, the tech provides modest differences and it’s all about comfort and durability.

                                                                                      At the end of the day, much of the music most people consume is rammed through lossy compression and mixed to maximize volume, then rammed through a cheap DAC - so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

                                                                                      1. 4

                                                                                        At the end of the day, much of the music most people consume is rammed through lossy compression

                                                                                        Most modern static compression is well beyond good enough even for high end gear. Note: static compression, not on the fly compression like BT does.

                                                                                        and mixed to maximize volume

                                                                                        The loudness wars left a lot of damaged music, but they are all but over at this point. Everyone from indie artists to professional mastering engineers has stopped it as a matter of course, and it is now the exception. Mick Guzauski, Bob Ludwig and Ian Shepherd have really pushed against it since the mid-2000s, changing the industry. iTunes Radio really cemented it by automatically turning down overly loud music, meaning that if the copy they get from you is part of the loudness wars, it is going to sound objectively horrible.

                                                                                        then rammed through a cheap DAC -

                                                                                        $3 DACs are all but perfect at this point; a lot of the difference between a $3 and a $30 DAC is the bit rates supported for professional mastering and the shielding. Finding an awful DAC these days takes real effort.

                                                                                        so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

                                                                                        Really depends on the headphones; some very much show the flaws, others are just expensive and fun. Also, there is something sort of special about finding new depth in recordings through high-end gear: the tapping of a foot, the side-mic exhale, etc. I would say the TH900s are a nice pure-fun high-end headphone: 25 ohm, v-shaped, pretty to look at.

                                                                                        1. 1

                                                                                          Thanks for the clarity; I agree that there is the possibility of renewing appreciation for old favorites by changing the listening environment. Having donned the primo Grados at a high-end mastering house, I am a believer.

                                                                                        2. 3

                                                                                          …mixed to maximize volume…

                                                                                          This is more a matter of taste than a sound quality problem, and yes, the loudness war caused popular music to be less dynamic because loud = good.

                                                                                          …rammed through a cheap DAC…

                                                                                          I will not deny that there are differences between a good DAC used in professional audio interfaces and those used in cheap laptops, but even the latter are good now (except the one in the Raspberry Pi), and the distortion caused by a cheap DAC is orders of magnitude lower than that of any loudspeaker. The mechanical part of reproduction is still the weak point, by far.

                                                                                          Monty Montgomery from xiph.org (the ogg vorbis guys) made an enlightening video about D/A and A/D conversion which I can highly recommend to anyone.

                                                                                          1. 2

                                                                                            All the HD-280s I have ever seen or owned died the same sad death – headband death. Either the metal strains against the plastic and breaks it, or the strain goes to the metal connector and it snaps; either way, hard to repair.

                                                                                            Also, they make a great set of earmuffs.

                                                                                            1. 1

                                                                                              At the end of the day, much of the music most people consume is rammed through lossy compression and mixed to maximize volume, then rammed through a cheap DAC - so listening through a $1000 pair of headphones provides little benefit other than to point out the flaws in the recording all along the process.

                                                                                              Dunno about that. I really enjoy my AKG K812 even if plugged straight into a laptop (most of the time) or phone (sometimes). I also enjoy my Sennheiser HD 800 even if the amp that feeds them gets analog input straight from the motherboard. Yes, it can get a little noisy when the GPU is busy. I enjoy them both, generally more than my Sennheiser HD 650, even if I’m streaming lossy music from YouTube. Or music I compressed myself at a bitrate I know is transparent (or damn well close enough) from the ABXing I’ve done in the past. If anything, I feel like the AKG K701 (cheapest cans I have right now) are more revealing in terms of recording flaws.

                                                                                              I really don’t think the DAC and compression are a big deal, even if I do also have a collection of lossy music and an external headphone amp.

                                                                                              1. 1

                                                                                                I think it comes down to design intent for the cans in question - e.g. listening vs. mixing, and I do agree that technology has vastly improved since I last posted a diatribe about this. I think there’s also a matter of ear training here that affects me, as it’s not just headphone use where I hear every razzafrazzin sound in the room. I spent years developing critical listening skills and I can’t just turn them off.

                                                                                            2. 1

                                                                                              While I won’t argue the point of sound quality right now (because it’s all over the map), I certainly will argue about build quality.

                                                                                              I’d be willing to bet that a much larger percentage of gear priced at $200 and above will be around in 15 years, vs. lower priced gear.

                                                                                              The higher-end gear might not always be technically and sonically superior but it is usually built to a higher standard of quality.

                                                                                            3. 1

                                                                                              Oops, I fibbed, they were $26.99: https://www.amazon.com/gp/product/B06ZZSQQTD/

                                                                                              I’m one of those “excessive research” headphone chaps and I am generally highly critical of any headphones but these right here, they are a winner for me.

                                                                                              I should note my use cases are: using outdoor power equipment where bigger hearing protection doesn’t fit, using power tools in the shop, and blocking out noise on planes. The one place they fail, which is entirely due to the size, is for sleeping. Plus, more often than not I’m listening to podcasts, audio books, or lo-fi rock & roll where high fidelity or critical listening isn’t a factor.

                                                                                          1. 4

                                                                                            Help team members when they’re stuck.

                                                                                            Plan your projects’ work.

                                                                                            Create new projects.

                                                                                            For me it’s unclear how this is specific to a senior engineer, I would expect this from anyone in my team.

                                                                                            One thing I left out is “make estimates”. Making estimates is something I’m still not very good at…

                                                                                            Being a senior developer is all about experience, and making estimates benefits from exactly that: work experience. I have to admit that estimating how long a project will take is hard for me as well, but it should definitely be on the list of things to expect from a senior engineer.

                                                                                            Make sure work is allocated in a fair way.

                                                                                            Make sure folks are working well together.

                                                                                            Those responsibilities fall on everyone in the team. If I see someone struggling with the amount of work they have, then I try to help them. The same goes for working together in a team: if there are conflicts, one should not wait for the manager to solve them (which is hopefully not what was assumed in the article), because that is how a kindergarten works.

                                                                                            In general I like jvns’ articles, but this time I can’t take much from it. Those are only my 2¢.

                                                                                            edit: formatting

                                                                                            1. 8

                                                                                              For me it’s unclear how this is specific to a senior engineer

                                                                                              I don’t want Jr engineers planning projects nor creating new ones. They can try to help other team members, but who knows how successful that will be, might just be tossing hours down a hole.

                                                                                              If I see someone struggling with the amount of work they have then I try to help them.

                                                                                              Again, I am not sure I want Jr engineers even attempting this.

                                                                                              if there are conflicts in the team then one should not wait on the manager to solve them

                                                                                              Please consider waiting for your manager or team lead – you probably don’t have all the information. Many attempts to “fix” stuff on a team of engineers makes it worse. Waiting for people with more information than you to give some external feedback isn’t the mark of kindergarten – it is the mark of maturity.

                                                                                              1. 3

                                                                                                Agreed.

                                                                                                A further observation, I recently got promoted to manager. A coworker I worked with at a previous employer got promoted to lead engineer at the same time. Eventually, we both realized that many of the responsibilities we had both been considering “Senior Engineer” for years were actually what most folks would call lead engineer.

                                                                                                The gradient of skill levels can seem compressed when you spend a big slice of your career in a very high-pressure/high-performing arena. (Given the lack of diverse skill-sets, you might even call them dysfunctional.) Consequently, our ideas of what responsibilities are appropriate gets skewed.

                                                                                                1. 1

                                                                                                  I don’t want Jr engineers…

                                                                                                  Again, I am not sure I want Jr engineers…

                                                                                                  Ok, I said everyone in my team, but this doesn’t mean that there are only juniors and seniors, I was referring mostly to those developers with some experience who are in between both levels.

                                                                                                  I don’t want Jr engineers planning projects nor creating new ones.

                                                                                                  For me this is okay for internal projects of smaller scale and they should be involved in the process for larger ones.

                                                                                                  1. 1

                                                                                                    Ok, I said everyone in my team, but this doesn’t mean that there are only juniors and seniors

                                                                                                    Mentally replace “Jr” with “non-senior” if that makes it more clear to you.

                                                                                                    For me this is okay for internal projects of smaller scale and they should be involved in the process for larger ones.

                                                                                                    Involved in the process for sure, that is how they learn. Creating or planning, no. Smaller projects and internal projects both tend to grow and most of the cost of a project is maintenance, should never be entered into lightly.

                                                                                              1. 37

                                                                                                What about dependencies? If you use python or ruby you’re going to have to install them on the server.

                                                                                                How much of the appeal of containerization can be boiled directly down to Python/Ruby being catastrophically bad at handling deploying an application and all its dependencies together?

                                                                                                1. 6

                                                                                                  I feel like this is an underrated point: compiling something down to a static binary and just plopping it on a server seems pretty straightforward. The arguments about upgrades and security and whatnot fail for source-based packages anyway (looking at you, npm).

                                                                                                  1. 10

                                                                                                      It doesn’t really need to be a static binary; if you have a self-contained tarball, the extra step of tar xzf really isn’t so bad. It just needs to not be the mess of bundler/virtualenv/whatever.
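                                                                                                      As a sketch of that build-and-unpack round trip using only Python’s stdlib (directory and file names are made up; a real deploy would typically just shell out to tar/scp):

```python
import os
import tarfile

def pack(app_dir, out_path):
    """Bundle the whole app directory (code plus vendored deps) into one archive."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(app_dir, arcname=os.path.basename(app_dir))

def unpack(archive, dest):
    """On the server: a single extraction step, no package manager involved."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
```

The appeal is that the server only needs to understand one operation, extraction, and the artifact you tested locally is byte-for-byte what runs in production.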

                                                                                                    1. 1

                                                                                                      mess of bundler/virtualenv/whatever

                                                                                                      virtualenv though is all about producing a self-contained directory that you can make a tarball of??

                                                                                                      1. 4

                                                                                                        Kind of. It has to be untarred to a directory with precisely the same name or it won’t work. And hilariously enough, the --relocatable flag just plain doesn’t work.

                                                                                                        1. 2

                                                                                                          The thing that trips me up is that it requires a shell to work. I end up fighting with systemd to “activate” the VirtualEnv because I can’t make source bin/activate work inside a bash -c invocation, or I can’t figure out if it’s in the right working directory, or something seemingly mundane like that.

                                                                                                          And god forbid I should ever forget to activate it and Pip spews stuff all over my system. Then I have no idea what I can clean up and what’s depended on by something else/managed by dpkg/etc.

                                                                                                          1. 4

                                                                                                            No, you don’t need to activate the environment, this is a misconception I also had before. Instead, you can simply call venv/bin/python script.py or venv/bin/pip install foo which is what I’m doing now.
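                                                                                                            To make the “no activation needed” point concrete, here is a minimal sketch using the stdlib venv module and calling the environment’s own interpreter directly (the env directory name is arbitrary; paths assume POSIX):

```python
import subprocess
import venv

# Create the environment. with_pip=False keeps this example fast and
# offline; a real setup would use with_pip=True and env/bin/pip.
venv.create("env", with_pip=False)

# Call the environment's interpreter directly -- no `source bin/activate`.
# Anything installed via env/bin/pip would be picked up the same way.
result = subprocess.run(
    ["env/bin/python", "-c", "import sys; print(sys.prefix)"],
    capture_output=True,
    text=True,
)
```

Activation is just a convenience that prepends env/bin to PATH; the interpreter inside the env already knows its own prefix, which is why invoking it by path works from systemd units too.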

                                                                                                          2. 1

                                                                                                            This is only half of the story, because you still need a recent/compatible Python interpreter on the target server.

                                                                                                        2. 8

                                                                                                          This is 90% of what I like about working with golang.

                                                                                                          1. 1

                                                                                                            Sorry, I’m a little lost on what you’re saying about source-based packages. Can you expand?

                                                                                                            1. 2

                                                                                                              The arguments I’ve seen against static linking are things like you’ll get security updates etc through shared dynamic libs, or that the size will be gigantic because you’re including all your dependencies in the binary, but with node_packages or bundler etc you’ll end up with the exact same thing anyway.

                                                                                                              Not digging on that mode, just that it has the same downsides of static linking, without the ease of deployment upsides.

                                                                                                              EDIT: full disclosure I’m a devops newb, and would much prefer software never left my development machine :D

                                                                                                              1. 3

                                                                                                                and would much prefer software never left my development machine

                                                                                                                Oh god that would be great.

                                                                                                          2. 2

                                                                                                            It was most of the reason we started using containers at work a couple of years back.

                                                                                                            1. 2

                                                                                                              Working with large C++ services (for example in image processing with OpenCV/FFmpeg/…) is also a pain in the ass for dynamic library dependencies. Then you start to fight with package versions, and each time you want to upgrade anything you’re in a constant struggle.

                                                                                                              1. 1

                                                                                                                FFmpeg

                                                                                                                And if you’re unlucky and your distro is affected by the libav fiasco, good luck.

                                                                                                              2. 2

                                                                                                                Yeah, dependency locking hasn’t been a (popular) thing in the Python world until pipenv, but honestly I never had any problems with… any language package manager.

                                                                                                                I guess some of the appeal can be boiled down to depending on system-level libraries like imagemagick and whatnot.

                                                                                                                1. 3

                                                                                                                  Dependency locking really isn’t a sufficient solution. Firstly, you almost certainly don’t want your production machines all going out and grabbing their dependencies from the internet. And second, as soon as you use e.g. a python module with a C extension you need to pull in all sorts of development tooling that can’t even be expressed in the pipfile or whatever it is.

                                                                                                                2. 1

                                                                                                                  you can add node.js to that list

                                                                                                                  1. 1

                                                                                                                    A Node.js app, including node_modules, can be tarred up locally, transferred to a server, and untarred, and it will generally work fine no matter where you put it (assuming the Node version on the server is close enough to what you’re using locally). Node/npm does what VirtualEnv does, but by default. (Note if you have native modules you’ll need to npm rebuild but that’s pretty easy too… usually.)

                                                                                                                    I will freely admit that npm has other problems, but I think this aspect is actually a strength. Personally I just npm install -g my deployments which is also pretty nice, everything is self-contained except for a symlink in /usr/bin. I can certainly understand not wanting to do that in a more formal production environment but for just my personal server it usually works great.

                                                                                                                  2. 1

                                                                                                                    Absolutely, but it’s not just Ruby/Python. Custom RPM/DEB packages are ridiculously obtuse and difficult to build and distribute; fpm is the only tool that makes it possible. Dockerfiles and images are a breeze by comparison.

                                                                                                                  1. 2

                                                                                                                    Flatpak is a definite no for me as long as they think it’s acceptable to dump things into $HOME. It’s 2018. No new application should do this.

                                                                                                                    1. 3

                                                                                                                      Can you elaborate on this? What do they dump in $HOME and where exactly? You can’t change it?

                                                                                                                      1. 0

                                                                                                                        Flatpak creates its own .var directory in $HOME.

                                                                                                                        1. 3

                                                                                                                          What’s wrong with that?

                                                                                                                          1. 0

                                                                                                                            It’s my home directory, not the application’s.

                                                                                                                      2. 3

                                                                                                                        I have the same question as @andyc. Do you think of applications that create files, like rc-files, or folders on $HOME directory level or does this even include subfolders of the XDG base directories, e.g. XDG_CONFIG_HOME (~/.config/<application>)?

                                                                                                                        Update:

                                                                                                                        I just installed an application via flatpak and checked which folders were created/modified; it turns out that flatpak does not respect the XDG base directory specification, and instead the application was installed into .var/app/. I assume that this is what you’re referring to?

                                                                                                                        1. 3

                                                                                                                          Yes, I was referring to the .var directory.

                                                                                                                          But according to the Flatpak developers Flatpak adheres to the XDG spec and .var is “nothing to see here”: https://github.com/flatpak/flatpak.github.io/issues/191

                                                                                                                        2. 2

                                                                                                                          While I agree with you that they should have used a directory that adheres to the XDG spec instead of ~/.var - like ~/.local/var - they aren’t dumping configuration or any files besides that directory into $HOME. I would however like to see an explanation as to why it was necessary to use the ~/.var directory. Apparently, after a discussion including XDG devs, they decided to go that route.

                                                                                                                          1. 3

                                                                                                                            It has been a common issue of application developers to believe that their app is special and should be exempt from the rules. I have seen it many times, but the Flatpak devs invented a whole new level of entitlement.

                                                                                                                            1. 2

                                                                                                                              What’s supposed to be the correct way to do this?

                                                                                                                        1. 4

                                                                                                                          Surely I’m not going to be the only one expecting a comparison here with Go’s GC. I’m not really well versed in GC, but this appears to mirror Go’s quite heavily.

                                                                                                                          1. 12

                                                                                                                            It’s compacting and generational, so that’s a pair of very large differences.

                                                                                                                            1. 1

                                                                                                                              My understanding, and I can’t find a link handy, is that the Go team is on a long term path to change their internals to allow for compacting and generational gc. There was something about the Azul guys advising them a year+ ago iirc.

                                                                                                                              Edit: I’m not sure what the current status is (haven’t been following), but see this from 2012; look for Gil Tene’s comments:

                                                                                                                              https://groups.google.com/forum/#!topic/golang-dev/GvA0DaCI2BU

                                                                                                                              1. 4

                                                                                                                                This presentation from this July suggests they’re averse to taking almost any regressions now, even if they’d get good GC throughput out of it. rlh tried freeing garbage at thread (goroutine) exit if the memory wasn’t reachable from another thread at any point, which seemed promising to me but didn’t pan out. aclements did some very clever experiments with fast cryptographic hashing of pointers to allow new tradeoffs, but even rlh seemed doubtful about the prospects of that approach in the long term.

                                                                                                                                Compacting is a yet harder sell because they don’t want a read barrier and objects moving might make life harder for cgo users.

                                                                                                                                Does seem likely we’ll see more work on more reliably meeting folks’ current expectations, like by fixing situations where it’s hard to stop a thread in a tight loop, and we’ll probably see work on reducing garbage through escape analysis, either directly or by doing better at other stuff like inlining. I said more in my long comment, but I suspect Java and Go have gone on sufficiently different paths they might not come back that close together. I could be wrong; things are interesting that way!

                                                                                                                                1. 1

                                                                                                                                  Might be. I’m just going on what I know about the collector’s current state.

                                                                                                                              2. 10

                                                                                                                                Other comments get at it, but the two are very different internally. Java GCs have been generational, meaning they can collect common short-lived garbage without looking at every live pointer in the heap, and compacting, meaning they pack together live data, which helps them achieve quick allocation and locality that can help processor caches work effectively.

                                                                                                                                ZGC is trying to maintain all of that and not pause the app much. Concurrent compacting GCs are hard because you can’t normally atomically update all the pointers to an object at once. To deal with that you need a read barrier or load barrier, something that happens when the app reads a pointer to make sure that it ends up reading the object from the right place. Sometimes (like in Azul C4 I think) this is done with memory-mapping tricks; in ZGC it looks like they do it by checking a few bits in each pointer they read. Anyway, keeping an app running while you move its data out from under it, without slowing it down a lot, is no easier than it sounds. (To the side, generational collectors don’t have to be compacting, but most are. WebKit’s Riptide is an interesting example of the tradeoffs of non-compacting generational.)

                                                                                                                                In Go all collections are full collections (not generational) and no heap compaction happens. So Go’s average GC cycle will do more work than a typical Java collector’s average cycle would in an app that allocates equally heavily and has short-lived garbage. Go is by all accounts good at keeping that work in the background. While not tackling generational, they’ve reduced the GC pauses to more or less synchronization points, under 1ms if all the threads of your app can be paused promptly (and they’re interested in making it possible to pause currently-uncooperative threads).

                                                                                                                                What Go does have going for it throughput-wise is that the language and tooling make it easier to allocate less, similar to what Coda’s comment said. Java is heavy on references to heap-allocated objects, and it uses indirect calls (virtual method calls) all over the place that make cross-function escape analysis hard (though JVMs still manage to do some, because the JIT can watch the app running and notice that an indirect call’s destination is predictable). Go’s defaults are flipped from that, and existing perf-sensitive Go code is already written with the assumption that allocations are kind of expensive. The presentation ngrilly linked to from one of the Go GC people suggests at a minimum the Go team really doesn’t want to accept any regressions for low-garbage code to get generational-type throughput improvements. I suspect the languages and communities have gone down sufficiently divergent paths about memory and GC that they’re not that likely to come together now, but I could be surprised.

                                                                                                                                1. 1

                                                                                                                                  One question that I don’t have a good feeling for is: could Go offer something like what the JVM has, where there are several distinct garbage collectors with different performance characteristics (high throughput vs. low latency)? I know simplicity has been a selling point, but like Coda said, the abundance of options is fine if you have a really solid default.

                                                                                                                                  1. 1

                                                                                                                                    Doubtful they’ll have the user choose; they talk pretty proudly about not offering many knobs.

                                                                                                                                    One thing Rick Hudson noted in the presentation (worth reading if you’re this deep in) is that if Austin’s clever pointer-hashing-at-GC-time trick works for some programs, the runtime could choose between using it or not based on how well it’s working out on the current workload. (Which it couldn’t easily do if, like, changing GCs meant compiling in different barrier code.) He doesn’t exactly suggest that they’re going to do it, just notes they could.

                                                                                                                                  2. 1

                                                                                                                                    This is fantastic! Exactly what I was hoping for!

                                                                                                                                  3. 4

                                                                                                                                    There are decades of research and engineering efforts that put Go’s GC and Hotspot apart.

                                                                                                                                    Go’s GC is a nice introductory project, Hotspot is the real deal.

                                                                                                                                    1. 4

                                                                                                                                      Go’s GC designers are not newbies either and have decades of experience: https://blog.golang.org/ismmkeynote

                                                                                                                                      1. 2

                                                                                                                                        Google seems to be the nursing home of many people who had one lucky idea 20 years ago and are content with riding on their fame until retirement, so “famous person X works on it” doesn’t mean much when associated with Google.

                                                                                                                                        The Train GC was quite interesting at its time, but the “invention” of stack maps is just like the “invention” of UTF-8 … if it hadn’t been “invented” by random person A, it would have been invented by random person B a few weeks/months later.

                                                                                                                                        Taking everything together, I’m rather unconvinced that Go’s GC will even remotely approach G1, ZGC’s, Shenandoah’s level of sophistication any time soon.

                                                                                                                                      2. 3

                                                                                                                                        For me it’s kind of amusing that huge amounts of research and development went into the HotSpot GC, yet there often seems to be a need to hand-tune its parameters rather than rely on sensible defaults. In Go I don’t have to jump through those hoops, and I’m not advised to, but I still get very good performance characteristics, at least comparable to (in my humble opinion even better than) those of a lot of Java applications.

                                                                                                                                        1. 13

                                                                                                                                          On the contrary, most Java applications don’t need to be tuned and the default GC ergonomics are just fine. For the G1 collector (introduced in 2009 a few months before Go and made the default a year ago), setting the JVM’s heap size is enough for pretty much all workloads except for those which have always been challenging for garbage collected languages—large, dense reference graphs.
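                                                                                                                                          For what it’s worth, “setting the heap size is enough” typically looks like the lines below. The flags are standard HotSpot options; the ZGC line assumes a JDK 11+ build, where ZGC was still gated behind the experimental-options flag.

```shell
# Typical G1 setup: just cap the heap. G1 is the default collector
# since JDK 9, so no GC-selection flag is needed.
java -Xmx4g -jar app.jar

# For comparison, explicitly opting into ZGC on JDK 11, where it
# was still experimental:
java -Xmx4g -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -jar app.jar
```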

                                                                                                                                          The advantages Go has for those workloads are non-scalar value types and excellent tooling for optimizing memory allocation, not a magic garbage collector.

                                                                                                                                          (Also, to clarify — HotSpot is generally used to refer to Oracle’s JIT VM, not its garbage collection architecture.)

                                                                                                                                          1. 1

                                                                                                                                            Thank you for the clarification.

                                                                                                                                      3. 2

                                                                                                                                        I had the same impression while reading the article, although I also don’t know that much about GC.

                                                                                                                                      1. 7

                                                                                                                                        I just bought my first desktop synthesizer, a Behringer Neutron, plus a bunch of patch cables, and updated Bitwig Studio to the latest beta. This will be a Friday evening full of slowly evolving soundscapes :)