1. 6

    Holy misleading axes. The post-warmup improvements look impressive and significant until you realize that the range is chosen to make it look so. The LuaJIT improvement is about 7 milliseconds, or around 1.2%. The other improvements are on a similar scale.

    I don’t think these orders-of-magnitudes even come close to supporting the thesis of the article.

    1. 4

      I don’t think this is entirely fair - yes, the differences are not dramatic, but as the article says “We fairly frequently see performance get 5% or more worse over time in a single process execution. 5% might not sound like much, but it’s a huge figure when you consider that many VM optimisations aim to speed things up by 1% at most. It means that many optimisations that VM developers have slaved away on may have been incorrectly judged to speed things up or slow things down, because the optimisation is well within the variance that VMs exhibit.”

      1. 5

        I might be a bit more generous if the article called out the actual differences, rather than just pointing at misleading graphs – seriously, with an honest y axis it’d be hard to even notice the change.

        Regardless, while a 5% improvement is not trivial, it is trivial in the context of the article, which is people complaining about slow VMs. That’s in the noise as far as general programming language performance goes.

    1. 4

      Isn’t this “just” Rust’s static(-by-default) dispatching over Java’s dynamic(-only) dispatching? I.e. the difference between non-“virtual” and “virtual” methods in C++?
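      For illustration, the distinction might be sketched like this in Rust (Shape and Square are made-up names, not from the article):

```rust
// Static dispatch: area_static is monomorphized for each concrete type,
// so the call target is known at compile time and can be inlined,
// like a non-virtual method in C++.
trait Shape {
    fn area(&self) -> f64;
}

struct Square {
    side: f64,
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

fn area_static<S: Shape>(shape: &S) -> f64 {
    shape.area()
}

// Dynamic dispatch: a single compiled version, and the call goes through
// a vtable at runtime, like a C++ `virtual` method or a Java instance
// method.
fn area_dyn(shape: &dyn Shape) -> f64 {
    shape.area()
}
```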

      1. 6

        Like with voting, this is a scenario where adding technology, especially Internet-enabled technology, is just a bad idea. The less tech (and the fewer potential attacks), the better. The best ways to do espionage were those used in the Cold War: people, drops, and ways of hiding stuff in other stuff. If distance is a problem, then burst radio was the best way to do it. There are still spies being caught in the U.S. using radio. It’s probably a safe route for Chinese spies if the NSA and its wireless partners still haven’t clamped down on it domestically.

        Additionally, they could just put the files, encrypted, in online storage from a random hot spot. Then send a coded version of the link via shortwave, a hidden message in mail, or a drop.

        1. 5

          I couldn’t help but compare this to the Russian(?) operatives who were communicating via coded comments on a particular Britney Spears instagram post.

          1. 4

            This reminded me of the number stations, broadcast on shortwave. It’s reasonable for any civilian to have a radio, and the broadcasts can be encoded with any book freely available from a library.

            When it comes to keeping hidden, low tech is best tech.

            1. 4

              the broadcasts can be encoded with any book freely available from a library

              Running key ciphers are bad tradecraft, use one time pads ;)
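              For the unfamiliar, the core of a one-time pad is just XOR against a pad that is truly random, at least as long as the message, and never reused – a toy sketch (not production crypto):

```rust
// Toy one-time pad: XOR each message byte with a pad byte. Security
// rests entirely on the pad being truly random, secret, at least as
// long as the message, and used exactly once. A running key (e.g. book
// text) fails the "truly random" requirement, which is what makes it
// breakable.
fn xor_pad(data: &[u8], pad: &[u8]) -> Vec<u8> {
    assert!(pad.len() >= data.len(), "pad must cover the whole message");
    data.iter().zip(pad.iter()).map(|(b, k)| b ^ k).collect()
}
```

              Applying the same pad twice gets the plaintext back, which is also why pad reuse is fatal: XORing two ciphertexts that share a pad cancels the pad out entirely.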

            2. 2

              Internet monitoring has gotten terrifyingly powerful - although it’s worth noting that the article doesn’t say that the Chinese found the communication channel, only that they escalated their access a lot once they’d found the channel in the first place - but radio monitoring has also advanced, with cheap and powerful software-/FPGA-defined radio and very powerful post-processing. How sure are you that radio is a good option?

              1. 2

                @c12 has the right idea. There’s both burst transmission and number stations being used by spies in the US. It goes back to the Cold War at least. Watching the prosecutions, we rarely see anyone get caught with that method despite the NSA operating the largest array of SIGINT collection in existence. That means either they’re letting spies they know about continue to operate (e.g. poisoned intel) or they can’t find them.

                I’m thinking it’s the latter. If it’s analog radio, they also can’t remotely hack it like they might try with a cellphone or computer.

            1. 3

              Rust itself runs tests in CI on every pull request that are required to pass. For example, https://travis-ci.org/rust-lang/rust/builds/410084149

              I can see why in a project as young as Rust maintaining different distributions’ forked build and test scripts for the compiler could be prioritized lower than e.g. language-level improvements.

              1. 6

                Yes, Rust CI runs tests on x86_64. Hence lots of breakages on other Debian architectures.

                I agree with you on the priority, but I am of the opinion that you can’t have it both ways. Either Rust should address test failures on other architectures, or Rust should stop claiming C level portability. Rust portability is “almost” there, but not there yet because other architectures are not maintained well.

                1. 12

                  Can you please point out where Rust “claims C level portability”?

                  1. -3

                    This is one of those things where it doesn’t matter what the “official” stance but what the popular interpretation of that stance is. Certainly a number of people are writing Rust replacements for old C bastions, like grep or the coreutils. This along with the RIIR meme spreads the idea that Rust is a viable replacement for C in all situations, including portability.

                    If the RESF wants to be clearer about how Rust isn’t ready to replace C yet, they need to be clearer that it’s not ready instead of being silent on the point about portability claims and saying they never said that.

                    1. 7

                      This is one of those things where it doesn’t matter what the “official” stance but what the popular interpretation of that stance is. Certainly a number of people are writing Rust replacements for old C bastions, like grep or the coreutils. This along with the RIIR meme spreads the idea that Rust is a viable replacement for C in all situations, including portability.

                      If the RESF wants to be clearer about how Rust isn’t ready to replace C yet, they need to be clearer that it’s not ready instead of being silent on the point about portability claims and saying they never said that.

                      I realize you’re probably just going to go bitch about me on IRC again, but your comment smells like a load of bullshit to me. Rust’s supported platform list looks like a pretty good indicator to me of what the platform support looks like. In fact, it looks like the exact opposite of “silence.”

                      saying they never said that.

                      I actually didn’t say that. I asked where we claimed C level portability. If we were doing such a thing, I’d want to know about it so we can correct it. Which is exactly the thing you’re blabbering on about. But no. Instead, I get RESF thrown in my face.

                      Lose, lose. Thanks for playing.

                      1. -1

                        Aha, I also see this:

                        We do not demand that Rust run on “every possible platform”. It must eventually work without unnecessary compromises on widely-used hardware and software platforms.

                        So I guess the current situation with failing tests is entirely intentional but not well-known. Well, it should be better known.

                  2. 1

                    I agree that Rust needs a better multi-architecture story - as someone who does embedded development, I’ll play with Rust but I’d be very wary of using it “for real” - but lack of serious support for non-x86 [EDIT: non-Windows/-Linux/-Mac] is pretty well-documented.

                    [EDITed in response to sanxiyn’s clarification, thanks!]

                    1. 1

                      The Rust Platform Support page is pretty clear: x86 is Tier-1, anything else is not.

                1. 1

                  Note: this links to the actual paper. Some information about TLBleed had already appeared online, but now we have everything.

                  1. 8

                    It’s like SSH, but more secure, and with cool modern features.

                    And less portable and will take forever to compile 😕

                    1. 5

                      Both arguments will probably be less and less valid with time passing though…

                      1. 4

                        How often do you compile vs. use?

                        1. 3

                          As someone involved in the packaging team on FreeBSD: I’m compiling all the time, and we have lots of users that prefer to compile ports instead of use packages for various reasons as well.

                          1. 5

                            I meant, after you compile, how often do you then use the resulting compiled artifact? I submit that the ratio of time spent compiling against time spent using approaches zero for most anyone, regardless of how long it takes to compile the thing being used.

                            1. 1

                              That depends on various factors. This is an OS with rolling-release packages. If I compile my own packages and update regularly, I will be re-compiling Oxy every time a direct dependency of Oxy gets updated in the tree.

                              1. 4

                                I’m familiar with FreeBSD ports :)

                                It sounds like all you’re saying is, “All Rust programs take an unacceptably long time to compile,” which, fine, but you can see how that sounds when it’s laid out plainly.

                                1. 5

                                  To be fair to @feld, compile times continue to be a number one request from users, and something we’re constantly working at improving.

                                  1. 4

                                    It’s appreciated. My #2 complaint as someone involved in packaging echoes the problems with the Go ecosystem: the way dependencies are managed is not great. Crates are only a marginal improvement over Go’s “you need a thousand checkouts from github of these exact hashes” issue we encounter.

                                     We want a stable ecosystem where we can package up the dependencies and lots of software can use the same dependencies with stable SEMVER release engineering. Unfortunately that’s just not the reality right now, so each piece of software we package comes with a huge laundry list of distfiles/tarballs that need to be downloaded just to compile. As a consequence, it also isn’t possible for someone to install all of a program’s dependencies from packages so they can do their own local development.

                                    Note: we can’t just cheat and use git as a build dependency (or whatever other tooling that wallpapers over git). Our entire package building process has to happen in a cleanroom environment without any network access. This is intentionally done for security and reproducibility.

                                    edit: here’s a particularly egregious example in Go. Look at how many dependencies we have to download that cannot be shared with other software. This all has to be audited and tracked by hand as well, which makes even minor updates of the software a daunting task.

                                    https://svnweb.freebsd.org/ports/head/security/vuls/distinfo?revision=455595&view=markup

                                    1. 3

                                      That use-case should be well supported; it’s what Firefox and Linux distros do. They handle it in different ways; Firefox uses vendoring, while Debian/Fedora convert cargo packages to .deb/.rpm and use them like any other dependency.

                                      Reproducibility has been a goal from day 1; that’s why lockfiles exist. Build scripts are the tricky bit, but most are good about it. I don’t know of any popular package that’s not well behaved in this regard.

                                      1. 1

                                        I’m fairly certain feld wants the OS packager to manage the dependencies, not just a giant multi-project tarball.

                                        1. 1

                                          Sure; that’s what I said Linux distros do.

                                      2. 2

                                        Application authors should just publish release tarballs with vendored dependencies.

                                        Check out this port: https://bugs.freebsd.org/bugzilla/attachment.cgi?id=194079&action=diff It looks like any normal port, just with BUILD_DEPENDS=cargo:lang/rust. One single distfile. That contains all the Rust stuff.

                        1. 0

                          I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.

                          1. 11

                            On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (any?) memes, and shows each step of optimization alongside data. More content like this, please!

                            More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.

                            1. 4

                              Right at the beginning of the article, it says:

                              This data is then used to power our real-time calculations. Currently this import process has to take place outside of business hours because of the impact it has on memory usage.

                              So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.

                              Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.

                              1. 3

                                They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it

                                That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.

                                Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.

                                1. 1

                                  You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.

                                  Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.
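                                  Interning in that sense might look something like this sketch (Interner is a made-up name; real implementations vary):

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Toy string interner: each distinct substring is heap-allocated once,
// and repeated occurrences share that single allocation via Rc instead
// of every record carrying its own copy.
struct Interner {
    map: HashMap<String, Rc<str>>,
}

impl Interner {
    fn new() -> Self {
        Interner { map: HashMap::new() }
    }

    fn intern(&mut self, s: &str) -> Rc<str> {
        // Return the shared copy if we have already seen this string.
        if let Some(existing) = self.map.get(s) {
            return Rc::clone(existing);
        }
        let interned: Rc<str> = Rc::from(s);
        self.map.insert(s.to_string(), Rc::clone(&interned));
        interned
    }
}
```

                                  A column value that repeats across millions of rows then costs one allocation instead of millions, which is exactly the kind of thing that turns hours into minutes in bulk imports.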

                                2. 1

                                  From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, that was around 15 megs. I’m not sure why the memory usage requires running outside business hours.

                                  EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.

                                3. 2

                                  The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is somewhat expensive (usually better optimized for short-lived allocations than malloc, but still), and re-using memory makes caches happier.

                                  Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…

                                  1. 1

                                    Maybe their machines don’t have 8gb of free memory lying around.

                                    1. 2

                                      As shown in the table, they don’t use anywhere close to 8gb of memory at a time. This seems like a case that .NET is already very good at, even at a baseline level.

                                  1. 3

                                    Not what you’re asking, but modern systems seem to be using remote block storage plus SQL servers (etc.) for shared data. Are you sure you want NFS?

                                    Read Dan Luu (like, everything, but particularly his blog post on disaggregated storage.)

                                    1. 1

                                      Not what you’re asking, but modern systems seem to be using remote block storage plus SQL servers (etc.) for shared data.

                                      Jehanne was born as a response to the whole mainstream “architecture”: its official goal is to replace everything, from dynamic linking up to WebAssembly. I see most “modern systems” as a tower of patches, each addressing the problems introduced by the previous ones, even though the real hardware issues originally addressed at the base have been gone for decades.

                                      Are you sure you want NFS?

                                      Actually I want a simplified and enhanced 9P2000. But yes, I think the file abstraction (once properly defined) is all we need to subsume all we have today and start building better tools.

                                      Read Dan Luu (like, everything, but particularly his blog post on disaggregated storage.)

                                      Wow! This is a great blog!

                                      But I can’t find anything about “disaggregated storage”, any direct link?

                                      1. 3

                                        Sorry for the slow response. I like files too; not sure it’s the best use of one’s time to try to boil that ocean, but it should at least be educational.

                                        I meant to point you specifically at https://danluu.com/infinite-disk/, but I was on my phone on the train at the time.

                                        You’ll likely also be interested in https://danluu.com/file-consistency/ and https://danluu.com/filesystem-errors/, although that’s not so much a fundamental issue as “actually handling errors might be a good idea”.

                                        1. 2

                                          Thanks, great reads.

                                          I like files too; not sure it’s the best use of one’s time to try to boil that ocean

                                          I think so, actually. But the point is that we need to boil that ocean, one way or another…
                                          Jehanne is my attempt, it’s probably wrong (if funny), it will fail… and whatever.

                                          But my goal is to show that it’s a road worth exploring, full of low-hanging fruit left there just because everybody is looking the other way. I want to stimulate critical thinking, I want to spread a taste for simplicity, I want people to realize that we are just 70 years into computing, and that everything can change.

                                    1. 3

                                      What nickpsecurity said. Also, (Open)SSH is an example of an application (applicative?) protocol that natively includes encryption. There are also some applications that wrap individual connections - e.g. stunnel (OpenSSL) or Colin Percival’s spiped (custom). Also, consider certain Kerberized applications.

                                      But overall, you’d need a reason to not use SSL/TLS; I can think of a few reasons not to, but defaulting to “use what everyone uses” is generally a good idea.

                                      1. 1

                                        I can think of a few reasons not to…

                                        Please, can you elaborate? Which reasons?

                                        Any argument pro or against will improve my informed decision.

                                        1. 2

                                          For larger systems, read http://www.daemonology.net/blog/2011-07-04-spiped-secure-pipe-daemon.html and http://www.daemonology.net/blog/2012-08-30-protecting-sshd-using-spiped.html - basically, TLS is frighteningly complex, with all that entails. Also note that spiped has a different keying model, which can be another reason to choose something that is not TLS. (You can usually twist certificate-based authentication to fit whatever you need, though.)

                                          For small embedded systems, you may simply not have the space to include a TLS library, or may not have the space to include a good TLS library.

                                          That said, don’t roll your own if any of this is news to you.

                                          1. 2

                                            Thanks a lot!

                                      1. 35

                                        I’ll bite.

                                        General industry trends
                                        • (5 years) Ready VC will dry up, advertising revenue will bottom out, and companies will have to tighten their belts, disgorging legions of middlingly-skilled developers onto the market–salaries will plummet.
                                        • (10 years) There will be a loud and messy legal discrimination case ruling in favor of protecting political beliefs and out-of-work activities (probably defending some skinhead). This will accelerate an avalanche of HR drama. People not from the American coasts will continue business as usual.
                                        • (10 years) There will be at least two major unions for software engineers with proper collective bargaining.
                                        • (10 years) Increasingly, we’ll see more “coop” teams. The average size will be about half of what it is today, organized around smaller and more cohesive business ideas. These teams will have equal ownership in the profits of their projects.
                                        Education
                                        • (5 years) All schools will have some form of programming taught. Most will be garbage.
                                        • (10 years) Workforce starts getting hit with students who grew up on touchscreens and walled gardens. They are worse at programming than the folks that came before them. They are also more pleasant to work with, when they’re not looking at their phones.
                                        • (10 years) Some schools will ban social media and communications devices to promote classroom focus.
                                        • (15 years) There will be a serious retrospective analysis in an academic journal pointing out that web development was almost deliberately constructed to make teaching it as a craft as hard as possible.
                                        Networking
                                        • (5 years) Mesh networks still don’t matter. :(
                                        • (10 years) Mesh networks matter, but are a great way to get in trouble with the government.
                                        • (10 years) IPv6 still isn’t rolled out properly.
                                        • (15 years) It is impossible to host your own server on the “public” internet unless you’re a business.
                                        Devops
                                        • (5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.
                                        • (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.
                                        • (15 years) There will still be work available for legacy Rails applications.
                                        Hardware
                                        • (5 years) Alternative battery and PCB techniques allow for more flexible electronics. This initially only shows up in toys, later spreads to fashion. Limited use otherwise.
                                        • (5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are on permanent decline due to pathologies of selling to the cryptobutts folks at expense of building reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.
                                        • (10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

                                        ~

                                        I’ve got other fun ones, but that’s a good start I think.

                                        1. 7

                                          (5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.

                                          As of today, public cloud actually solves several of these issues (and more of them than people running their own hardware manage to).

                                          (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

                                          Containers are actually solving some real problems; several of them were already solved independently, but containers bring a more cohesive solution.

                                          1. 1

                                            Containers are actually solving some real problems; several of them were already solved independently, but containers bring a more cohesive solution.

                                            I am interested, could you elaborate?

                                            1. 1

                                              The two main ones that I often mention in favor of containers (trying to stay concise):

                                              • Isolation: We previously had VMs at the virtualization level, but they’re heavy, potentially slow to boot, and obscure (try launching Xen and managing VMs on your pet server), and jail/chroot are way harder to set up, specific to each of your applications, and do not allow you to restrict resources (to my knowledge).
                                              • Standard interface: Very useful for orchestration, for example; several tools existed to deploy applications with an orchestrator, but they mostly handled bare executables and suffered from the lack of isolation. Statically compiling solved some of these issues, but not every application can be statically compiled.

                                              Containers are a solution to some problems but not the solution to everything. I just think that wishing they weren’t there probably means the interlocutor didn’t understand their benefits.

                                              1. 2

                                                I just think that wishing they weren’t there, probably means the interlocutor didn’t understand the benefits of it.

                                                I’ve been using FreeBSD jails since 2000, and Solaris zones since Solaris 10, circa 2005. I’ve been writing alternative front-ends for containers in Linux. I think I understand containers and their benefits pretty well.

                                                 That doesn’t mean I can’t think docker, and kubernetes, and all the “modern” stuff are a steaming pile, both the idea and especially the implementation.

                                                There is nothing wrong with container technology, containers are great. But there is something fundamentally wrong with the way software is deployed today, using containers.

                                                1. 1

                                                  But there is something fundamentally wrong with the way software is deployed today, using containers.

                                                   Can you elaborate? Do you have resources to share on that? I feel a comment on Lobsters might be a bit light to explain such a statement.

                                                2. 1

                                                   You can actually set resource isolation at various levels; classic Unix quotas, priorities (“nice” in sh) and setrlimit() (“ulimit” in sh); Linux cgroups etc. (which is what Docker uses, IIUC); and/or more-specific solutions such as java -Xmx […].

                                                  1. 2

                                                     So you have to use X different tools and syntaxes to set the CPU/RAM/IO/… limits, and why use cgroups directly when you can have cgroups + other features using containers? I mean, your answer is correct, but in reality it’s deeply annoying to work with these at large scale.

                                                    1. 4

                                                      Eh, I’m a pretty decent old-school sysadmin, and Docker isn’t what I’d consider stable. (Or supported on OpenBSD.) I think this is more of a choose-your-own-pain case.

                                                      1. 3

I really feel this debate is exactly like debates about programming languages. It all depends on your use cases and your experience with each technology!

                                                        1. 2

                                                          I’ll second that. We use Docker for some internal stuff and it’s not very stable in my experience.

                                                          1. 1

If you have <10 applications to run for decades, don’t use Docker. If you have 100+ applications to launch and update regularly, or at scale, you often don’t care if one or two containers die sometimes. You just restart them, and it’s almost expected that you won’t reach 100% stability.

                                                            1. 1

                                                              I’m not sure I buy that.

Our testing infrastructure uses docker containers. I don’t think we’re doing anything unusual, but we still run into problems once or twice a week that require somebody to “sudo killall docker” because it’s completely hung up and unresponsive.

                                                              1. 1

At $job we run thousands of containers every day, and it’s very uncommon to have containers crash because of Docker.

                                                  2. 1

                                                    Easier local development is a big one - developers being able to quickly bring up a full stack of services on their machines. In a world of many services this can be really valuable - you don’t want to be mocking out interfaces if you can avoid it, and better still is calling out to the same code that’s going to be running in production. Another is the fact that the container that’s built by your build system after your tests pass is exactly what runs in production.

                                                3. 7

(5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are in permanent decline due to the pathologies of selling to the cryptobutts folks at the expense of building a reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.

                                                  While I might accept that VR may fail, I don’t think video card companies are reliant on VR succeeding. They have autonomous cars and machine learning to look forward to.

                                                  1. 2

                                                    (10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

                                                    This trend also supports a shift away from scripting languages towards Rust, Go, etc. A focus on hardware extensions (eg deep learning hardware) goes with it.

                                                    1. 1

                                                      (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

                                                      One can dream!

                                                      1. 2

                                                        Would you (or anyone) be able to help me understand this point please? My current job uses containers heavily, and previously I’ve used Solaris Zones and FreeBSD jails. What I see is that developers are able to very closely emulate the deployment environment in development, and don’t have to do “cross platform” tricks just to get a desktop that isn’t running their server OS. I see that particular “skill” as unnecessary unless the software being cross-platform is truly a business goal.

                                                        1. 1

I think Jessie Frazelle answers this concern perfectly here: https://blog.jessfraz.com/post/containers-zones-jails-vms/

P.S.: I have the same question for people who are against containers…

                                                      2. 1

                                                        (5 years) Mesh networks still don’t matter. :( (10 years) Mesh networks matter, but are a great way to get in trouble with the government.

Serious attempts at mesh networks have basically not existed since the 2000s, when everyone discovered it’s way easier to deploy an overlay net on top of Comcast than to make mid-distance hops with RONJA/etc.

It would be so cool to build a hybrid USPS/UPS/FedEx batch + local realtime link powered national-scale network capable of, say, 100 MB per user per day, with a ~3-day max latency. All the attempts I’ve found are either very small scale, or just boil down to sending encrypted packets over Comcast.

                                                        1. 1

Everyone’s definition of mesh is different, but today there are several serious mesh networks, the main ones being Freifunk and Guifi.

                                                        2. 1

                                                          (10 years) There will be at least two major unions for software engineers with proper collective bargaining.

                                                          What leads you to this conclusion? From what I hear, it’s rather the opposite trend, not only in the software industry…

                                                          (5 years) All schools will have some form of programming taught. Most will be garbage.

                                                          …especially if this is taken into account, I’d argue.

                                                          (10 years) Some schools will ban social media and communications devices to promote classroom focus.

                                                          Aren’t these already banned from schools? Or are you talking about general bans?

                                                          1. 1

                                                            I like the container one, I also don’t see the point

                                                            1. 1

                                                              It’s really easy to see what state a container is in because you can read a 200 line text file and see that it’s just alpine linux with X Y Z installed and this config changed. On a VM it’s next to impossible to see what has been changed since it was installed.

                                                              1. 3

It’s really easy to see what state a container is in because you can read a 200 line text file and see that it’s just alpine linux with X Y Z installed and this config changed.

                                                                I just check the puppet manifest

                                                                1. 2

It’s still possible to change other things outside of that config. But since a container has almost no persistent state, anything you change outside of the Dockerfile will be blown away soon.

                                                              2. 1

Containers won’t be needed because unikernels.

                                                              3. 1

                                                                All schools will have some form of programming taught. Most will be garbage.

                                                                and will therefore be highly desirable hires to full stack shops.

                                                                1. 1

I would add the bottom falling out of the PC market, making PCs more expensive, as gamers and enterprise customers (the entire reason it still maintains economies of scale) just don’t buy new hardware anymore.

                                                                  1. 1

I always used to buy PCs, but indeed for the last 5 years I haven’t used a desktop PC.

                                                                    1. 1

If it does happen, it’ll probably affect laptops as well, but desktops especially.

                                                                  2. 1

                                                                    (5 years) All schools will have some form of programming taught. Most will be garbage.

My prediction: whether the programming language is garbage or not, provided some reasonable amount of time is spent on these courses, we will see a general improvement in the logical thinking and deductive reasoning skills of those students.

                                                                    (at least, I hope so)

                                                                  1. 5

                                                                    Product placement and press release. :(

                                                                    1. 4

                                                                      This is significant news in an important sector of our industry. Your reflexive negativity is destructive to this website.

                                                                      1. 8

                                                                        I don’t think the personal attack was necessary here.

                                                                        1. 11

                                                                          This is significant news in an important sector of our industry.

                                                                          Sure, but unfortunately we have somewhat limited space and attention bandwidth here, and if we were to support posting every piece of significant news in important sectors of our industry, we’d find ourselves flooded. There is a great site with news for hackers–this sort of stuff is a great fit for that other site!

                                                                          Your reflexive negativity is destructive to this website.

                                                                          I’m sorry if that’s how this is perceived. I’ve gone to some lengths to do better in terms of negativity. Unfortunately, it’s hard to be positive when pointing out pathological community behaviors that have actively ruined and destroyed other sites.

                                                                          1. 2

                                                                            I think you’re somewhat right– I would have posted a more technical take like this one but didn’t see any posts about it at the time. After the other one was posted, I would have deleted this one if I was able to.

                                                                      1. 1

                                                                        defer() is basically independent of the rest of the library, isn’t it? Might want to extract that.

                                                                        1. 2

                                                                          It’s not entirely independent, as it depends on the "it" macro to create a bunch of variables and run the deferred statements. A stand-alone implementation would require a macro you call at the beginning of a block which will contain deferred expressions, and a macro you call before every return, so it’s not as nice to use. It also relies on GNU extensions, which imo is okay for a test suite, but I’d be careful relying on them in regular code.

                                                                          Anyways, I did the work to extract it into its own small library: https://gist.github.com/mortie/0696f1cf717d192a33b7d842144dcf4a

                                                                          Example usage:

                                                                          #include "defer.h"
                                                                          #include <stdio.h>
                                                                          int main() {
                                                                              defer_init();
                                                                              defer(printf("world\n"));
                                                                              defer(printf("hello "));
                                                                              defer_return(0);
                                                                          }
                                                                          

                                                                          If you want to do anything interesting with it, feel free to.

                                                                        1. 3

I’m still looking for a test harness that doesn’t need me to explicitly call each test/suite in main. My current approach is simple-minded code generation. Is there a way to do this that avoids autogenerating files and whatnot?

                                                                          1. 3

There’s a couple of ways I can imagine that would be possible. Currently, each top-level describe generates a function; I could have a global array of function pointers, and use the __COUNTER__ macro to automatically insert describe‘s functions into that array. However, that would mean that the length of the array would have to be static. It probably wouldn’t be too bad, though, if it was configurable by defining a macro before including the library, with the length defaulting to something like 1024.

                                                                            Another solution would be to not have these top-level describes, and instead have a macro called testsuite or something, which generates a main function. This would mean that, if your test suite is in multiple files, you’d have to be very careful what you have in those files, because they would be included from a function body, but it would be doable.

                                                                            I think the first approach would be the best. You could then also have a runtests() macro which loops from 0 through __COUNTER__ - 2 and runs all the tests.

                                                                            1. 1

                                                                              That’s a great idea. Thanks!

                                                                              1. 2

An update: the first solution will be much harder than I expected, because in C you can’t do things like foo[0] = bar outside of a function. That means you can’t assign the function pointer to the array in the describe macro. If you could append to a macro from within a macro, you could have a macro which describe appends to which, when invoked, just calls all the functions created by describe, but there doesn’t seem to be any way to append to a macro from within a macro (though we can get close; using push_macro and pop_macro in _Pragma, it would be possible to append to a macro, but not from within another macro).

                                                                                It would still be possible to call the functions something deterministic (say test_##__COUNTER__), and then, in the main function, use dlopen on argv[0], and then loop from i=0 to i=__COUNTER__-2 and use dlsym to find the symbol named "_test_$i" and call it… but that’s not something I want to do in Snow, because that sounds a little too crazy :P

                                                                                1. 1

                                                                                  I appreciate the update. Yes, that would be too crazy for my taste as well. (As is your second idea above.)

                                                                                  1. 1

                                                                                    FWIW, you can do this by placing the function pointer in a custom linker section with linker-inserted begin/end symbols; unfortunately, that requires your user to use a custom linker script, which will be annoying for them.

                                                                            1. 14

                                                                              All of my upward moves have been internal, and of the form “well, we agree that I’ve been doing the job pretty successfully; let us make my title match what I’m actually doing”. IME, seniority is as much taken as it is given. (Not sure to what extent my experience is typical.)

                                                                              (E.g. if you want to lead, mentor an intern/junior/…, or arrange to lead a small low-stakes internal project; if you want to architect, shadow an experienced architect, provide designs for your own components, and/or propose important refactorings; etc.)

                                                                              1. 7

                                                                                IME, seniority is as much taken as it is given.

                                                                                Bingo. Show initiative in a polite yet assertive way, deliver results, and talk about those results to the right people.

                                                                                1. 4

                                                                                  seniority is as much taken as it is given

This sounds like good advice. Perhaps it is more applicable to intra-company movements than to moving to a new company. Hiring markets are probably more efficient than intra-company hierarchies; that is, internally, companies could be stifling a lot of value by not helping juniors move into seniority, and this inefficiency can be capitalized on by simply taking the responsibilities of seniority for yourself.

                                                                                  1. 3

                                                                                    IME moving between companies is always where you move up

                                                                                1. 5

                                                                                  Several people here are recommending CMake as an alternative. I’ve only interacted with CMake at a fairly surface level, but found it pretty unwieldy and overcomplicated (failed the “simple things should be simple” test). Does it have merits that I wasn’t seeing?

                                                                                  1. 3

                                                                                    CMake can generate output both for Unix and for Windows systems. That’s one (good) reason lots of C++ libraries use CMake.

                                                                                    1. 2

                                                                                      CMake is pretty nice and has nice documentation. You can also pick stuff up from reading other people’s CMakeLists. For simple projects the CMake file can be pretty compact.
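As an illustration, a complete CMakeLists.txt for a one-file C project can be as small as this (project and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.0)
project(hello C)
add_executable(hello main.c)
```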

                                                                                      1. 3

I actually found the CMake documentation to be quite terrible for new users. The up-to-date documentation factually describes what the different functions do, but has very few examples of how to actually write real-world CMake scripts. There are a few official tutorials that try to do this, but they are made for ancient versions like CMake 2.6. So in order to learn how to use CMake, you are stuck reading through tons of other people’s scripts, trying to deduce some common best practices.

                                                                                        While modern CMake is not terrible, you often have to restrict yourself to some ancient version (2.8.6 I believe is common) in order to support certain versions of CentOS/RHEL/Ubuntu LTS (and there were some big changes in CMake around 2.8.12/3.0).

                                                                                        Also, having string as the only data type has led to some absurd corner cases.

                                                                                      1. 4

While small on the surface, it can’t stand alone — it includes bsd.prog.mk, which has some, ahem, complexity.

                                                                                        (I couldn’t tell if your comment implies BSD makefiles are hairballs or if it implies they’re simple ;))

                                                                                        1. 3

                                                                                          bsd.prog.mk is quite the library, but CMake is much larger; I think it was meant positively.

                                                                                      1. 37

                                                                                        It wasn’t hate speech directed at some group. It was a self-described “hate post” with a one-line knee-jerk brush-off of Electrum. That’s a worthless troll.

                                                                                        I only meant to delete the parent comment and didn’t expect the entire thread to get deleted. I’ll see if I can restore the thread without it, but moderation options are pretty limited.

                                                                                        In hindsight, I see how the moderation log was misleading if you didn’t recognize the comment and will write more useful messages.

                                                                                        1. 6

                                                                                          Yeah, this seems to be a bug, probably because it’s a top-level comment.

                                                                                          1. 4

                                                                                            It’s not a bug but no reason was given. Not sure if I should reverse that or not.

                                                                                            1. 1

                                                                                              I did reverse it.

                                                                                            2. 3

                                                                                              Woo, glad it’s not a new mod policy :D - Thanks for digging in!

                                                                                            3. 6

                                                                                              I’m not sure if you made the right call here, but thanks for your efforts - communities need moderation, and it’s a hard and often thankless job. I’m happy that lobste.rs does have people willing to take that job!

                                                                                            1. 1

                                                                                              This is quite neat. One question from someone who didn’t compile the code and play with it: the “XML diff” algorithm BULD seems almost insensitive to ordering, but the order of text matters a lot (and classic diff - and your merge algorithm - are very linear comparisons.) Does the algorithm “behave” once you start moving blocks?

                                                                                              Thanks for sharing!

                                                                                              1. 2

                                                                                                BULD works on ordered trees—that was one of the reasons it was chosen. And it indeed supports the “move” concept in the edit script. In the lowdown implementation (specifically, in the merging algorithm), moves are made into insert/delete simply for the sake of readability of the output. It’s straightforward to extend the API to have “moved from” and “moved to” bits. Then have a little link in the output. Maybe in later versions…

                                                                                              1. 1

                                                                                                (Minor typo: “rooted at thd node”. Might want to fix that.)

                                                                                                1. 2

                                                                                                  Thanks, noted! (Will push when document is next updated.)

                                                                                                1. 1

                                                                                                  tl;dr: Attaching a decorator @deco to foo’s definition is semantically equivalent to writing foo = deco(foo) after foo‘s definition. Multiple decorators attached to the same definition are applied in the reverse order in which they appear in the program text. The consequences of these two facts are exactly what you would expect if you already know the remainder of Python’s syntax and semantics.
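A small illustration of both facts:

```python
def shout(f):
    # wraps f, upper-casing its result
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs).upper()
    return wrapper

def exclaim(f):
    # wraps f, appending "!" to its result
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs) + "!"
    return wrapper

@shout
@exclaim
def greet(name):
    return "hello " + name

# Equivalent to: greet = shout(exclaim(greet)) -- the decorator
# closest to the def is applied first.
print(greet("world"))  # HELLO WORLD!
```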

                                                                                                  1. 1

                                                                                                    True, but the article also makes the (harder!) case that using decorators in “creative” ways may not actually be a bad idea in all cases. I found it worth reading for that reason.

                                                                                                    1. 1

                                                                                                      The article doesn’t make a very good case. The most “creative” snippets (22 and 23) are also the ugliest ones.

                                                                                                      1. 2

                                                                                                        One good example of a “creative” decorator is in the contextlib package: @contextmanager takes a function and returns a callable object.
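A minimal sketch of that stdlib API: @contextmanager turns a generator function into a factory for context managers, where code before the yield runs on entry and code after it runs on exit.

```python
from contextlib import contextmanager

@contextmanager
def tag(name):
    # runs on entering the with-block
    print(f"<{name}>")
    yield
    # runs on leaving the with-block
    print(f"</{name}>")

with tag("p"):
    print("hello")
# Prints:
# <p>
# hello
# </p>
```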