1. 3

    Not what you’re asking, but modern systems seem to be using remote block storage plus SQL servers (etc.) for shared data. Are you sure you want NFS?

    Read Dan Luu (like, everything, but particularly his blog post on disaggregated storage.)

    1. 1

      Not what you’re asking, but modern systems seem to be using remote block storage plus SQL servers (etc.) for shared data.

      Jehanne was born as a response to the whole mainstream “architecture”: its official goal is to replace everything, from dynamic linking up to WebAssembly. I see most “modern systems” as a tower of patches, each addressing the problems introduced by the previous ones, even though the real hardware issues originally addressed at the base have been gone for decades.

      Are you sure you want NFS?

      Actually I want a simplified and enhanced 9P2000. But yes, I think the file abstraction (once properly defined) is all we need to subsume all we have today and start building better tools.

      Read Dan Luu (like, everything, but particularly his blog post on disaggregated storage.)

      Wow! This is a great blog!

      But I can’t find anything about “disaggregated storage”, any direct link?

      1. 3

        Sorry for the slow response. I like files too; not sure it’s the best use of one’s time to try to boil that ocean, but it should at least be educational.

        I meant to point you specifically at https://danluu.com/infinite-disk/, but I was on my phone on the train at the time.

        You’ll likely also be interested in https://danluu.com/file-consistency/ and https://danluu.com/filesystem-errors/, although those are not so much fundamental issues as “actually handling errors might be a good idea”.

        1. 2

          Thanks, great reads.

          I like files too; not sure it’s the best use of one’s time to try to boil that ocean

          I think so, actually. But the point is that we need to boil that ocean, one way or another…
          Jehanne is my attempt, it’s probably wrong (if funny), it will fail… and whatever.

          But my goal is to show that it’s a road worth exploring, full of low-hanging fruit left there just because everybody is looking the other way. I want to stimulate critical thinking, I want to spread a taste for simplicity, I want people to realize that we are just 70 years into computing, and that everything can change.

    1. 3

      What nickpsecurity said. Also, (Open)SSH is an example of an application-level protocol that natively includes encryption. There are also some tools that wrap individual connections, e.g. stunnel (OpenSSL) or Colin Percival’s spiped (custom). Also, consider certain Kerberized applications.

      But overall, you’d need a reason to not use SSL/TLS; I can think of a few reasons not to, but defaulting to “use what everyone uses” is generally a good idea.

      1. 1

        I can think of a few reasons not to…

        Please, can you elaborate? Which reasons?

        Any argument for or against will help me make an informed decision.

        1. 2

          For larger systems, read http://www.daemonology.net/blog/2011-07-04-spiped-secure-pipe-daemon.html and http://www.daemonology.net/blog/2012-08-30-protecting-sshd-using-spiped.html - basically, TLS is frighteningly complex, with all that entails. Also note that spiped has a different keying model, which can be another reason to choose something that is not TLS. (You can usually twist certificate-based authentication to fit whatever you need, though.)

          For small embedded systems, you may simply not have the space to include a TLS library, or may not have the space to include a good TLS library.

          That said, don’t roll your own if any of this is news to you.

          1. 2

            Thanks a lot!

      1. 35

        I’ll bite.

        General industry trends
        • (5 years) Ready VC will dry up, advertising revenue will bottom out, and companies will have to tighten their belts, disgorging legions of middlingly-skilled developers onto the market; salaries will plummet.
        • (10 years) There will be a loud and messy legal discrimination case ruling in favor of protecting political beliefs and out-of-work activities (probably defending some skinhead). This will accelerate an avalanche of HR drama. People not from the American coasts will continue business as usual.
        • (10 years) There will be at least two major unions for software engineers with proper collective bargaining.
        • (10 years) Increasingly, we’ll see more “coop” teams. The average size will be about half of what it is today, organized around smaller and more cohesive business ideas. These teams will have equal ownership in the profits of their projects.
        Education
        • (5 years) All schools will have some form of programming taught. Most will be garbage.
        • (10 years) Workforce starts getting hit with students who grew up on touchscreens and walled gardens. They are worse at programming than the folks that came before them. They are also more pleasant to work with, when they’re not looking at their phones.
        • (10 years) Some schools will ban social media and communications devices to promote classroom focus.
        • (15 years) There will be a serious retrospective analysis in an academic journal pointing out that web development was almost deliberately constructed to make teaching it as a craft as hard as possible.
        Networking
        • (5 years) Mesh networks still don’t matter. :(
        • (10 years) Mesh networks matter, but are a great way to get in trouble with the government.
        • (10 years) IPv6 still isn’t rolled out properly.
        • (15 years) It is impossible to host your own server on the “public” internet unless you’re a business.
        Devops
        • (5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.
        • (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.
        • (15 years) There will still be work available for legacy Rails applications.
        Hardware
        • (5 years) Alternative battery and PCB techniques allow for more flexible electronics. This initially only shows up in toys, later spreads to fashion. Limited use otherwise.
        • (5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are on permanent decline due to pathologies of selling to the cryptobutts folks at expense of building reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.
        • (10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

        ~

        I’ve got other fun ones, but that’s a good start I think.

        1. 7

          (5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.

          As of today, the public cloud is actually solving several of these issues (and doing so better than most people running their own hardware).

          (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

          Containers are actually solving some real problems; several of them had already been solved independently, but containers bring a more cohesive solution.

          1. 1

            Containers are actually solving some real problems; several of them had already been solved independently, but containers bring a more cohesive solution.

            I am interested, could you elaborate?

            1. 1

              The two main ones that I often mention in favor of containers (trying to stay concise):

              • Isolation: We previously had VMs at the virtualization level, but they’re heavy, potentially slow to boot, and opaque (try launching Xen and managing VMs on your pet server), while jail/chroot are much harder to set up, specific to each application, and don’t let you restrict resources (to my knowledge); see the sketch after this list.
              • Standard interface: Very useful for orchestration, for example. Several tools existed to deploy applications with an orchestrator, but they mostly shipped bare executables and suffered from the lack of isolation. Static compilation solved some of these issues, but not every application can be built that way.
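
              (For illustration, a minimal sketch of the Linux primitive underneath that isolation, assuming Linux, GCC/clang and root privileges; this is my own toy example, not tied to any particular container runtime: a child process gets its own UTS namespace via clone(2), so its hostname change is invisible to the parent.)

              #define _GNU_SOURCE
              #include <sched.h>
              #include <signal.h>
              #include <stdio.h>
              #include <sys/utsname.h>
              #include <sys/wait.h>
              #include <unistd.h>
              static char child_stack[1024 * 1024];
              static int child(void *arg) {
                  (void)arg;
                  // Only visible inside the new UTS namespace.
                  if (sethostname("container", 9) != 0)
                      perror("sethostname");
                  struct utsname u;
                  uname(&u);
                  printf("child sees hostname: %s\n", u.nodename);
                  return 0;
              }
              int main(void) {
                  // CLONE_NEWUTS gives the child its own hostname; needs root or CAP_SYS_ADMIN.
                  pid_t pid = clone(child, child_stack + sizeof(child_stack),
                                    CLONE_NEWUTS | SIGCHLD, NULL);
                  if (pid < 0) { perror("clone"); return 1; }
                  waitpid(pid, NULL, 0);
                  struct utsname u;
                  uname(&u);
                  printf("parent still sees hostname: %s\n", u.nodename);
                  return 0;
              }

              Mount, PID, network and user namespaces plus cgroups follow the same pattern; container runtimes mostly wrap these primitives behind a friendlier interface.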

              Containers are a solution to some problems but not the solution to everything. I just think that wishing they weren’t there probably means the interlocutor didn’t understand their benefits.

              1. 2

                I just think that wishing they weren’t there probably means the interlocutor didn’t understand their benefits.

                I’ve been using FreeBSD jails since 2000, and Solaris zones since Solaris 10, circa 2005. I’ve been writing alternative front-ends for containers in Linux. I think I understand containers and their benefits pretty well.

                That doesn’t mean I don’t think Docker, Kubernetes, and all the “modern” stuff are a steaming pile, both the idea and especially the implementation.

                There is nothing wrong with container technology; containers are great. But there is something fundamentally wrong with the way software is deployed today, using containers.

                1. 1

                  But there is something fundamentally wrong with the way software is deployed today, using containers.

                  Can you elaborate? Do you have resources to share on that? I feel a comment on Lobsters might be a bit light to explain such a statement.

                2. 1

                  You can actually set resource limits at various levels: classic Unix quotas; priorities (“nice” in sh) and setrlimit() (“ulimit” in sh); Linux cgroups etc. (which is what Docker uses, IIUC); and/or more specific solutions such as java -Xmx […].
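
                  (To make the setrlimit()/ulimit layer concrete, here’s a minimal sketch of my own, not tied to Docker or cgroups: cap the process’s address space, then watch a large allocation fail.)

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <sys/resource.h>
                  int main(void) {
                      // Cap this process's virtual address space at 512 MiB,
                      // the same knob the shell exposes as "ulimit -v".
                      struct rlimit lim = {
                          .rlim_cur = 512UL * 1024 * 1024,  // soft limit
                          .rlim_max = 512UL * 1024 * 1024,  // hard limit
                      };
                      if (setrlimit(RLIMIT_AS, &lim) != 0) {
                          perror("setrlimit");
                          return 1;
                      }
                      // A 1 GiB allocation should now fail with ENOMEM.
                      void *p = malloc(1UL << 30);
                      printf("1 GiB malloc %s\n", p ? "unexpectedly succeeded" : "failed, as intended");
                      free(p);
                      return 0;
                  }

                  cgroups cover what setrlimit() can’t (per-group CPU shares, memory, block I/O), which is the layer Docker builds on.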

                  1. 2

                    So you have to use X different tools and syntaxes to set the CPU/RAM/IO/… limits, and why use cgroups alone when you can have cgroups plus other features with containers? I mean, your answer is correct, but in reality it’s deeply annoying to work with these at large scale.

                    1. 4

                      Eh, I’m a pretty decent old-school sysadmin, and Docker isn’t what I’d consider stable. (Or supported on OpenBSD.) I think this is more of a choose-your-own-pain case.

                      1. 3

                        I really feel this debate is exactly like debates about programming languages. It all depends on your use cases and experience with each technology!

                        1. 2

                          I’ll second that. We use Docker for some internal stuff and it’s not very stable in my experience.

                          1. 1

                            If you have <10 applications to run for decades, don’t use Docker. If you have 100+ applications to launch and update regularly, or at scale, you often don’t care if 1 or 2 containers die sometimes. You just restart them, and it’s almost expected that you won’t reach 100% stability.

                            1. 1

                              I’m not sure I buy that.

                              Our testing infrastructure uses Docker containers. I don’t think we’re doing anything unusual, but we still run into problems once or twice a week that require somebody to “sudo killall docker” because it’s completely hung up and unresponsive.

                              1. 1

                                We run thousands of containers every day at $job, and it’s very uncommon to have containers crash because of Docker.

                  2. 1

                    Easier local development is a big one - developers being able to quickly bring up a full stack of services on their machines. In a world of many services this can be really valuable - you don’t want to be mocking out interfaces if you can avoid it, and better still is calling out to the same code that’s going to be running in production. Another is the fact that the container that’s built by your build system after your tests pass is exactly what runs in production.

                3. 7

                  (5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are on permanent decline due to pathologies of selling to the cryptobutts folks at expense of building reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.

                  While I might accept that VR may fail, I don’t think video card companies are reliant on VR succeeding. They have autonomous cars and machine learning to look forward to.

                  1. 2

                    (10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

                    This trend also supports a shift away from scripting languages towards Rust, Go, etc. A focus on hardware extensions (eg deep learning hardware) goes with it.

                    1. 1

                      (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

                      One can dream!

                      1. 2

                        Would you (or anyone) be able to help me understand this point please? My current job uses containers heavily, and previously I’ve used Solaris Zones and FreeBSD jails. What I see is that developers are able to very closely emulate the deployment environment in development, and don’t have to do “cross platform” tricks just to get a desktop that isn’t running their server OS. I see that particular “skill” as unnecessary unless the software being cross-platform is truly a business goal.

                        1. 1

                          I think Jessie Frazelle answers this concern perfectly here: https://blog.jessfraz.com/post/containers-zones-jails-vms/

                          P.S.: I have the same question for people who are against containers…

                      2. 1

                        (5 years) Mesh networks still don’t matter. :( (10 years) Mesh networks matter, but are a great way to get in trouble with the government.

                        Serious attempts at mesh networks have basically not existed since the 200#s, when everyone discovered it’s way easier to deploy an overlay net on top of Comcast than to make mid-distance hops with RONJA etc.

                        It would be so cool to build a hybrid USPS/UPS/Fedex batch + local realtime link powered national scale network capable of, say, 100mB per user per day, with ~ 3 day max latency. All attempts I’ve found are either very small scale, or just boil down to sending encrypted packets over Comcast.

                        1. 1

                          Everyone’s definition of mesh is different, but today there are many serious mesh networks, the main ones being Freifunk and Guifi.

                        2. 1

                          (10 years) There will be at least two major unions for software engineers with proper collective bargaining.

                          What leads you to this conclusion? From what I hear, it’s rather the opposite trend, not only in the software industry…

                          (5 years) All schools will have some form of programming taught. Most will be garbage.

                          …especially if this is taken into account, I’d argue.

                          (10 years) Some schools will ban social media and communications devices to promote classroom focus.

                          Aren’t these already banned from schools? Or are you talking about general bans?

                          1. 1

                            I like the container one; I also don’t see the point.

                            1. 1

                              It’s really easy to see what state a container is in because you can read a 200-line text file and see that it’s just Alpine Linux with X Y Z installed and this config changed. On a VM it’s next to impossible to see what has been changed since it was installed.

                              1. 3

                                It’s really easy to see what state a container is in because you can read a 200-line text file and see that it’s just Alpine Linux with X Y Z installed and this config changed.

                                I just check the puppet manifest

                                1. 2

                                  It’s still possible to change other things outside of that config. Since a container has almost no persistent state, if you change something outside of the Dockerfile it will be blown away soon.

                              2. 1

                                Containers won’t be needed because unikernels.

                              3. 1

                                All schools will have some form of programming taught. Most will be garbage.

                                and will therefore be highly desirable hires to full stack shops.

                                1. 1

                                  I would add the bottom falling out of the PC market, making PCs more expensive, as gamers and enterprise, the entire reason it still maintains economies of scale, just don’t buy new HW anymore.

                                  1. 1

                                    I used to always buy PCs, but indeed the last 5 years I haven’t used a desktop PC.

                                    1. 1

                                      If it does happen, it’ll probably affect laptops as well, but desktops especially.

                                  2. 1

                                    (5 years) All schools will have some form of programming taught. Most will be garbage.

                                    My prediction: Whether the programming language is garbage or not, provided some reasonable amount of time is spent on these courses, we will see a general improvement in the logical thinking and deductive reasoning skills of those students.

                                    (at least, I hope so)

                                  1. 5

                                    Product placement and press release. :(

                                    1. 4

                                      This is significant news in an important sector of our industry. Your reflexive negativity is destructive to this website.

                                      1. 8

                                        I don’t think the personal attack was necessary here.

                                        1. 11

                                          This is significant news in an important sector of our industry.

                                            Sure, but unfortunately we have somewhat limited space and attention bandwidth here, and if we were to support posting every piece of significant news in important sectors of our industry, we’d find ourselves flooded. There is a great site with news for hackers; this sort of stuff is a great fit for that other site!

                                          Your reflexive negativity is destructive to this website.

                                          I’m sorry if that’s how this is perceived. I’ve gone to some lengths to do better in terms of negativity. Unfortunately, it’s hard to be positive when pointing out pathological community behaviors that have actively ruined and destroyed other sites.

                                          1. 2

                                              I think you’re somewhat right; I would have posted a more technical take like this one, but didn’t see any posts about it at the time. After the other one was posted, I would have deleted this one if I had been able to.

                                      1. 1

                                        defer() is basically independent of the rest of the library, isn’t it? Might want to extract that.

                                        1. 2

                                          It’s not entirely independent, as it depends on the "it" macro to create a bunch of variables and run the deferred statements. A stand-alone implementation would require a macro you call at the beginning of a block which will contain deferred expressions, and a macro you call before every return, so it’s not as nice to use. It also relies on GNU extensions, which imo is okay for a test suite, but I’d be careful relying on them in regular code.

                                          Anyways, I did the work to extract it into its own small library: https://gist.github.com/mortie/0696f1cf717d192a33b7d842144dcf4a

                                          Example usage:

                                          #include "defer.h"
                                          #include <stdio.h>
                                          int main() {
                                              defer_init();
                                              defer(printf("world\n"));
                                              defer(printf("hello "));
                                              defer_return(0);
                                          }
                                          

                                          If you want to do anything interesting with it, feel free to.

                                        1. 3

                                            I’m still looking for a test harness that doesn’t need me to explicitly call each test/suite in main. My current approach is simple-minded code generation. Is there a way to do this that avoids autogenerating files and whatnot?

                                          1. 3

                                              There are a couple of ways I can imagine that would be possible. Currently, each top-level describe generates a function; I could have a global array of function pointers, and use the __COUNTER__ macro to automatically insert describe’s functions into that array. However, that would mean that the length of the array would have to be static. It probably wouldn’t be too bad if it were configurable by defining a macro before including the library, defaulting the length to something like 1024, though.

                                            Another solution would be to not have these top-level describes, and instead have a macro called testsuite or something, which generates a main function. This would mean that, if your test suite is in multiple files, you’d have to be very careful what you have in those files, because they would be included from a function body, but it would be doable.

                                            I think the first approach would be the best. You could then also have a runtests() macro which loops from 0 through __COUNTER__ - 2 and runs all the tests.
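
                                              Roughly, the first approach could look something like this sketch (untested, names made up; instead of __COUNTER__ it just keeps a running count, and it leans on GNU C’s constructor attribute to do the registration at startup, since C won’t let you assign into the array at file scope):

                                              #include <stdio.h>
                                              #ifndef MAX_TESTS
                                              #define MAX_TESTS 1024  // overridable before including the (hypothetical) header
                                              #endif
                                              typedef void (*test_fn)(void);
                                              static test_fn test_registry[MAX_TESTS];
                                              static int test_count = 0;
                                              // Each DESCRIBE defines a test function plus a constructor that registers it before main().
                                              #define DESCRIBE(name) \
                                                  static void test_##name(void); \
                                                  __attribute__((constructor)) static void register_##name(void) { \
                                                      test_registry[test_count++] = test_##name; \
                                                  } \
                                                  static void test_##name(void)
                                              #define RUNTESTS() \
                                                  do { \
                                                      for (int i = 0; i < test_count; i++) \
                                                          test_registry[i](); \
                                                  } while (0)
                                              DESCRIBE(arithmetic) {
                                                  printf("describe(arithmetic): 1 + 1 == 2? %s\n", 1 + 1 == 2 ? "yes" : "no");
                                              }
                                              int main(void) {
                                                  RUNTESTS();
                                                  return 0;
                                              }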

                                            1. 1

                                              That’s a great idea. Thanks!

                                              1. 2

                                                  An update: the first solution will be much harder than I expected, because in C you can’t do things like foo[0] = bar outside of a function. That means you can’t assign the function pointer to the array in the describe macro. If you could append to a macro from within a macro, you could have a macro which describe appends to which, when invoked, just calls all the functions created by describe, but there doesn’t seem to be any way to append to a macro from within a macro (though we can get close; using push_macro and pop_macro in _Pragma, it would be possible to append to a macro, but not from within another macro).

                                                It would still be possible to call the functions something deterministic (say test_##__COUNTER__), and then, in the main function, use dlopen on argv[0], and then loop from i=0 to i=__COUNTER__-2 and use dlsym to find the symbol named "_test_$i" and call it… but that’s not something I want to do in Snow, because that sounds a little too crazy :P

                                                1. 1

                                                  I appreciate the update. Yes, that would be too crazy for my taste as well. (As is your second idea above.)

                                                  1. 1

                                                    FWIW, you can do this by placing the function pointer in a custom linker section with linker-inserted begin/end symbols; unfortunately, that requires your user to use a custom linker script, which will be annoying for them.
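
                                                      A minimal sketch, assuming GCC/clang with GNU ld (which provides __start_<section>/__stop_<section> symbols for sections whose names are valid C identifiers); names are made up for illustration:

                                                      #include <stdio.h>
                                                      typedef void (*test_fn)(void);
                                                      // Drop a pointer to each test into a custom section named "test_fns".
                                                      #define REGISTER_TEST(fn) \
                                                          static test_fn fn##_entry __attribute__((used, section("test_fns"))) = fn
                                                      // The linker inserts these begin/end symbols for the section.
                                                      extern test_fn __start_test_fns[];
                                                      extern test_fn __stop_test_fns[];
                                                      static void test_hello(void) { printf("hello from test_hello\n"); }
                                                      REGISTER_TEST(test_hello);
                                                      static void test_world(void) { printf("hello from test_world\n"); }
                                                      REGISTER_TEST(test_world);
                                                      int main(void) {
                                                          for (test_fn *t = __start_test_fns; t != __stop_test_fns; t++)
                                                              (*t)();
                                                          return 0;
                                                      }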

                                            1. 14

                                              All of my upward moves have been internal, and of the form “well, we agree that I’ve been doing the job pretty successfully; let us make my title match what I’m actually doing”. IME, seniority is as much taken as it is given. (Not sure to what extent my experience is typical.)

                                              (E.g. if you want to lead, mentor an intern/junior/…, or arrange to lead a small low-stakes internal project; if you want to architect, shadow an experienced architect, provide designs for your own components, and/or propose important refactorings; etc.)

                                              1. 7

                                                IME, seniority is as much taken as it is given.

                                                Bingo. Show initiative in a polite yet assertive way, deliver results, and talk about those results to the right people.

                                                1. 4

                                                  seniority is as much taken as it is given

                                                  This sounds like good advice. Perhaps it is more applicable to intra-company movements than to moving to a new company. Hiring markets are probably more efficient than intra-company hierarchies; that is, internally companies could be stifling a lot of value by not helping juniors move into seniority, and this inefficiency can be capitalized on by just taking the responsibilities of seniority for yourself.

                                                  1. 3

                                                    IME moving between companies is always where you move up

                                                1. 5

                                                  Several people here are recommending CMake as an alternative. I’ve only interacted with CMake at a fairly surface level, but found it pretty unwieldy and overcomplicated (failed the “simple things should be simple” test). Does it have merits that I wasn’t seeing?

                                                  1. 3

                                                    CMake can generate output both for Unix and for Windows systems. That’s one (good) reason lots of C++ libraries use CMake.

                                                    1. 2

                                                      CMake is pretty nice and has nice documentation. You can also pick stuff up from reading other people’s CMakeLists. For simple projects the CMake file can be pretty compact.

                                                      1. 3

                                                        I actually found the CMake documentation to be quite terrible for new users. The up-to-date documentation factually describes what the different functions do, but has very few examples of how to actually write real-world CMake scripts. There are a few official tutorials that try to do this, but they are made for ancient versions like CMake 2.6. So in order to learn how to use CMake, you are stuck reading through tons of other people’s scripts to try to deduce some common best practices.

                                                        While modern CMake is not terrible, you often have to restrict yourself to some ancient version (2.8.6 I believe is common) in order to support certain versions of CentOS/RHEL/Ubuntu LTS (and there were some big changes in CMake around 2.8.12/3.0).

                                                        Also, having string as the only data type has led to some absurd corner cases.

                                                      1. 4

                                                        While small on the surface, it can’t stand alone: it includes bsd.prog.mk, which has some, ahem, complexity.

                                                        (I couldn’t tell if your comment implies BSD makefiles are hairballs or if it implies they’re simple ;))

                                                        1. 3

                                                          bsd.prog.mk is quite the library, but CMake is much larger; I think it was meant positively.

                                                      1. 37

                                                        It wasn’t hate speech directed at some group. It was a self-described “hate post” with a one-line knee-jerk brush-off of Electrum. That’s a worthless troll.

                                                        I only meant to delete the parent comment and didn’t expect the entire thread to get deleted. I’ll see if I can restore the thread without it, but moderation options are pretty limited.

                                                        In hindsight, I see how the moderation log was misleading if you didn’t recognize the comment; I’ll write more useful messages in the future.

                                                        1. 6

                                                          Yeah, this seems to be a bug, probably because it’s a top-level comment.

                                                          1. 4

                                                            It’s not a bug but no reason was given. Not sure if I should reverse that or not.

                                                            1. 1

                                                              I did reverse it.

                                                            2. 3

                                                              Woo, glad it’s not a new mod policy :D - Thanks for digging in!

                                                            3. 6

                                                              I’m not sure if you made the right call here, but thanks for your efforts - communities need moderation, and it’s a hard and often thankless job. I’m happy that lobste.rs does have people willing to take that job!

                                                            1. 1

                                                              This is quite neat. One question from someone who didn’t compile the code and play with it: the “XML diff” algorithm BULD seems almost insensitive to ordering, but the order of text matters a lot (and classic diff - and your merge algorithm - are very linear comparisons.) Does the algorithm “behave” once you start moving blocks?

                                                              Thanks for sharing!

                                                              1. 2

                                                                BULD works on ordered trees—that was one of the reasons it was chosen. And it indeed supports the “move” concept in the edit script. In the lowdown implementation (specifically, in the merging algorithm), moves are made into insert/delete simply for the sake of readability of the output. It’s straightforward to extend the API to have “moved from” and “moved to” bits. Then have a little link in the output. Maybe in later versions…

                                                              1. 1

                                                                (Minor typo: “rooted at thd node”. Might want to fix that.)

                                                                1. 2

                                                                  Thanks, noted! (Will push when document is next updated.)

                                                                1. 1

                                                                  tl;dr: Attaching a decorator @deco to foo’s definition is semantically equivalent to writing foo = deco(foo) after foo’s definition. Multiple decorators attached to the same definition are applied in the reverse order in which they appear in the program text. The consequences of these two facts are exactly what you would expect if you already know the remainder of Python’s syntax and semantics.

                                                                  1. 1

                                                                    True, but the article also makes the (harder!) case that using decorators in “creative” ways may not actually be a bad idea in all cases. I found it worth reading for that reason.

                                                                    1. 1

                                                                      The article doesn’t make a very good case. The most “creative” snippets (22 and 23) are also the ugliest ones.

                                                                      1. 2

                                                                        One good example of a “creative” decorator is in the contextlib package: @contextmanager takes a function and returns a callable object.

                                                                  1. 4

                                                                    Pretty unpleasant results - often-inconsistent behavior makes it hard to even define “performance after warmup”, and lots of measurements end up finding that the “steady state” is either unsteady or worse than the startup behavior.

                                                                    Nice, and quite thorough, work!

                                                                    1. 2

                                                                      Thanks :)

                                                                    1. 1

                                                                      Is that company using OpenBSD for any of their products? They seem to be working on some innovative phone features, but it’s not clear (to me) what the underlying OS is.

                                                                      Would be pretty amazing though, if it was based on OpenBSD.

                                                                      1. 5

                                                                        No idea. But note that Android borrows big chunks of OpenBSD libc for bionic, for instance; it’s entirely possible to be grateful to OpenBSD without using full OpenBSD.

                                                                        1. 3

                                                                          They might also rely on it internally.

                                                                        1. 7

                                                                          A summary of some of what I found during my research. Interestingly, the hardware engineer who taught me a lot about subversion, based on his own experience doing and countering it, showed up again on Schneier’s blog. I told him to prove his identity with an example of analog subversion. His reply has nice examples of how easy it is to slip something through with no hope of verifying that stuff with any simple or cheap method:

                                                                          https://www.schneier.com/blog/archives/2017/08/nsa_collects_ms.html#c6757659

                                                                          I mean, it’s estimated there are only around 2,000 engineers worldwide who understand analog well enough for high-end ASICs. It’s also a black art among the rest, with all kinds of tricks going back decades. So, even the nodes you can review might have tricks built into them that a talented attacker can use to extract keys. We’ve been seeing mini-examples of that with side channel work on things like power analysis.

                                                                          1. 2

                                                                            Wow, this is fascinating stuff and something I don’t see discussed nearly enough. Thanks for sharing your research!

                                                                            I am hoping for mini-fabs (if I’m using that term correctly) so that we can decentralize chip design and manufacturing. Everyone should be able to print their CPUs locally. Any hope of that anytime soon?

                                                                            1. 2

                                                                              Mapper is a company you want to watch here.

                                                                              (For values of “everyone” and “locally” in the “couple millions of euro’s” range, IIRC.)

                                                                              1. 1

                                                                                That would probably be this:

                                                                                http://www.sematech.org/meetings/archives/litho/forum/20040128/presentations/29_CPM_Kruit_Mapper.pdf

                                                                                Thanks for the tip! That looks nice. Especially if they can pull off 10 wafers per hour at low cost at 45nm. The technical details are also a good example of why this stuff might be mind-bogglingly hard to verify, as I just wrote in the other comment here. So many technologies come together, from MEMS to ASICs to X-rays, to make this stuff work.

                                                                              2. 2

                                                                                It’s ridiculously hard science and tech. The amount of money, labor, and PhDs that go into each process node or set of advances is mind-boggling. They have patents on most of it. The barrier to entry is high. The simplest setup is tech that directly writes the chip onto the wafers without steppers or anything. eASIC uses eBeam Workstations for that sort of thing. Their prototyping runs… a loss leader, so numbers might be off… are $50-60k for around 50 chips. The machines themselves are very expensive. Only so many companies make machines that can do stuff like this.

                                                                                There was a fab in Europe I have in my bookmarks somewhere that operated solely with such machines. It gave rapid turn-around times. Went out of business, I think. Tough market. However, it shows that groups (e.g. universities or businesses) could partner together to invest in local companies doing that with specific gear. The trick is then that the supplier of the thing printing or verifying the chips might be subverted or malicious. The tech is so complex it might be too hard to verify that’s not the case.

                                                                                So, it’s an open, expensive, and complex problem if you want chips that are efficient. Playing shell games hiding what equipment and fabs are in use for each run was a temporary solution I came up with. Also, doing a high-performance, massive FPGA that we map other stuff on. It gets periodically checked by ChipWorks and other firms.

                                                                            1. 1

                                                                              WTF? You have to sign in to read medium articles now? Do they think this will make me read more articles? Sigh, another domain for the shit list.

                                                                              (Nobody else complaining out of politeness? Or you all signed up already? Or did I just win the A/B lottery?)

                                                                              (So it seems to come and go. When I’m “lucky”, it says “you’ve already read one article this month. Sign in to read more. Sign in with Google. Sign in with Facebook. Sign in…” but there’s no way to avoid signing in. Switch browsers, no popup.)

                                                                              1. 4

                                                                                I’m for this change, it will make it much easier to not read Medium.

                                                                                1. 1

                                                                                  I’m not signed in, but I remember that Medium bugged me once to log in. I do believe that I had the option to choose “go away and don’t bug me again”.

                                                                                  1. 1

                                                                                    It doesn’t ask me to sign in (Safari on iPhone), FWIW.

                                                                                    1. 1

                                                                                      Works for me on mobile

                                                                                      1. 1

                                                                                        I’m on desktop Edge. It’s not asking me to sign in. Perhaps it knows you have an account? Try clearing your cookies.

                                                                                      1. 1

                                                                                        Gliffy. Not recommended.

                                                                                        1. 5

                                                                                        I’m a little disappointed that this “critical vulnerability” is just a local side channel attack that requires an attacker to have access to run arbitrary code on your system. Furthermore, the attack seems to require the victim process to use shared memory that the attacker can flush from cache.

                                                                                          That said, js + wasm look worse and worse in light of local side channel attacks. Maybe one day someone will weaponize them to both detect interesting events (use of private keys?) and gather & leak info about them. And pledge & privileges will not help.

                                                                                          1. 4

                                                                                            Yes; it’s good work, but this article way over-hypes it.

                                                                                            1. 2

                                                                                              The article indicates keys can be leaked across VMs - plenty of people are running on shared hardware, and they’re vulnerable to a remote attack, no?

                                                                                              1. 4

                                                                                                I guess it depends on your definition of remote attack. Yes, if you run your code on some shared host with a hypervisor that snoops your memory, and an attacker can run his code on that same host, I guess you might consider that attack “remote.”

                                                                                                But to me they are all on the same system, and this is why I’m just a little disgusted with VMs. They are all too cheap and convenient for people to even consider the risks of running their applications on the same system with other unknown users.

                                                                                                Think about it – a headline like “this attack steals your keys across VMs!” is sure to grab your attention. But what are you actually doing with these private keys? Surely something that should be kept private or secret!

                                                                                              Yet, unless all your application logic is designed to run in constant time with special attention to making every page unique, private information is at risk even if the keys are not compromised. Focusing on the keys and thinking the application is safe because they have carefully implemented crypto that isn’t susceptible to side channel attacks, people miss the forest for that single tree. Everything you do could be susceptible to information leakage via side channel attacks unless you’re careful about it. Keys are an obviously interesting target, but they are just the tip of the iceberg.

                                                                                              It would be nice if all VPS providers disclosed whether they’re snooping on you or not. Apart from that, there ought to be some mitigations available (e.g. randomized builds, random junk on allocated pages and in padding, etc.). But if everyone used these, there’s not much point in snooping to begin with. Maybe people should just stop sharing memory across VPSen to begin with.

                                                                                                Of course that would make it a little less cheap as memory requirements increase. It’s the price you pay…

                                                                                                1. 2

                                                                                                  But to me they are all on the same system, and this is why I’m just a little disgusted with VMs

                                                                                                  I can’t agree more - these attacks will only become more sophisticated and pervasive as we move more into shared resources.

                                                                                            1. 5

                                                                                              This is pretty interesting, TEMPEST omg, all that, but… questionable choice of target. They start by mentioning that this is entirely feasible against a network encryption appliance, but then they target some weird underpowered FPGA thing? Their target, as I understand it, is a software T-table implementation (which has other side channel leaks as well). But a real network appliance would be using a hardware implementation, no? Can you demo this attack against some more common hardware, like a MacBook, which would be using AES-NI on the CPU?

                                                                                              1. 4

                                                                                                I think we did mix too many attack models, especially since we/I tried to keep the paper readable for a general technical audience (which made it hard to concisely define these things). The following are indeed two distinct points:

                                                                                                • The “network encryption appliance” scenario is intended to show that many-trace chosen-input attacks are feasible in at least one relevant(-to-us) threat model.

                                                                                                • As discussed in https://news.ycombinator.com/item?id=14618916, “the contribution of this work is mostly in showing that you can break realistic-but-not-great implementations very quickly, cheaply, and without needing to open most enclosures.” We attack a MicroSemi SmartFusion 2’s ARM core (arguably a “weird underpowered FPGA thing”), a Xilinx Zynq’s ARM core (not a bad model for an IoT-ish device), and a naive hardware implementation (that mostly didn’t make it into the blog). The internship was mostly run on what one of our FPGA experts had lying on his desk. ;-)

                                                                                              A specialized network crypto appliance definitely needs (and has) a better crypto core than the “realistic-but-not-great” cores we attacked here. (At least, ours do. No promises about today’s model of cheap Linux router. ;-) )

                                                                                                1. 2

                                                                                                  Oh, nice, thanks for clarifying. Good work.