1. 6

    Interesting that this same thing happened 20 years ago, in 1999, when consumer GPS systems, while not as popular and prevalent as they are today, did exist. I don’t remember hearing anything about a GPS rollover at the time, even though the vaguely-similar Y2K bug was all over the news and pop culture.

    1. 4

      Back in 1999 consumer GPS units were pretty rare eg: Garmin 12 and similar. Selective Availability severely limited their accuracy so they weren’t much use in town but they were still useful for eg: offroad motorcycling in remoter parts & doing daft things like finding confluence points: http://confluence.org/photo.php?visitid=4068&pic=3 But GPSes were pretty rare and not really thought of as a “computer thing” … I was involved in Y2K planning & testing but don’t remember this being discussed at the time.

    1. 3

      Lots more detail in this other blog post: http://make.mad-scientist.net/papers/advanced-auto-dependency-generation/#tldr

      I’d never heard of gcc -MF before so this is pretty cool …

      1. 1

        I’m still trying to find a primary reference for the quote about the Forth VLSI compiler being too limited to design a “real” microprocessor but that’s okay because it was enough to design a CPU capable of running Forth … any hints?

        1. 1

          I showed Chuck Moore’s description of that to an ASIC hacker. He said it was very simplistic – no way you could do with that what people are doing with standard tools.

          So we would have to ask Yossi Kreinin who that ASIC hacker is?

          1. 1

            Although I can’t do a counter in Verilog, I do know a lot about EDA processes, since I collected and skimmed tons of papers on hardware to have something ready for open-source attempts at EDA. One site described it as a whole series of NP-hard problems you have to solve with multi-variable optimization. That looks correct based on what I found. The design rules to enforce and problems to solve go up with each process shrink. I think it was a few hundred on the processes Moore uses. It’s about 2,500 on 28nm. It gets to the point that they have to do things like image recognition on the circuits to find patterns that do weird things, then rewire each into an equivalent circuit that lacks those patterns. And that’s just for one or a few rules.

            I’m not even sure someone could hold a design of significant size in their head on recent nodes. All those rules, along with their interactions, must add up to huge complexity even on a stack processor. Abstraction is critical on those nodes. Most designers just synthesize the RTL from high-level descriptions to manage complexity. I think that’s the right thing to do, too, since we shouldn’t have to understand a pile of booleans. They’re effectively meaningless to a human, whereas the high-level ASMs or FSMs do have meaning. So, they do high-level stuff with multiple stages of synthesis, optimization, verification, testing, and inspection.

        1. 1

          I’ve been playing around with similar ideas: https://nick.zoic.org/art/programming-beyond-text-files/

          1. 1

            That’s a good start, but I feel one of the main issue is navigation (with mouse and keyboard), and the other is dealing with eventual consistency with the schema – it would be interesting to see how you would address these in WASTE. If you’re interested in dataflow and visual languages, I’d recommend reading the corresponding chapter in Wouter van Oortmerssen’s Aardapel thesis. It features some interesting insights into the advantages and drawbacks of visual and dataflow languages.

            1. 1

              Thanks! It’s something I’m messing with on and off as time allows, so it’s evolving very slowly.

              The general idea was to force the document to always be consistent with the schema by having a kind of typeahead buffer (keystrokes don’t commit to the tree until they are parseable) and/or by inserting extra leaves (you type ‘and’ at the end of an expression, it adds a temporary ‘true’ on the end to make that into a proper tree, and lets you overtype that with whatever you were about to say).
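
              To make that concrete, here’s a toy sketch in Python of the placeholder-leaf trick. The grammar (names joined by ‘and’/‘or’) and all the function names are invented here for illustration; this isn’t WASTE’s actual internals.

```python
# Toy sketch of the placeholder-leaf idea: when the typed tokens end in a
# dangling operator, append a temporary 'true' leaf so the tree stays
# well-formed, and let the user overtype it later. Grammar and API are
# invented for illustration.

PLACEHOLDER = "true"

def parse_with_placeholder(tokens):
    """Return (tree, used_placeholder); tree is a nested tuple like
    ('and', 'x', 'y')."""
    used_placeholder = False
    if tokens and tokens[-1] in ("and", "or"):
        tokens = tokens + [PLACEHOLDER]  # temporary leaf, overtyped later
        used_placeholder = True
    if not tokens:
        return None, False
    tree = tokens[0]
    i = 1
    while i < len(tokens):
        tree = (tokens[i], tree, tokens[i + 1])
        i += 2
    return tree, used_placeholder

print(parse_with_placeholder(["x", "and"]))  # (('and', 'x', 'true'), True)
```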

              I’d like it to work with existing languages eg: Python/Javascript/Haskell because I’m more interested in the user interface aspect than the language design on this particular idea, but obviously that brings its own challenges.

              Aardapel looks interesting, thanks for the reference, I’ll have a read!

          1. 36

            When reading this article I wanted to echo basically the same thing that Daniel Stenberg said.

            DoH is necessary because the DNS community messed up over the past two decades. Instead of privacy plus hop-to-hop authentication and integrity, they picked end-to-end integrity protection with no privacy (DNSSEC), and wasted two decades heaping that unbelievable amount of complexity on top of DNS. Meanwhile, cleanup of basic protocol issues with DNS crawled along at a glacial pace.

            This is why DNSSEC will never get browser support, but DoH is getting there rapidly. It solves the right problems.
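
            For reference, a DoH query per RFC 8484 is just an ordinary DNS wire-format message, base64url-encoded without padding into an HTTPS GET. A minimal stdlib-only Python sketch (the resolver URL is only an example, and nothing is actually sent here):

```python
import base64
import struct

def doh_get_url(name, resolver="https://cloudflare-dns.com/dns-query"):
    """Build an RFC 8484 DoH GET URL for an A-record lookup, constructing
    the DNS wire-format query by hand (stdlib only)."""
    # 12-byte DNS header: ID=0 (RFC 8484 suggests 0 for cacheability),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # base64url without padding, as the RFC requires for the GET form
    dns = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={dns}"

print(doh_get_url("example.com"))
```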

            1. 4

              I haven’t studied DoH or DoT enough to feel comfortable talking about the solutions, but on the requirements side, intuitively I don’t get where this all-consuming “privacy” boundary is supposed to be. Is the next step that all browsers will just ship with mandatory VPNs so nobody can see what IP address I’m talking to? (Based on history, that wouldn’t really surprise me.) So then there’s a massive invisible overlay network just for the WWW?

              And by “nobody” I mean nobody who doesn’t really matter anyway, since I’d think no corporation with an extensive network, nor any country with extensive human rights problems, is going to let you use either protocol anyway (or they’ll require a MITM CA).

              1. 5

                The end game is all traffic is protected by a sort of ad hoc point to point VPN between endpoints. There can be traffic analysis but no content analysis.

                1. 6

                  We’re slowly moving towards “Tor”. It seems all the privacy enhancements being implemented slowly build up to something that Tor has already provided for a long time…

                  1. 4

                    “Tor all the things” would be awesome… if it could be fast

                    1. 2

                      Or what dnscurve did years ago.

                    2. 1

                      But the point of this seems to be making the “endpoint” private as well. The line between “traffic” and “content” is ever blurrier — I wouldn’t have thought DNS is “content”. If it is, then I don’t know why IP addresses aren’t “content” just as much. Is this only supposed to improve privacy for shared servers?

                      1. 8

                        I’ve never thought of the content of DNS packets as anything other than content. Every packet has a header containing addresses and some data. The data should be encrypted.

                        1. 1

                          I don’t think the argument is that simple. ICMP and ARP packets are also headers and data, but that data surely isn’t “content”. I would have made your statement just about application UDP and TCP.

                          I think of “content” as what applications exchange, and “traffic” (aka “metadata”) as what the network that connects applications needs to exchange to get them connected. Given that both DNS names and IP addresses identify endpoints, it’s not obvious to me why DNS names are more sensitive than IP addresses. The end result of a DNS lookup is that you immediately send a packet to the resulting IP address, which quite often identifies who you’re talking to just as clearly as the DNS name.

                          No doubt I’m just uneducated on this — my point was I don’t understand where that line is being drawn. When I try to follow this line of reasoning I end up needing a complete layer-3 VPN (so you can’t even see the IP addresses), not just some revisions to the DNS protocol.

                          1. 2

                            The end result of a DNS lookup is that you immediately send a packet to the resulting IP address

                            This is a very limited view of DNS.

                            1. 1

                              Is there another usage of DNS that’s relevant to this privacy discussion that’s going on?

                              1. 3

                                Most browsers do DNS prefetching, which reveals page content even for links you don’t visit.

                                1. 1

                                  Good point! It makes me think that perhaps we should make browsers continually prefetch random websites that the users don’t visit, which would improve privacy in much the same way as the CDNs do. (Actually, I feel like that has been proposed, though I can’t find a reference.)

                                  iTerm had a bug in which it was making DNS requests for bits of terminal output to see if they were links it should highlight. So sometimes content does leak into DNS — by either definition.

                                2. 1

                                  CNAME records, quite obviously, for one

                                  1. 1

                                    OK, obviously, but then is there something relevant to privacy that you do with CNAME records, other than simply looking up the corresponding A record and then immediately going to that IP address?

                                    If the argument is “ah, but the A address is for a CDN”, that thread is below…I only get “privacy” if I use a CDN of sufficient size to obscure my endpoint?

                                    1. 3

                                      OK, obviously, but then is there something relevant to privacy that you do with CNAME records, other than simply looking up the corresponding A record and then immediately going to that IP address?

                                      I resolve some-controversial-site-in-my-country.com to CNAME blah.squarespace.com. I resolve that to A {some squarespace IP}

                                      Without DoH or equiv, it’s obvious to a network observer who I’m talking to. With it, it is impossible to distinguish from thousands of other sites.

                                      If the argument is “ah, but the A address is for a CDN”, that thread is below…I only get “privacy” if I use a CDN of sufficient size to obscure my endpoint?

                                      Yes, this doesn’t fix every single privacy issue. No, that doesn’t mean it doesn’t improve the situation for a lot of things.

                          2. 5

                            IP addresses are content when they are A records to your-strange-porno-site.cx or bombmaking-101.su.

                            They are metadata when they redirect to *.cloudfront.net, akamiedge.net, cdn.cloudflare.com, …, and huge swaths of the Internet are behind giant CDNs. Widespread DoH and ESNI adoption will mean that anyone between you and that CDN is essentially blind to what you are accessing.

                            Is this better? That’s for you to decide ;)

                            1. 6

                              Well, here again I don’t quite get the requirements. I’m not sure it’s a good goal to achieve “privacy” by routing everything through three giant commercial CDNs.

                              1. 3

                                Because three CDNs are literally the only uses of Virtual Hosting and SNI on the entire internet?

                                I’d venture to say that the overwhelming majority of non-corporate, user-generated content (and a large number of smaller business sites) are not hosted at a dedicated IP. It’s all Shopify equivalents and hundreds of blog and CMS hosting services.

                                1. 1

                                  Well, the smaller the host is, the weaker the “security” becomes.

                                  Anyway, I was just trying to understand the requirements behind this protocol, not make a value judgment. Seems like the goal is increased obscurity for a large, but undefined and unstable, set of websites.

                                  If I were afraid of my website access being discovered, I personally wouldn’t rely on this mechanism for my security, without some other mechanism to guarantee the quantity and irrelevance of other websites on the same host/proxy. But others might find it useful. It seems to me like an inelegant hack that is partially effective, and I agree it’s disappointing if this is the best practical solution the internet engineering community has come up with.

                                  1. 2

                                    I have multiple subdomains on a fairly small site. Some of them are less public than others, so it would be nice not to reveal their presence.

                    1. 4

                      JIRA doesn’t lie

                      Nonsense.

                      1. 3

                        It’s bucketing down, so I’m working on FuPy, or at least getting the build environment up and running at home.

                        1. 4

                          I had no idea notebooks worked like that. I assumed they’d maintain a dependency graph and re-run cells like a spreadsheet. That’s… awful.

                          1. 3

                            Mathematica supports that through Dynamic cells, though they aren’t the default type of cell.

                            1. 3

                              I started on this with my jupyter-like-thing-for-micropython WebPad, although at the moment it is just a linear chain of blocks. A dependency graph is a great idea though.

                              1. 2

                                That sounds like an interesting way to do notebooks, but it would either require a different language than python or an explicit way of tracking cell dependencies.

                                In Excel it works because the expression language is built with dependency tracking as a consideration. You could theoretically use event handlers or an FRP-style setup for doing the dependency tracking, but either seems like it’d be awkward in Python, due to its lack of elegant ways of expressing multiline anonymous functions.
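
                                A minimal sketch of what explicit dependency tracking could look like in Python. The Notebook/cell API here is invented for illustration (and there’s no cycle detection), but it shows the spreadsheet behaviour: changing one value re-runs only its downstream cells.

```python
# Minimal sketch of spreadsheet-style cells with explicit dependency
# tracking: changing one value re-runs only downstream cells. The
# Notebook/cell API is invented for illustration; no cycle detection.

class Notebook:
    def __init__(self):
        self.cells = {}   # name -> (dependency names, function)
        self.values = {}

    def cell(self, name, deps=()):
        """Decorator registering a cell computed from its dependencies."""
        def register(func):
            self.cells[name] = (tuple(deps), func)
            self._run(name)
            return func
        return register

    def _run(self, name):
        deps, func = self.cells[name]
        self.values[name] = func(*(self.values[d] for d in deps))
        self._rerun_dependents(name)

    def _rerun_dependents(self, name):
        for other, (odeps, _) in self.cells.items():
            if name in odeps:
                self._run(other)

    def set(self, name, value):
        """Change a raw input value and propagate, spreadsheet-style."""
        self.values[name] = value
        self._rerun_dependents(name)

nb = Notebook()
nb.values["x"] = 2

@nb.cell("doubled", deps=["x"])
def doubled(x):
    return x * 2

@nb.cell("plus_one", deps=["doubled"])
def plus_one(d):
    return d + 1

nb.set("x", 10)
print(nb.values)  # {'x': 10, 'doubled': 20, 'plus_one': 21}
```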

                              1. 3

                                https://nick.zoic.org/ … a fair bit of Python, MicroPython, IoT kind of stuff.

                                Doesn’t actually have an RSS feed right now, I probably should fix that.

                                1. 1

                                  OK, so now it has an RSS feed.

                                1. 12

                                  A realization I recently had:

                                  Why don’t we abstract away all display affordances from a piece of code’s position in a file? That is, the editor reads the file, parses its AST, and displays it according to the programmer’s preference (e.g., elastic tabstops, elm-like comma-leading lists, newline/no-newline before opening braces, etc). And prior to save, the editor simply runs it through an uncustomized prettier first.

                                  There are a million and one ways to view XML data without actually reading/writing pure XML. Why not do that with code as well?
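
                                  Python’s stdlib can already sketch the “normalize before save” half of this, assuming Python 3.9+ for ast.unparse; the per-programmer display side would be the editor’s job:

```python
import ast

def canonicalize(source):
    """Parse to an AST and pretty-print it back, discarding the author's
    layout choices -- the 'uncustomized prettier before save' step.
    Requires Python 3.9+ for ast.unparse."""
    return ast.unparse(ast.parse(source))

messy = "x=[1,2,\n      3]\nif x :\n    print( x )"
print(canonicalize(messy))
# prints:
# x = [1, 2, 3]
# if x:
#     print(x)
```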

                                  1. 4

                                    This idea has been floating around the interwebz for a long time. I recall it being stated almost verbatim on Reddit, HN, and probably on /.

                                    1. 6

                                      And once you take it a step further, it’s clear that it shouldn’t be in a text file in the first place. Code just isn’t text. If you store it as a tree or a graph in some sort of database, it becomes possible to interact with it in much more powerful ways (including displaying it any way you like). We’ve been hobbled by equating display representation with storage format.

                                      1. 7

                                        This talk touches on this issue, along with some related ones and HCI in general: Bret Victor: The Future of Programming

                                        1. 2

                                          God, I have been trying to recall the name of this talk for ages! Thank you so much, it is a great recommendation

                                        2. 5

                                          Text is great when (not if) your more complicated tools fail or do something you can’t tolerate, and you need to fall back on tools that don’t Respect The Intent of designers who, for whatever reason, don’t respect your intent or workflow. Sometimes, solving a problem means working around a breakage, whether or not that breakage is intentional on someone else’s part.

                                          Besides, we just (like, last fifteen or so years) got text to the point where it’s largely compatible. Would be a shame to throw that away in favor of some new AST-database-thing which only exists on a few platforms.

                                          1. 1

                                            I’m not sure I get your point about intent. Isn’t the same already true of, say, compilers? There are compiler bugs that we have to work around, there are programs that seem logical to us but the compiler won’t accept, and so on. Still, everybody seems to be mostly happy to file a compiler bug or a feature request, and live with a workaround for the present. Seems like it works well enough in practice.

                                            I understand your concern about introducing a new format but it sounds like a case of worse-is-better. Sure, we get a lot of convenience from the ubiquity of text, but it would nevertheless be sad if we were stuck with it for the next two centuries.

                                            1. 1

                                              With compilers, there are multiple of them for any given language, if the language is important enough, and you can feed the same source into all of them, assuming that source is text.

                                              1. 2

                                                I’ve never seen anyone casually swap out the compiler for production code. Also, for the longest time, if you wrote C++ for Windows, you pretty much had to use the Microsoft compiler. I’m sure that there are many embedded platforms with a single compiler.

                                                If there’s a bug in the compiler, in most cases you work around it, then patiently wait for a fix from the vendor.

                                                So that’s hardly a valid counterpoint.

                                                1. 1

                                                  Re: swapping out compiler for production code: most if not all cross-platform C++ libraries can be compiled on at least llvm, gcc and msvc.

                                                  1. 1

                                                    Yes, I’m aware of that, but what does it have to do with anything I said?

                                                    EDIT: Hey, I went to Canterbury :)

                                                    1. 1

                                                      “I’ve never seen anyone casually swap out the compiler for production code” sounded like you were saying people didn’t tend to compile the same production code on multiple compilers, which of course anyone that compiles on windows and non-windows does. Sorry if I misinterpreted your comment!

                                                      My first comment is in response to another Kiwi. Small world. Pretty cool.

                                          2. 1

                                            This, this, a thousand times this. Text is a good user interface for code (for now). But it’s a terrible storage and interchange format. Every tool needs its own parser, and each one is slightly different; it begs the question of how much cpu and programmer time we waste going from text<->ast<->text.

                                            1. 2

                                              Yeah, it’s obviously wasteful and limiting. Why do you think we are still stuck with text? Is it just sheer inertia and incrementalism, or does text really offer advantages that are challenging to recreate with other formats?

                                              1. 7

                                                The text editor I use can handle any computer language you can throw at it. It doesn’t matter if it’s BASIC, C, BCPL, C++, SQL, Prolog, Fortran 77, Pascal, x86 Assembler, Forth, Lisp, JavaScript, Java, Lua, Make, Hope, Go, Swift, Objective-C, Rexx, Ruby, XSLT, HTML, Perl, TCL, Clojure, 6502 Assembler, 68000 Assembler, COBOL, Coffee, Erlang, Haskell, Ocaml, ML, 6809 Assembler, PostScript, Scala, Brainfuck, or even Whitespace. [1]

                                                Meanwhile, the last time I tried an IDE (last year I think) it crashed hard on a simple C program I attempted to load into it. It was valid C code [2]. That just reinforced my notion that we aren’t anywhere close to getting away from text.

                                                [1] APL is an issue, but only because I can’t type the character set on my keyboard.

                                                [2] But NOT C++, which of course, everybody uses, right?

                                                1. 0

                                                  To your point about text editors working with any language, I think this is like arguing that the only tool required by a carpenter is a single large screwdriver: you can use it as a hammer, as a chisel, as a knife (if sharpened), as a wedge, as a nail puller, and so on. Just apply sufficient effort and ingenuity! Does that sound like an optimal solution?

                                                  My preference is for powerful specialised tools rather than a single thing that can be kind of sort of applied to a task.

                                                  Or, to approach from the opposite direction, would you say that a CAD application or Blender are bad tools because they only work with a limited number of formats? If only they also allowed you to edit JPEGs and PDFs, they would be so much better!

                                                  To your point about IDEs: I think that might even support my argument. Parsing of freeform text is apparently sufficiently hard that we’re still getting issues like the one you saw.

                                                  1. 9

                                                    I use other tools besides the text editor—I use version control, compilers, linkers, debuggers, and a whole litany of Unix tools (grep, sed, awk, sort, etc). The thing I want to point out is that as long as the source code is in ASCII (or UTF-8), I can edit it. I can study it. I might not be able to compile it (because I lack the INRAC compiler), but I can still view the code. How does one “view” Smalltalk code when one doesn’t have Smalltalk? Or Visual Basic? Last I heard, Microsoft wasn’t giving out the format for Visual Basic programs (and good luck even finding the format for VB from the late 90s).

                                                    The other issue I have with IDEs (and I will come out and say I have a bias against the things because I’ve never had one that worked for me for any length of time without crashing, and I’ve tried quite a few over 30 years) is that you have one IDE for C++, and one for Java, and one for Pascal, and one for Assembly [1] and one for Lua and one for Python and man … that’s just too many damn environments to deal with [2]. Maybe there are IDEs now that can work with more than one language [3] but again, I’ve yet to find one that works.

                                                    I have nothing against specialized tools like AutoCAD or Blender or PhotoShop or even Deluxe Paint, as long as there is a way to extract the data when the tool (or the company) is no longer around. PhotoShop and Deluxe Paint work with defined formats that other tools can understand. I think Blender works with several formats, but I am not sure about AutoCAD (never having used it).

                                                    So, why hasn’t anyone stored and manipulated ASTs? I keep hearing cries that we should do it, and yet no one has done it … I wonder if it’s harder than you imagine …

                                                    Edited to add: Also, I’m a language maven, not a tool maven. It sounds like you are a tool maven. That colors our perspectives.

                                                    [1] Yes, I’ve come across several of those. Never understood the appeal …

                                                    [2] For work, I have to deal with C, C++, Lua, Make and Perl.

                                                    [3] Yeah, the last one that claimed C/C++ worked out so well for me.

                                                    1. 1

                                                      For your first concern about the long term accessibility of the code, you’ve already pointed out the solution: a defined open format.

                                                      Regarding IDEs: I’m not actually talking about IDEs; I’m talking about an editor that works with something other than text. Debugging, running the code, profiling etc. are different concerns and they can be handled separately (although again, the input would be something other than text). I suppose it would have some aspects of an IDE because you’d be manipulating the whole code base rather than individual files.

                                                      Regarding the language maven post: I enjoyed reading it a few years ago (and in practice, I’ve always ended up in the language camp as an early adopter). It was written 14 years ago, and I think the situation is different now. People have come to expect tooling, and it’s much easier to provide it in the form of editor/IDE plugins. Since language creators already have to do a huge amount of work to make programs in their languages executable in some form, I don’t think it would be an obstacle if the price of admission also included dealing with the storage format and representation.

                                                      To your point about lack of implementations: don’t Smalltalk and derivatives such as Pharo qualify? I don’t know if they store ASTs but at least they don’t store text. I think they demonstrate that it’s at least technically possible to get away from text, so the lack of mainstream adoption might be caused by non-technical reasons like being in a local maximum in terms of tools.

                                                      The problem, as always, is that there is such a huge number of tools already built around text that it’s very difficult to move to something else, even if the post-transition state of affairs would be much better.

                                                      1. 1

                                                        Text editors are language agnostic.

                                                        I’m trying to conceive of an “editor” that works with something other than text. Say an AST. Okay, but in Pascal, you have to declare variables at the top of each scope; you can declare variables anywhere in C++. In Lua, you can just use a variable, no declaration required. LISP, Lua and JavaScript allow anonymous functions; only the latest versions of C++ and Java allow anonymous functions, but they’re restricted in that you can’t create closures, since C++ and Java have no concept of closures. C++ has exceptions, Java has two types of exceptions, C doesn’t; Lua kind of has exceptions but not really. An “AST editor” would have to somehow know what is and isn’t allowed per language, so that if I’m editing C++ and write an anonymous function, it stops me referencing variables outside the scope of said function, but allows it for Lua.

                                                        Okay, so we step away from AST—what other format do you see as being better than text?

                                                        1. 1

                                                          I don’t think it could be language agnostic - it would defeat the purpose as it wouldn’t be any more powerful than existing editors. However, I think it could offer largely the same UI, for similar languages at least.

                                                          1. 1

                                                            And that is my problem with it. As stated, I use C, C++ [1], Lua, Make and a bit of Perl. That’s at least what? Three different “editors” (C/C++, Lua/Perl (maybe), Make). No thank you, I’ll stick with a tool that can work with any language.

                                                            [1] Sparingly and where we have no choice; no one on my team actually enjoys it.

                                                          2. 1

                                                            Personally, I’m not saying you should need to give up your editor of choice. Text is a good (enough for now) UI for coding. But it’s a terrible format to build tools on. If the current state of the code lived in some sort of event-based graph database, for example, your changes could trigger not only your incremental compiler but also source analysis (only on what’s new); it could maintain a semantic changelog for version control and trigger code generation (again, only what’s new).

                                                            There’s a million things that are currently “too hard” which would cease to be too hard if we had a live model of the code as various graphs (not just the ast, but call graphs, inheritance graphs, you-name-it) that we could subscribe to, or even write purely-functional consumers that are triggered only on changes.
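
                                                            As a small taste, once you have a parsed model instead of raw text, one of those graphs falls out almost for free. A rough Python sketch that only catches direct foo() calls (no methods, no nested defs):

```python
import ast

def call_graph(source):
    """Map each top-level function to the set of names it calls directly.
    A rough sketch: only plain foo() calls, no methods or attributes."""
    graph = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
    return graph

src = """
def helper():
    return 1

def main():
    print(helper())
"""
print(call_graph(src))
```

                                                            Here main ends up mapping to the set containing helper and print, and helper to the empty set; a live model could keep such a graph updated incrementally instead of re-parsing files.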

                                                  2. 4

                                                    Inertia, arrogance, worse-is-better; working systems being trapped behind closed doors at big companies; hackers taking their language / editor / process on as part of their identity that needs to be defended with religious zeal; the complete destruction of dev tools as a viable business model; methodologies-of-the-week… The causes are numerous and varied, and the result is that software dev is being hamstrung and we’re all wasting countless hours and dollars doing things computers should be doing for us.

                                                    1. 2

                                                      I think that part of the issue is that we haven’t seen good structured editor support outside of Haskell and some Lisps.

                                                      Having a principled foundation for structured editor + a critical mass by having it work for a language like Javascript/Ruby, would go a long way to making this concept more mainstream. After which we could say “provide a grammar for favorite language X and get structured editor support!”. This then becomes “everything is structured at all levels!”

                                                      1. 3

                                                        I think it’s possible that this only works for a subset of languages.

                                                        Structured editing is good in that it operates at a higher level than characters, but ultimately it’s still a text editing tool, isn’t it? For example, I think it should be trivial to pull up a list of (editable) definitions for all the functions in a project that call a given function, or to sort function and type definitions in different ways, or to substitute function calls in a function with the bodies of those functions to a given depth (as opposed to switching between different views to see what those functions do). I don’t think structured editing can help with tasks like that.
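                                                        For concreteness, here’s what the “list all callers” query looks like once you have a real tree to work over rather than text. This uses Python’s stdlib `ast` module purely as an illustration; the source and function names are made up:

```python
import ast

source = '''
def helper():
    pass

def a():
    helper()

def b():
    pass
'''

def callers_of(tree, target):
    """Return names of functions whose bodies call `target`."""
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                if (isinstance(sub, ast.Call)
                        and isinstance(sub.func, ast.Name)
                        and sub.func.id == target):
                    found.append(node.name)
                    break
    return found

tree = ast.parse(source)
print(callers_of(tree, "helper"))  # ['a']
```

Making the results *editable* in place, as suggested above, is the part that needs the richer graph model rather than a one-shot query.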

                                                        There are also ideas like Luna; have you seen it? I’m not convinced by the visual representation (it’s useful in some situations, but I’m not sure it’s generally effective), but the interesting thing is that they provide both a textual and a visual representation of the code.

                                                    2. 1

                                                      Python has a standard library module for parsing Python code into an AST and modifying the AST, but I don’t know of any Python tools that actually use it. I’m sure some of them do, though.
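                                                      The module in question is `ast`. A minimal sketch of parse-transform-unparse (the rename is a made-up example; `ast.unparse` needs Python 3.9+):

```python
import ast

class RenameFunc(ast.NodeTransformer):
    """Rewrite every reference to `old_name` into `new_name`."""
    def __init__(self, old_name, new_name):
        self.old_name, self.new_name = old_name, new_name

    def visit_Name(self, node):
        if node.id == self.old_name:
            node.id = self.new_name
        return node

# Parse source into an AST, transform it, and dump it back out as source.
tree = ast.parse("result = frobnicate(42)")
tree = RenameFunc("frobnicate", "transmogrify").visit(tree)
print(ast.unparse(tree))  # result = transmogrify(42)
```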

                                                    3. 1

                                                      Smalltalk. The word you’re looking for is Smalltalk. ;)

                                                      1. 2

                                                        Lisp, in fact. Smalltalk lives in an image, Lisp lives in the real world. ;)

                                                        Besides, Lisp already is the AST. Smalltalk has too much sugar, which is a pain in the AST.

                                                        1. 1

                                                          Possibly, but I’m only talking about a single aspect of it: being able to analyse and manipulate the code in more powerful ways than afforded by plain text. I think that’s equally possible for FP languages.

                                                      2. 1

                                                        Ultimately I think this is the only tenable solution. I feel I must be in the minority in having an extreme dislike of columnar-style code, and what I call “whitespace cliffs”, where a column dictates a sudden huge increase in whitespace. But I realize how much of it comes down to personal aesthetics, so I wish we could all just coexist :)

                                                        1. 1

                                                          Yeah, I’ve been messing around with similar ideas, see https://nick.zoic.org/art/waste-web-abstract-syntax-tree-editor/ although it’s only vapourware so far because things got busy …

                                                          1. 1

                                                            Many editors already do this to some extent. They just render 4-space tabs as whatever the user asks for. Everything after the indent, though, is assumed to be spaced appropriately (which seems right, anyway?)

                                                            1. 1

                                                              You can’t convert to elastic-tabstop style from that, and without heavy language-grammar knowledge you can’t do this for 4-space “tabs” generally.

                                                              Every editor ever supports this for traditional indent style, though: http://intellindent.info/seriously/

                                                              1. 1

                                                                To be clear, you can absolutely render a file that doesn’t have elastic tabstops as if it did. The way a file is rendered has nothing to do with the actual text in the file.

                                                                It’s like you’re suggesting that you can’t render a file containing a ton of numbers as a 3D scene in a game engine. That would be just wrong.

                                                                Regardless, my point is specifically that this elastic tabstops thing is not necessary and hurts code readability more than it helps.

                                                                 The pedantry of distinguishing between tabs and tabstops is a silly thing as well. Context gives more than enough information to know which one is being talked about.

                                                                 It sounds like this concept is creating more problems than it solves, and is causing your editor to solve problems that exist only in the developer’s imagination. It’s not “KISS” at all; quite the opposite.

                                                            2. 1

                                                              Because presentation isn’t just a function of the AST. Indentation usually is, but alignment can be visually useful for all kinds of reasons.

                                                            1. 2

                                                              It’s in my category of things which are an interesting idea, but which should be a tablet app instead of a piece of hardware.

                                                              1. 1

                                                                I believe one of their main markets is use by HS students and for major exams such as the SAT or ACT.

                                                              1. 1

                                                                Yep, been doing this for years in LaTeX documents and now markdown (etc), it makes a big difference to the readability of diffs and after a while I found it felt natural to read in that form, a bit like reading dialog in a book.

                                                                1. 1

                                                                  Ugh, that Tesla screen. I got to see one in person; it looks even worse.

                                                                  No idea what it’s like when driving, they declined my offer to swap for my ‘96 Toyota Hilux despite the Hilux’s clearly superior dashboard ergonomics.

                                                                  1. 4

                                                                    I rarely ever care about progression of time and version, and the author doesn’t make a good case for why I should in the case of FreeBSD. It seems like a very fussy distinction.

                                                                    1. 1

                                                                      At first I thought I was going to miss svn “r1234” numbers, and considered putting a server-side update hook on a central server which automatically tagged pushes to ‘master’ with sequential revision numbers.

                                                                      I never ended up feeling the need, but given it’s just a few lines of bash perhaps it’d be worth trying and see if people use the sequential numbers or if they’re just a distraction.

                                                                    1. 6

                                                                      There’s an argument that the really good thing about docker is not in fact docker itself, but Dockerfile. By standardizing on a description for containers, you can now implement containers in whatever way you want, over zones / jails / namespaces or virtual machines, and they’ll work the same.
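                                                                      A hypothetical minimal example of such a description (the base image and app path are made up); nothing in it is specific to Docker’s implementation, so a runtime built on zones, jails, namespaces, or VMs could interpret the same steps:

```dockerfile
# Build up a filesystem from a base image, then declare what to run.
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends python3
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
```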

                                                                      1. 18

                                                                        Sounds like the C version of English As She Is Spoke

                                                                        1. 5

                                                                          My wife teaches English and will really appreciate this. Thanks!

                                                                          1. 3

                                                                            This type of naive translation is really common. A personal favourite

                                                                          1. 4

                                                                            In terms of desktop adblocking and tracking blocking solutions, I use uBlock Origin, Privacy Badger, and this hosts file.

                                                                            1. 4

                                                                              I like the technique in general but often have local stuff listening on various ports. I wish there was a well-known ‘/dev/null’ IP address which these could be routed to … a tiny daemon could then return a protocol-appropriate NAK immediately and log the attempt.
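                                                                              For TCP at least, the “tiny daemon” half of that wish is easy to sketch: accept each connection, log it, and close immediately, so clients fail fast instead of hanging. This is a hedged toy (real blocklist routing, and protocol-specific NAKs for things like SMTP, are left out):

```python
import socket
import threading

def reject_daemon(sock, log):
    """Accept, record, and immediately close every connection."""
    while True:
        try:
            conn, addr = sock.accept()
        except OSError:
            break  # listening socket closed; shut down
        log.append(addr)
        conn.close()  # immediate refusal instead of a hang or timeout

# Demo: bind to an ephemeral localhost port and point a client at it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(8)
attempts = []
threading.Thread(target=reject_daemon, args=(server, attempts), daemon=True).start()

client = socket.create_connection(server.getsockname())
assert client.recv(1) == b""  # peer closed the connection straight away
client.close()
server.close()
```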

                                                                              1. 1

                                                                                Rather than 127.0.0.1, could you just route to something on your LAN that doesn’t exist?

                                                                              2. 3

                                                                                I use uBlock Origin as well, works like a charm! Just have to disable font blocking on a few pages for them to be readable.

                                                                              1. 1

                                                                                It could be historically quite interesting I suppose. No sign of it under https://github.com/universityofadelaide yet though.

                                                                                1. 2

                                                                                  Yeah, I had a play with this a while ago and it works pretty well: in a field trial among ~20 mostly non-tech friends, only one noticed that there was no way to set a password.

                                                                                  1. 2

                                                                                    One of your comments is that cookie security wasn’t taken seriously. Could you include in the cookie a per-browser fingerprint as well as a key, and then use an HMAC to ensure the authenticity and integrity of that information?

                                                                                    If an attacker were to steal the cookie, the server could identify that even though the secret is valid, the browser has changed significantly enough (cookie theft in the positive case, a sufficiently serious upgrade in the negative case) and force a new login for the browser?
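                                                                                    A rough sketch of that suggestion (not a vetted design; the key and fingerprint strings are placeholders): the server binds a session id to a fingerprint and MACs the pair, so neither field can be forged or swapped, and a fingerprint mismatch forces a fresh login.

```python
import hmac
import hashlib

SERVER_KEY = b"keep-this-secret-server-side"  # hypothetical server-side key

def issue_cookie(session_id: str, fingerprint: str) -> str:
    payload = f"{session_id}|{fingerprint}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def check_cookie(cookie: str, current_fingerprint: str) -> bool:
    payload, _, tag = cookie.rpartition("|")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered with or forged
    _, _, fingerprint = payload.partition("|")
    return fingerprint == current_fingerprint  # mismatch -> force re-login

cookie = issue_cookie("session-123", "firefox-68-linux")
print(check_cookie(cookie, "firefox-68-linux"))   # True
print(check_cookie(cookie, "chrome-76-windows"))  # False: stolen, or upgraded
```

Note the HMAC gives authenticity and integrity, not secrecy; if the fingerprint itself is sensitive it would need encrypting as well.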

                                                                                    1. 1

                                                                                      I thought this way too until I started tracking our users’ browser fingerprints. Then I discovered that they changed way more than you’d expect; I never investigated why but saw that it was unreliable enough to not pursue.

                                                                                      1. 1

                                                                                        Yeah, I think the very fast browser update cycle these days might stuff this idea up.

                                                                                  1. 2

                                                                                    A lot of the “Unix way” stuff is kinda homoiconic, with the underlying data structure being lines of text in files. Sometimes, lines can be further divided by whitespace. There’s heaps of tools based around this, like ‘wc’, ‘awk’, ‘grep’ and ‘cpp’.

                                                                                     C is an odd case, because with readline it is quite easy to write C code which manipulates C code, albeit in a very limited way; hence the limitations of cpp’s macros. It seems like a shame that the most basic ‘file’ isn’t a more flexible data structure. But that’s a rant for another day :-)