If you can’t provide an alternative then I can’t take this seriously, and - until then - I hope that nobody else can either.
What do you need Cloudflare for? I’ve never seen a single use case for it (other than their DNS service, which is well done compared to many other providers). People who claim “DDoS protection” seem to have either picked a crappy web host, don’t know how to use caching, or are running Apache.
An alternative to surrendering your visitors to surveillance capitalism and forcing them to train Google’s AI that will enslave them?
I guess there are some use cases for services like CF, but most of the time it is just incompetency, forced on developers by their managers, or a fascination with bloat. A page without a spinner is just not the modern web!
Does configuring rate limiting and doing load testing before production deployment not count as an alternative? It’s not like we weren’t running websites and dealing with the problems Cloudflare tries to address before that service existed.
No. Cloudflare isn’t a “rate limiting” service. Your load testing isn’t going to compare to real traffic. It’s a nice thing to do, but should never be considered representative of real traffic.
A lot of the problems that Cloudflare addresses have become worse for multiple reasons.
Firstly, this is simply later in time. Technology has improved, which means attacks have become stronger.
Secondly, services like Cloudflare didn’t exist back then, and attackers now have to find ways past them. This means that doing it yourself is substantially harder now, since you probably can’t compete with them in terms of DDoS protection. I doubt you ever saw anyone performing the largest DDoS in the world by hacking into people’s IoT cameras back then, either - but comparing reality 10 years ago to now isn’t the best approach to solving these problems.
How are you going to implement DDoS protection? Rate limiting isn’t doing that for you, it’s just rejecting requests that are excessive. That’s what Cloudflare is trying to do here.
It’s not trying to rate limit, that makes little-to-no sense.
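To make the distinction concrete, application-level rate limiting is just rejecting excess requests, as in this minimal token-bucket sketch (illustrative only - the class and parameters are made up for this comment, and this is not a description of any Cloudflare mechanism):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills at `rate` tokens/sec, up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # Rejected - but the request still reached this server.

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]
print(results.count(True))  # roughly the burst capacity; the rest are rejected
```

Note that even a rejected request has already consumed bandwidth and a socket on your end, which is exactly why rate limiting alone doesn’t stop a volumetric DDoS.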
EDIT: Also, if you’re the one that marked my response as “incorrect” then I don’t think that you know what “incorrect” means. It is absolutely correct to say not to consider a non-alternative as an alternative. Downvotes shouldn’t be an “I don’t agree” button.
This doesn’t feel like a review so much as it feels like Intel bashing and AMD advertising. I’d like to see more data from actually using the device.
Everything in this article could be assumed from just reading the specs, I think.
when clicking through to the pages after the first, there are benchmark results: https://www.phoronix.com/scan.php?page=article&item=amd-linux-2990wx&num=4
Ooooh, that makes more sense… Not sure if that doesn’t show up on mobile or if maybe I thought it was links to comments?
As someone writing an article, I guess it’s difficult to know where to stop adding detail - that’s probably a hard line to draw. Either way, I suspect there’s an easy way to tell where that line is, and I just don’t 100% know how to explain it objectively.
I like to think of scrum as agile training wheels. Calling what this article suggests “post agile” is avoiding the concepts behind why agile exists and what it actually is. Agile is not scrum, but scrum is agile. In the case that this seems confusing, let me clarify.
Yes, scrum is great sometimes - but it should never be the ideal solution. With teams that don’t often change, scrum tends to require a lot of bottle-necking on progress, large in-group meetings, and extraneous communication. This is because scrum introduces a large amount of process which essentially only exists to help everyone communicate.
When people get to a point where they can communicate effectively with the right people without needing daily stand-ups, for instance, stand-ups become a bottleneck. Many developers tend to avoid talking to people until the next stand-up, out of fear of bothering the other person or sometimes plain laziness. With daily stand-ups, this can block people for half a day.
This is made even worse by teams that do stand-ups in the morning, because developers are often doing their stand-ups before they have a full handle on what they were working on, and running into, the day before. Stand-ups should be mid-afternoon, post-lunch, etc. - at a good breaking point where the majority of the team is going to be pulled away from producing anyway.
That’s a long example, but it demonstrates how just one of the practices common in scrum can block people.
I believe that we should always be working toward a lean agile model, and scrum should be used as a way to help people gel when it isn’t happening. Once people are communicating effectively, get rid of it.
A lean agile model never precludes or punishes asynchronous work, and it is completely compatible with everything suggested in this article. With all that background out of the way, I suggest considering that the bigger problem is simply that people think scrum means agile - when agile is a higher-level methodology.
Agile is about being able to change direction easily and quickly with minimal velocity impact. It’s not about sitting in a room talking about how great last week went or standing in a group reiterating what you’re doing even though everyone already knows. That’s just a scrum thing, and if everyone already knows you shouldn’t be doing it.
Let us not invent a new name for something that already exists. Instead, let’s embrace what agile is and try to understand where our practices and methodologies have caused us to conflate different ideas and processes.
This is made even worse by teams that do stand-ups in the morning, because developers are often doing their stand-ups before they have a full handle on what they were working on, and running into, the day before
Sounds like every stand-up I’ve ever been in. I saw an interesting comment about the trajectory of stand-ups, where people eventually end up spending half their time talking up how busy they were yesterday. Seems like moving stand-ups to after-lunch would discourage that, too.
Don’t sacrifice fonts for being “minimal”, though. That font and text size are not great for reading, if that’s your primary goal.
Not really. Browser zoom is so that people who need to zoom can do so on the page. However, the page should still be designed with your average user in mind. There’s no reason to force the average user to zoom unnecessarily; that is a usability concern. My other usability concern with this suggestion is that browser text zoom makes the usability of your documents pretty poor on mobile.
Whatever happened to the end user being in control of fonts and colors, anyway? If minimal sites became common again, I’d like to see client-side styling become much more prominent (say, a font & size dropdown right next to the URL bar on every major browser, along with a background & foreground color selector).
Leaving the web designer in charge of the theming is a boon for branding, but the end user doesn’t care about supporting some company’s branding (which in some cases - like the prevalence of blue-heavy designs - does real harm), and it’s ultimately they who use the thing, so they should have full control. Yet overriding default colors and fonts breaks most websites (not just webapps - which we would expect to be fragile against that - but web SITES).
I’m absolutely not suggesting that the user shouldn’t be in control of the fonts and colors - which they completely are even in many modern web documents - but only to suggest that the defaults provided for your document should be reasonable for your average user.
The way to think about font size is in terms of a rough average number of characters per line, because a comfortable line length helps prevent eye strain. My assumption here is that most users are human and using their eyes to read the content. Other cases exist, so the defaults must not be assumed to be the only case - but they should be reasonable for the average user.
they completely are even in many modern web documents
As someone who, for years, set his default font style to monospace & color to orange-on-black for usability reasons & enabled font override on as many sites as possible, this does not track with my experience at all. Even the main google search page was not usable when font colors were overridden – most buttons became invisible.
It was probably a mistake to allow web designers to control the fonts, colors, and positions of elements in the first place. Giving that control to them has only provided shallow benefits & an invitation to implement really bad ideas, while nearly every time they’re taken advantage of, usability & accessibility suffers.
We can’t possibly be talking about the same facility.
Every major browser has, buried in the settings, control over default typeface, size, and color, along with a checkbox indicating that the recommendations by the website itself should be overridden. This configuration (for the past ~10 years, on both chrome and firefox) will not fix hard-coded CSS alignment (which is tragically common) and will also not fix the use of transparent images on top of faux-buttons.
The result: if you increase font size and use a dark background color with a light foreground color, text overruns boxes and sits on top of other text while faux-buttons become totally invisible. This is a behavior that happens in all major browsers (because it’s not a browser behavior but a result of idiomatic use of CSS being fragile), and it’s a huge accessibility issue for people who have poor vision but do not use a screen reader.
It’s trivial to reproduce: go into your browser settings & invert the colors, then visit gmail. This problem is, essentially, the reason extensions like deluminate & features like default zoom exist: normal font & color controls are borderline useless because most existing CSS breaks in response to these controls.
The alignment thing is tragically common because CSS didn’t have any other way to perform alignment until recently.
There is the facility that you mentioned, users are allowed to install extensions, and you can disable sites from using CSS. If you use the first one then disabling CSS is probably reasonable.
I’m sure there are other ways to solve this. Either way, you are describing problems with browsers and not problems with the way the website is designed or the web itself?
Still sounds like an issue with your browser. links renders Google fine, for instance.
Links renders Google fine because links ignores all CSS color information (meaning that background & foreground colors cannot be specified piecemeal through secondary methods).
And yes, I consider browsers, web standards, and web developers equally at fault for the state of the world in this respect. These idioms (justified by browser features, made possible by web standards, and used by web developers) are user-hostile.
How the web designer would like something to look is completely irrelevant. A site that doesn’t work if you turn off CSS is broken. But, people who actually modify how sites look in any way are rare enough and quiet enough that it’s possible for web developers to go through life not considering whether or not their sites still work when they’ve been re-styled. That is not a ‘browser problem’ – it’s a culture problem.
Honestly, I think that this is a bug and the Chrome team is just being lazy about it.
If the type wasn’t provided, I’d expect this behavior, but Chrome shouldn’t be allowing you to embed the wrong type w/ the same URL that the right type was hosted on before. That’s the issue here in my opinion.
The chromium devs have commented on their bug now, as the spec bug has been raised they’re probably going to look at fixing it once the spec has been finalised.
Firefox and Chrome handle it differently; Safari seems to be the only one (from what I’ve tested - I didn’t test Edge or Opera!) that is very opinionated.
There could surely be a way to perform an amplification attack on a system using Cloudflare over Tor.
I think it’s reasonable to presume Cloudflare’s DDoS protection doesn’t rely on tracking a single user on a shared IP address.
Yeah, but you start getting this page when you start getting DDoS’d or it thinks you’re a bot. What I’m saying is, I’m not sure whether the feature for disabling this page will skip the DDoS barrier as well.
For those wanting the rationale, this is in the same Pony article:
“From a practical perspective, having division as a partial function is awful. You end up with code littered with trys attempting to deal with the possibility of division by zero. Even if you had asserted that your denominator was not zero, you’d still need to protect against divide by zero because, at this time, the compiler can’t detect that value dependent typing. So, as of right now (ponyc v0.2), divide by zero in Pony does not result in error but rather 0.”
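To illustrate the trade-off in Python terms (Python’s division is partial, i.e. it raises on a zero denominator; Pony v0.2 makes it total by returning 0 - the function names here are just for illustration):

```python
def partial_div(a, b):
    # Division as a partial function: the caller must guard or handle the error.
    return a // b  # raises ZeroDivisionError when b == 0

def total_div(a, b):
    # Pony-style total division: defined for every input, returning 0 when b == 0.
    return 0 if b == 0 else a // b

try:
    partial_div(1, 0)
except ZeroDivisionError:
    print("caller had to handle it")  # the "code littered with trys" problem

print(total_div(1, 0))  # 0 - no error, but a possibly surprising result
```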
I’m sure many of us would find it interesting. I have a total mental block on divide by zero, given it’s always a bug in my field. This thread is refreshingly different. :)
This is very true. The fact that division by zero causes us to write so many guards can cause major issues.
I wonder, though - wouldn’t explicit errors be better than the implicit, unexpected results this unusual behavior may cause?
I guess if you write a test before writing code, it should be possible to spot the error either way?
It would be good to push this to the type system exactly so that we don’t have to remember to test for it.
Totally, but I am saying that there are specific cases where this may still throw people off and cause bugs - even when the typing is as expected here.
I am interested in personal opinions about CHICKEN vs Racket. I want to get into one of them, but I am not sure which one. I am looking at them from the point of view of someone who likes developing web apps. Can anyone share some of their experiences with me?
Caveat: I’m a CHICKEN user.
Racket is a kitchen-sink/batteries-included kind of Scheme that compiles to bytecode that runs in a virtual machine. It’s got the largest Scheme community and ecosystem by far. It seems to excel in GUI in particular. It also has its own varieties like Typed Racket and Lazy Racket, which are quite neat. (You could argue that Racket is a separate dialect of Scheme at this point, as it doesn’t exactly follow the RnRS.)
CHICKEN is a much more minimal Scheme dialect that compiles to C. It’s fast and portable, and the compiled applications are very easy to deploy elsewhere, given you bundle libchicken.so with the executable (or statically link it). It has a very clean C FFI. It implements most of R5RS with growing R7RS support.
Honestly, if you like developing web apps, I’d personally recommend Racket since it has a sizable and mature codebase for web dev, mostly using a sublanguage called Insta.
The book How to Design Programs is written by Racket authors and uses Racket throughout.
“A practical and portable Scheme system”
From the website
(The post and the linked email didn’t give me any idea what Chicken was.)
The way I remember it is it’s the Scheme that compiles to C for speed and portability. silentbicycle posted this interview with the author. aminb added someone’s blog posts on interesting work.
I am a maths researcher at the University of Cologne and addressed this in a thesis I wrote in 2016. See chapter 3, especially the first part of section 3.1.
Dividing by zero is totally well defined for the projectively extended real numbers (only one unsigned infinity, inf), but the argument against the usual extended real numbers (+-inf) working is not based on field theory; it is of an infinitesimal nature, given that you can approach a zero-division both from below and from above and get either +inf or -inf equally likely.
Defining 1/0=0 not only breaks this infinitesimal form, it‘s also radically counterintuitive given how the values behave when you approach the division from small numbers, e.g. 1/10, 1/1, 1/0.1, 1/0.001…
lim x->0 1/x = 0 makes no sense and is wrong in terms of limits.
See the thesis where I proved a/0=inf to be well-defined for a!=0.
tl;dr: There‘s more to this than satisfying the field conditions. If you redefine division, this has consequences on higher levels - in this case most prominently in infinitesimal analysis.
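Formally, the two one-sided limits in the usual extended reals disagree:

```latex
\lim_{x \to 0^+} \frac{1}{x} = +\infty
\qquad\text{but}\qquad
\lim_{x \to 0^-} \frac{1}{x} = -\infty
```

whereas in the projectively extended reals both approaches end in the single unsigned point, so a/0 = inf is well defined for a != 0 - and 0 is on neither path.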
I used to be a maths researcher, and would just like to point out that some of the people who define division by zero to mean infinity do it because they’re more interested in the geometric properties of the spaces that functions are defined on than the functions themselves. This is the reason for the Riemann sphere in complex analysis, where geometers really like compact spaces more than noncompact ones, so they’re fine with throwing away the field property of the complex numbers. The moment any of them need to compute things, however, they pick local coordinates where division by zero doesn’t happen and use the normal tools of analysis.
Thanks for laying this out and pointing out the issue with +/- Inf
Could you summarize here why +Inf is a good choice? As a practical man, I approach this from the limit standpoint - usually when I end up in a situation like this, it’s because the correct answer is +/- Inf, and which one depends on the context. Here, context means which side of zero the history of my denominator was on.
The issue is that the function 1/x has a discontinuity at 0. I was taught that this means 1/0 is “undefined”. IMO in code this means throw an exception.
In practical terms I end up adding a tiny number to the denominator (e.g. 1e-10) and continuing, but that implicitly means I’m biased to the positive side of line.
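A quick Python illustration of the bias that epsilon trick introduces (the eps value and sample x are arbitrary):

```python
eps = 1e-10  # the "tiny number" added to the denominator

x = -1e-12  # a denominator approaching zero from the negative side
# Adding eps flips the sign of the denominator, so the result is a large
# *positive* number even though the left-hand limit is -inf.
print(1 / (x + eps))
# Subtracting eps instead biases toward the negative side.
print(1 / (x - eps))
```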
I think Pony’s approach is flat out wrong.
It is not +inf, but inf. For the projectively extended real numbers, we only extend the set with one infinite element which has no sign. Take a look at page 18 of the thesis which includes an illustration of this. Rather than having a number line we have a number circle.
Dividing by zero, the direction from which we approach the denominator does not matter - even if we oscillate around zero - given it all ends up in one single point of infinity. We really don’t limit ourselves with that, as we can express a limit to +inf or -inf in the traditional real number extension by the direction from which we approach inf in the projectively extended real numbers (see remark 3.5 on page 19).
1/x is discontinuous at 0, this is true, but we can always look at limits. :) I am also a practical man and hope this relatively formal way I used to describe it did not distract from the relatively simple idea behind this.
Pony’s approach is reasonable within field theory, but it’s not really useful when almost the entire analytical building on top of it collapses on your head. NaN was invented for a reason and given the IEEE floating-point numbers use the traditional +-inf extension, they should just return the indeterminate form on division by zero in Pony.
NaN only exists for floating point, not integers. If you want to use NaN or something like it for integers, you will need to box all integer numbers and take a large performance hit.
Just curious, but why isn’t 1/0=1? Would 1/0=Inf not require that infinity exists between 0 and 1?
I personally use Bitwarden, which I would say fulfills all 3 points that you want, though I never tinkered with the SSH sync (although its privacy section assures that it’s part of how it syncs). If you want a simpler and lower-level alternative, pushcx’s advice of checking out pass probably works best for you.
Strong +1. Been using Bitwarden for 1.5 years now and it’s everything I hoped it would be.
Overall the experience has only improved. I’m sure it has a bright future.
Thanks for introducing me to this; it’s about what I am looking for. Time to ditch manually synced (and manually merged, after inevitable forks) KeePass.
Can also recommend Bitwarden, I have not tried the desktop application, but the mobile version on Android and browser extensions have worked without any issues for me so far across different browsers and operating systems.
Edit: Apparently, I posted the same comment twice, my mistake.
It is indeed, but there’s a caveat with self-hosting it that irks me. Though apparently there are ways to work around it, as mentioned.
A few people, including me, have been able to code a client-compatible self-hosted version as well; it gives you a lot of insight into, and trust in, it.
https://github.com/vvondra/bitwarden-serverless https://github.com/jcs/bitwarden-ruby
A realization I recently had:
Why don’t we abstract away all display affordances from a piece of code’s position in a file? That is, the editor reads the file, parses its AST, and displays it according to the programmer’s preference (e.g., elastic tabstops, elm-like comma-leading lists, newline/no-newline before opening braces, etc). And prior to save, the editor simply runs it through an uncustomized prettier first.
There are a million and one ways to view XML data without actually reading/writing pure XML. Why not do that with code as well?
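As a rough sketch of that round-trip, Python’s ast module can stand in for the parser and ast.unparse (Python 3.9+) for the uncustomized pretty-printer; a real editor would render the tree per the reader’s display preferences instead:

```python
import ast

# Two stylistic variants of the same code...
source_a = "def f(x):  return {'a':1,'b':2}"
source_b = "def f(x):\n    return {\n        'a': 1,\n        'b': 2,\n    }"

# ...parse to the same structure, modulo formatting.
tree_a, tree_b = ast.parse(source_a), ast.parse(source_b)
assert ast.dump(tree_a) == ast.dump(tree_b)

# On "save", emit one canonical rendering regardless of how it was displayed.
print(ast.unparse(tree_a))
```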
This idea has been floating around the interwebz for a long time. I recall it being stated almost verbatim on Reddit, HN, and probably on /.
And once you take it a step further, it’s clear that it shouldn’t be in a text file in the first place. Code just isn’t text. If you store it as a tree or a graph in some sort of database, it becomes possible to interact with it in much more powerful ways (including displaying it any way you like). We’ve been hobbled by equating display representation with storage format.
This talk touches on this issue, along with some related ones and HCI in general: Bret Victor: The Future of Programming
God, I have been trying to recall the name of this talk for ages! Thank you so much, it is a great recommendation
Text is great when (not if) your more complicated tools fail or do something you can’t tolerate and you need to use tools which don’t Respect The Intent of designers who, for whatever reason, don’t respect your intent or workflow. Sometimes, solving a problem means working around a breakage, whether or not that breakage is intentional on someone else’s part.
Besides, we just (like, last fifteen or so years) got text to the point where it’s largely compatible. Would be a shame to throw that away in favor of some new AST-database-thing which only exists on a few platforms.
I’m not sure I get your point about intent. Isn’t the same already true of, say, compilers? There are compiler bugs that we have to work around, there are programs that seem logical to us but the compiler won’t accept, and so on. Still, everybody seems to be mostly happy to file a compiler bug or a feature request, and live with a workaround for the present. Seems like it works well enough in practice.
I understand your concern about introducing a new format but it sounds like a case of worse-is-better. Sure, we get a lot of convenience from the ubiquity of text, but it would nevertheless be sad if we were stuck with it for the next two centuries.
With compilers, there are multiple of them for any given language, if the language is important enough, and you can feed the same source into all of them, assuming that source is text.
I’ve never seen anyone casually swap out the compiler for production code. Also, for the longest time, if you wrote C++ for Windows, you pretty much had to use the Microsoft compiler. I’m sure that there are many embedded platforms with a single compiler.
If there’s a bug in the compiler, in most cases you work around it, then patiently wait for a fix from the vendor.
So that’s hardly a valid counterpoint.
Re: swapping out compiler for production code: most if not all cross-platform C++ libraries can be compiled on at least llvm, gcc and msvc.
Yes, I’m aware of that, but what does it have to do with anything I said?
EDIT: Hey, I went to Canterbury :)
“I’ve never seen anyone casually swap out the compiler for production code” sounded like you were saying people didn’t tend to compile the same production code on multiple compilers, which of course anyone that compiles on windows and non-windows does. Sorry if I misinterpreted your comment!
My first comment is in response to another Kiwi. Small world. Pretty cool.
This, this, a thousand times this. Text is a good user interface for code (for now). But it’s a terrible storage and interchange format. Every tool needs its own parser, and each one is slightly different - to say nothing of the amount of CPU and programmer time we waste going from text<->AST<->text.
Yeah, it’s obviously wasteful and limiting. Why do you think we are still stuck with text? Is it just sheer inertia and incrementalism, or does text really offer advantages that are challenging to recreate with other formats?
The text editor I use can handle any computer language you can throw at it. It doesn’t matter if it’s BASIC, C, BCPL, C++, SQL, Prolog, Fortran 77, Pascal, x86 Assembler, Forth, Lisp, JavaScript, Java, Lua, Make, Hope, Go, Swift, Objective-C, Rexx, Ruby, XSLT, HTML, Perl, TCL, Clojure, 6502 Assembler, 68000 Assembler, COBOL, Coffee, Erlang, Haskell, Ocaml, ML, 6809 Assembler, PostScript, Scala, Brainfuck, or even Whitespace. [1]
Meanwhile, the last time I tried an IDE (last year I think) it crashed hard on a simple C program I attempted to load into it. It was valid C code [2]. That just reinforced my notion that we aren’t anywhere close to getting away from text.
[1] APL is an issue, but only because I can’t type the character set on my keyboard.
[2] But NOT C++, which of course, everybody uses, right?
To your point about text editors working with any language, I think this is like arguing that the only tool required by a carpenter is a single large screwdriver: you can use it as a hammer, as a chisel, as a knife (if sharpened), as a wedge, as a nail puller, and so on. Just apply sufficient effort and ingenuity! Does that sound like an optimal solution?
My preference is for powerful specialised tools rather than a single thing that can be kind of sort of applied to a task.
Or, to approach from the opposite direction, would you say that a CAD application or Blender are bad tools because they only work with a limited number of formats? If only they also allowed you to edit JPEGs and PDFs, they would be so much better!
To your point about IDEs: I think that might even support my argument. Parsing of freeform text is apparently sufficiently hard that we’re still getting issues like the one you saw.
I use other tools besides the text editor - version control, compilers, linkers, debuggers, and a whole litany of Unix tools (grep, sed, awk, sort, etc.). The thing I want to point out is that as long as the source code is in ASCII (or UTF-8), I can edit it. I can study it. I might not be able to compile it (because I lack the INRAC compiler), but I can still view the code. How does one “view” Smalltalk code when one doesn’t have Smalltalk? Or Visual Basic? Last I heard, Microsoft wasn’t giving out the format for Visual Basic programs (and good luck even finding the format for VB from the late 90s).
The other issue I have with IDEs (and I will come out and say I have a bias against the things because I’ve never had one that worked for me for any length of time without crashing, and I’ve tried quite a few over 30 years) is that you have one IDE for C++, and one for Java, and one for Pascal, and one for Assembly [1] and one for Lua and one for Python and man … that’s just too many damn environments to deal with [2]. Maybe there are IDEs now that can work with more than one language [3] but again, I’ve yet to find one that works.
I have nothing against specialized tools like AutoCAD or Blender or Photoshop or even Deluxe Paint, as long as there is a way to extract the data when the tool (or the company) is no longer around. Photoshop and Deluxe Paint work with defined formats that other tools can understand. I think Blender works with several formats, but I am not sure about AutoCAD (never having used it).
So, why hasn’t anyone stored and manipulated ASTs? I keep hearing cries that we should do it, and yet no one has done it … I wonder if it’s harder than you imagine …
Edited to add: Also, I’m a language maven, not a tool maven. It sounds like you are a tool maven. That colors our perspectives.
[1] Yes, I’ve come across several of those. Never understood the appeal …
[2] For work, I have to deal with C, C++, Lua, Make and Perl.
[3] Yeah, the last one that claimed C/C++ worked out so well for me.
For your first concern about the long term accessibility of the code, you’ve already pointed out the solution: a defined open format.
Regarding IDEs: I’m not actually talking about IDEs; I’m talking about an editor that works with something other than text. Debugging, running the code, profiling etc. are different concerns and they can be handled separately (although again, the input would be something other than text). I suppose it would have some aspects of an IDE because you’d be manipulating the whole code base rather than individual files.
Regarding the language maven post: I enjoyed reading it a few years ago (and in practice, I’ve always ended up in the language camp as an early adopter). It was written 14 years ago, and I think the situation is different now. People have come to expect tooling, and it’s much easier to provide it in the form of editor/IDE plugins. Since language creators already have to do a huge amount of work to make programs in their languages executable in some form, I don’t think it would be an obstacle if the price of admission also included dealing with the storage format and representation.
To your point about lack of implementations: don’t Smalltalk and derivatives such as Pharo qualify? I don’t know if they store ASTs but at least they don’t store text. I think they demonstrate that it’s at least technically possible to get away from text, so the lack of mainstream adoption might be caused by non-technical reasons like being in a local maximum in terms of tools.
The problem, as always, is that there is such a huge number of tools already built around text that it’s very difficult to move to something else, even if the post-transition state of affairs would be much better.
Text editors are language agnostic.
I’m trying to conceive of an “editor” that works with something other than text. Say an AST. Okay, but in Pascal, you have to declare variables at the top of each scope; you can declare variables anywhere in C++. In Lua, you can just use a variable, no declaration required. LISP, Lua and JavaScript allow anonymous functions; only the latest versions of C++ and Java allow anonymous functions, but they’re restricted in that you can’t create closures, since C++ and Java have no concept of closures. C++ has exceptions, Java has two types of exceptions, C doesn’t; Lua kind of has exceptions, but not really. An “AST editor” would have to somehow know what is and isn’t allowed per language, so that if I’m editing C++ and write an anonymous function, it ensures I don’t reference variables outside the scope of said function, but allows that for Lua.
Okay, so we step away from AST—what other format do you see as being better than text?
I don’t think it could be language agnostic - it would defeat the purpose as it wouldn’t be any more powerful than existing editors. However, I think it could offer largely the same UI, for similar languages at least.
And that is my problem with it. As stated, I use C, C++ [1], Lua, Make and a bit of Perl. That’s at least what? Three different “editors” (C/C++, Lua/Perl (maybe), Make). No thank you, I’ll stick with a tool that can work with any language.
[1] Sparingly and where we have no choice; no one on my team actually enjoys it.
Personally, I’m not saying you should need to give up your editor of choice. Text is a good (enough for now) UI for coding. But it’s a terrible format to build tools on. If the current state of the code lived in some sort of event-based graph database, for example, your changes could trigger not only your incremental compiler but also source analysis (only on what’s new); it could maintain a semantic changelog for version control and trigger code generation (again, only for what’s new).
There’s a million things that are currently “too hard” which would cease to be too hard if we had a live model of the code as various graphs (not just the ast, but call graphs, inheritance graphs, you-name-it) that we could subscribe to, or even write purely-functional consumers that are triggered only on changes.
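To make the subscription idea concrete, here is a toy sketch (purely hypothetical, not an existing tool): a code model that stores definitions as named nodes and notifies consumers, such as an incremental compiler or analyzer, only about the node that changed.

```python
from collections import defaultdict

class CodeModel:
    """Hypothetical sketch: code definitions stored as named nodes,
    with consumers (incremental compiler, analyzer, changelog) that
    are notified only about the node that changed."""
    def __init__(self):
        self.nodes = {}                      # (kind, name) -> body
        self.subscribers = defaultdict(list)

    def subscribe(self, kind, fn):
        self.subscribers[kind].append(fn)

    def update(self, kind, name, body):
        self.nodes[(kind, name)] = body
        for fn in self.subscribers[kind]:    # re-run consumers on this node only
            fn(name, body)

model = CodeModel()
reanalyzed = []
model.subscribe("function", lambda name, body: reanalyzed.append(name))
model.update("function", "main", "print('hi')")
print(reanalyzed)  # only 'main' was touched, so only 'main' is reanalyzed
```

A real system would of course need dependency edges between nodes so that a change can fan out to affected consumers, but the subscribe/notify core is this small.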
Inertia, arrogance, worse-is-better; Working systems being trapped behind closed doors at big companies; Hackers taking their language / editor / process on as part of their identity that needs to be defended with religious zeal; The complete destruction of dev tools as a viable business model; Methodologies-of-the-week… The causes are numerous and varied, and the result is that software dev is being hamstrung and we’re all wasting countless hours and dollars doing things computers should be doing for us.
I think that part of the issue is that we haven’t seen good structured editor support outside of Haskell and some Lisps.
Having a principled foundation for structured editor + a critical mass by having it work for a language like Javascript/Ruby, would go a long way to making this concept more mainstream. After which we could say “provide a grammar for favorite language X and get structured editor support!”. This then becomes “everything is structured at all levels!”
I think it’s possible that this only works for a subset of languages.
Structured editing is good in that it operates at a higher level than characters, but ultimately it’s still a text editing tool, isn’t it? For example, I think it should be trivial to pull up a list of (editable) definitions for all the functions in a project that call a given function, or to sort function and type definitions in different ways, or to substitute function calls in a function with the bodies of those functions to a given depth (as opposed to switching between different views to see what those functions do). I don’t think structured editing can help with tasks like that.
There are also ideas like Luna, have you seen it? I’m not convinced by the visual representation (it’s useful in some situations but I’m not sure it’s generally effective), but the interesting thing is they provide both a textual and a visual representation of the code.
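Incidentally, the earlier task of listing the definitions of all functions that call a given function is mechanically answerable from an AST. A toy Python sketch (my own illustration, handling only top-level defs and direct name calls):

```python
import ast

source = """
def helper(): pass
def a(): helper()
def b(): a(); helper()
"""
tree = ast.parse(source)

def callers_of(tree, target):
    """Return names of top-level functions whose bodies contain a direct
    call to `target`. (Toy sketch: ignores methods, aliasing, nesting.)"""
    found = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = {n.func.id for n in ast.walk(node)
                     if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
            if target in calls:
                found.append(node.name)
    return found

print(callers_of(tree, "helper"))  # ['a', 'b']
```

Making those results editable in place is the hard UI part, but the query side falls out of having the tree rather than the text.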
Python has a standard library module for parsing Python code into an AST and modifying the AST, but I don’t know of any Python tools that actually use it. I’m sure some of them do, though.
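For reference, a minimal example of that stdlib module in action: parse source, rewrite the tree with a NodeTransformer, and turn it back into source. The rename here is just an illustration, and `ast.unparse` needs Python 3.9+.

```python
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

class RenameAdd(ast.NodeTransformer):
    """Rename the function defined as 'add' to 'plus'."""
    def visit_FunctionDef(self, node):
        if node.name == "add":
            node.name = "plus"
        self.generic_visit(node)
        return node

new_tree = ast.fix_missing_locations(RenameAdd().visit(tree))
print(ast.unparse(new_tree))  # prints the rewritten function definition
```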
Lisp, in fact. Smalltalk lives in an image, Lisp lives in the real world. ;)
Besides, Lisp already is the AST. Smalltalk has too much sugar, which is a pain in the AST.
Possibly, but I’m only talking about a single aspect of it: being able to analyse and manipulate the code in more powerful ways than afforded by plain text. I think that’s equally possible for FP languages.
Ultimately I think this is the only tenable solution. I feel I must be in the minority in having an extreme dislike of columnar-style code, and what I call “white space cliffs” where a column dictates a sudden huge increase in whitespace. But I realize how much it comes down to personal aesthetics, so I wish we could all just coexist :)
Yeah, I’ve been messing around with similar ideas, see https://nick.zoic.org/art/waste-web-abstract-syntax-tree-editor/ although it’s only vapourware so far because things got busy …
Many editors already do this to some extent. They just render 4-space tabs as whatever the user asks for. Everything after the indent, though, is assumed to be spaced appropriately (which seems right, anyway?)
You can’t convert to elastic-tabstop style from that, and without heavy language-grammar knowledge you can’t do this for 4-space “tabs” generally.
Every editor ever supports this for traditional indent style, though: http://intellindent.info/seriously/
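For what it’s worth, the core elastic-tabstop layout rule is small enough to sketch. Here is a simplified Python rendering pass of my own (the real algorithm sizes each column independently per run of lines that actually have that column; this version shares widths across a whole block of consecutive tab-containing lines):

```python
def render_elastic(lines, pad=2):
    """Render tab-separated lines with elastic-style tabstops: within a
    block of consecutive lines containing tabs, each tab column is sized
    to its widest cell plus padding."""
    out, block = [], []

    def flush():
        if not block:
            return
        rows = [line.split("\t") for line in block]
        ncols = max(len(r) for r in rows) - 1     # cells that end in a tab
        widths = [0] * ncols
        for r in rows:
            for i, cell in enumerate(r[:-1]):
                widths[i] = max(widths[i], len(cell))
        for r in rows:
            rendered = "".join(cell.ljust(widths[i] + pad)
                               for i, cell in enumerate(r[:-1]))
            out.append(rendered + r[-1])
        block.clear()

    for line in lines:
        if "\t" in line:
            block.append(line)
        else:
            flush()
            out.append(line)
    flush()
    return out

for line in render_elastic(["name\tvalue", "verbosity\t3", "plain line"]):
    print(line)
```

The point is only that the rendering is a pure function of the tab-delimited text; the file itself stays untouched.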
To be clear, you can absolutely render a file that doesn’t have elastic tabstops as if it did. The way a file is rendered has nothing to do with the actual text in the file.
It’s like you’re suggesting that you can’t render a file containing a ton of numbers as a 3D scene in a game engine. That would be just wrong.
Regardless, my point is specifically that this elastic tabstops thing is not necessary and hurts code readability more than it helps.
The pedantry of distinguishing between tabs and tabstops is a silly thing as well. Context gives more than enough information to know which one is being talked about.
It sounds like this concept is creating more problems than it solves, and is causing your editor to solve problems that only exist in the developer’s imagination. It’s not “KISS” at all, quite the opposite.
Because presentation isn’t just a function of the AST. Indentation usually is, but alignment can be visually useful for all kinds of reasons.
“Using spaces to align columns is obviously a kludge”
I don’t think that this is true at all. You can easily tell editors to display indents differently than they are in the file. Furthermore, not using tabs after an indent itself solves the problem completely.
This seems like someone just REALLY doesn’t want to use spaces - which isn’t even an argument I’ve heard anyone make in nearly 10 years. I honestly figured everyone had figured this out by now, but I guess not?
Do y’all still see this argument a lot?
I have been at a few companies who have decided to do this while I was there. I have also joined a number of companies like this after the direction has already been taken. People need to realize what decisions they are making, and think about the future more often. This is almost always not the right call.
…Oh, damn! It looks like this is the “I Have NIHS and Want to Build My Own Framework With Less Community Support Manifesto”.
I highly recommend avoiding this story. It sounds so easy, but it is hard. Small things become big things. Big things become bigger things. So many things become SO MANY THINGS. Nobody outside of your company knows wtf your thing is, because there isn’t a community fostering it.
You probably won’t believe me now, but when someone can recreate your 2-year-old startup from scratch with higher quality code in 3 months because they were smart enough to choose the right tools while you were off in play land inventing your own, you’ll realize this. It’s a huge time sink that is easy to avoid when you’re only seeing the day-to-day and not focused on the month-to-month.
Since you gave NixOS a try in April @cmb put together a netboot installer and added notes on installing NixOS to our wiki. The process is better documented now, in part from your willingness to experiment with it earlier this year.
Seems fair I guess. They probably made thousands of easy ad dollars off Nintendo’s property, so it’s normal they have a problem with this.
However, is Nintendo actually making a profit off the original Zelda, for example? I mean, is there a way for me as a player to get to play the original Zelda without having to search for a second-hand NES and fishing for the original cartridge in flea markets? I get that it’s their intellectual property, but still, it’s not like they still sell those games.
The current philosophy of the law is that Nintendo has an eternal right to tax Zelda. It was never meant to go into the public domain, will never go into the public domain, and if legislators have funny ideas about this stuff then they’ll use their billions of previous culture tax revenue to bribe (er… “lobby”) them to have the right ideas again.
Anyone who gripes about this state of affairs is obviously a commie trying to steal from them.
In my understanding, in France and probably other countries, works (I’m not sure exactly what counts, but writing and music are included, for example; probably programs/video games too?) enter the public domain 70 years after the creator’s death.
How can this apply to a living company?
The original author(s) license rights to the work (indirectly via an employment contract, or directly via a specific one). The ‘death’ clause becomes really gnarly when the actual work of art is an aggregate of many copyright holders.
This becomes more complicated as the licensing gets split up into infinitely small pieces, like “time-limited distribution within country XYZ on the medium of floppy discs”. Such time-limit clauses are a probable cause when content, or whole games, suddenly disappears, typically sublicensed content like music.
This, in turn, gets even more complicated by the notion of ‘derivative’ work, such as fanart or those “HD remakes”, since even abstract nuances have to be considered. The stories about Sherlock Holmes are in the public domain, but certain aesthetics, like the deerstalker/pipe/… figure, are still(?) copyrighted. Defining ‘derivative’ work is complex in and of itself. For instance, Blizzard have successfully defended copyright over the linked and loaded process of the World of Warcraft client as a derivative work, in a case against certain cheat bots - and used similar shenanigans to take down open source / reverse-engineered StarCraft servers.
Then a few years pass and nobody knows who owns what or when or where, copyright trolls dive in and threaten extortion fees based on rights they don’t have. Copyright in its current form has nothing to do with the ‘artist’ and is complete, depressing, utter bullshit - it has turned into this bizarre form of mass hypnosis where everyone gets completely and thoroughly screwed.
These aspects, combined, are part of the reason why the “sanctioned ROM stores” that the Virtual Console and so on hold have very limited catalogs: the rightsholders are nowhere to be found and can’t be safely licensed from.
Yep, Nintendo do still sell these games, and it is possible for you to buy them. I bought one of these last week.
I just got a NES Classic and SNES Classic. They are pretty dope! I think that they are starting to care a lot more now that these are a thing :)
This does, however, have the unfortunate side effect of players not being able to play their favorites unless those favorites are among the ~60 games on these two classic editions. So, that’s sad. :(
Although I’m sure that many people have a desire to have an OS this small, I don’t think that approaching it from the point of file size itself is very practical. Picking a floppy as the mechanism that dictates the max file size seems a bit off, since I don’t think I’ve used a floppy in nearly 20 years. Do people still use them?
Yeah, we need new metrics. Maybe the cheapest x86 CPU, flash chip, and single stick of RAM. Maybe an energy-efficient, embedded board like the VIA Artigos I used to like. Perhaps something like the PC Engines boards that people are already using. I’m not sure, but it should be a more relevant minimum.
I think driving the metrics off of a 2GB SD card and a Raspberry Pi may be reasonable at this point as well. x86 isn’t driving everything as much as it used to :)
I would say the Raspberry Pi Zero since it’s the least expensive and therefore most widely adoptable, but I’m honestly not sure what the best metric would be for deciding on this. :)
Note: The Zero is 11 GBP (just under $15) and 1GHz/512MB, and I made this choice because I think that an OS should support people with incomes as low as possible. (https://thepihut.com/products/raspberry-pi-zero)
One of my early boxes was a 200+MHz PII with 64MB of RAM. That ran Windows 98 with video games, WinAMP, Visual Studio 6, and so on. Although Intel’s chips are highly-optimized, the spec difference indicates a solution for the Zero might be in that ballpark somewhere. At least something like MenuetOS. One list I found just showed a bunch of Linux remixes so far. They look nice but that tells me there’s opportunities for more efficiency, education-oriented, or other design choices.
Moving from Linux, though, could have upsides for Google. Android’s use of the technology, which is distributed by Oracle Corp., is at the center of a lengthy, bitter lawsuit between the two companies.
I am confused. I thought they were confusing Linux with Java, but the very next paragraph addresses the Java situation.
A previous version of this story was corrected to make clear Oracle link with Linux.
🤔
If I had to guess, the reporter writing the story couldn’t imagine them spending the resources to replace something in Android and have that thing not be what Oracle is suing them over.
Although this response is reasonable up to and including the first point, the second point is a little less convincing. The idea that it’s okay to be bad at security simply because someone else was bad at security is unfortunate at best.
The first point is - although not completely wrong - definitely debatable, since these connections can be made simultaneously and aren’t blocking each other, contrary to what they seem to be insinuating here.
I think there’s a distinction to be made between “bad at security” and “not actually a security boundary”. If you retroactively redefine public info to be a secret, it shouldn’t be surprising that everyone is “bad” at protecting it, or that someone might push back and say it’s not a bug.
And how and why is a username considered public information, they asked? https://lobste.rs/u
They mentioned the MaxStartups parameter, which does seem like it will cause connections to block.
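For context, MaxStartups is the sshd_config knob that caps concurrent unauthenticated connections; past a soft limit, sshd starts dropping new connection attempts probabilistically rather than queueing them. The values below are OpenSSH’s documented default:

```
# sshd_config: up to 10 unauthenticated connections are accepted;
# past that, new attempts are dropped with 30% probability, rising
# linearly to 100% at 100 pending connections.
MaxStartups 10:30:100
```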