1. 2

First-class packages are the most underrated feature of lisp. AFAIK only Perl offers them fully, but it uses very bad syntax (globs). Most macros merely suppress evaluation, and this can be done using first-class functions. Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

1. 7

Most macros merely suppress evaluation, and this can be done using first-class functions.

I strongly disagree with this. Macros are not there to “merely suppress evaluation.” As you point out, they’re not needed for that, and in my opinion they’re often not even the best tool for that job.
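For non-lispers following along, the "suppress evaluation with first-class functions" part can be sketched in a few lines of Python (my sketch, not from the thread): wrapping each branch in a zero-argument lambda (a thunk) delays it until explicitly called.

```python
def my_if(condition, then_thunk, else_thunk):
    """A user-defined conditional: only the chosen thunk is ever evaluated."""
    return then_thunk() if condition else else_thunk()

log = []

result = my_if(
    True,
    lambda: log.append("then") or "taken",      # evaluated
    lambda: log.append("else") or "not taken",  # never evaluated
)

assert result == "taken" and log == ["then"]
```

Which is exactly the point here: a function like `my_if` can delay code, but it cannot introduce new binding forms, inspect the code it was given, or rewrite it before compilation.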

“Good” macros extend the language in unusual or innovative ways that would be very clunky, ugly, and/or impractical to do in other ways. It’s in the same vein as asking if people really need all these control flow statements when there’s ‘if’ and ‘goto’.

To give some idea, cl-autowrap uses macros to generate Common Lisp bindings to C and C++ libraries using (cl-autowrap:c-include "some-header.h"). Other libraries, like “iterate”, add entirely new constructs or idioms to the language that behave as if they’re built in.

Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

Lex/Yacc and CL macros do very different things. Lex/Yacc generate parsers for new languages that parse their input at runtime. CL macros emit CL code at compile time which in turn gets compiled into your program.

In some sense your question is getting DSLs backwards. The idea isn’t to create a new language for a special domain, but to extend the existing language with new capabilities and operations for the new domain.

1. 1

Here are examples of using lex/yacc to extend a language:

1. Ragel, which compiles state machines to multiple languages
2. Swig, which does something like autowrap
3. The Babel compiler, which uses parsing to add features on top of older JavaScript, like async/await

I am guessing all these use lex/yacc internally. Rails uses scaffolding and provides helpers to generate JS code at compile time, something like Parenscript.

The basic property of a macro is to generate code at compile time. Granted, most of these are not built into the compiler, but nothing is stopping you from adding a new pre-compile step with the help of a makefile.

Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time doing this by parsing alone, even though lisp is an easy language to parse.

1. 5

Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro?

CL-USER> (describe #'plus-macro)
#<CLOSURE (:MACRO PLUS-MACRO) {1002F8AB1B}>
[compiled closure]

Lambda-list: (&REST SB-IMPL::ARGS)
Derived type: (FUNCTION (&REST T) NIL)
Documentation:
T
Source file: SYS:SRC;CODE;SIMPLE-FUN.LISP
; No value
CL-USER> (describe #'plus-fn)
#<FUNCTION PLUS-FN>
[compiled function]

Lambda-list: (A B)
Derived type: (FUNCTION (T T) (VALUES NUMBER &OPTIONAL))
Source form:
(LAMBDA (A B) (BLOCK PLUS-FN (+ A B)))
; No value


You underestimate the power of the dark side Common Lisp ;)

In other words … macros aren’t an isolated textual tool like they are in other, less powerful, languages. They’re a part of the entire dynamic, reflective, homoiconic programming environment.

1. 2

I know that, but without using the lisp runtime, by parsing alone, can you do the same?

1. 3

I’m not sure where you’re going with this.

In the Lisp case, a tool (like an editor) only has to ask the Lisp environment about a bit of syntax to check if it’s a macro, function, variable, or whatever.

In the non-Lisp case, there’s no single source of information, and every tool has to know about every new language extension and parser that anybody may write.

1. 1

I believe their claim is that code walkers can provide programmers with more power than Lisp macros. That’s some claim, but the possibility of it being true definitely makes reading the article they linked ( https://mkgnu.net/code-walkers ) worthwhile.

2. 2

Yes. You’d start by building a Lisp interpreter.

1. 1

… a common lisp interpreter, which you are better off writing in lex/yacc. Even if you do that, each macro defines new ways of parsing code, so you can’t write a generic highlighter for loop-like macros. If you are going to write a language interpreter and parser anyway, why not go the most generic route of lex/yacc and support any conceivable syntax?

1. 5

I really don’t understand your point, here.

Writing a CL implementation in lex/yacc … I can’t begin to imagine that. I’m not an expert in either, but it seems like it’d be a lot of very hard work for nothing, even if it were possible, and I’m not sure it would be.

So, assuming it were possible … why would you? Why not just use the existing tooling as it is intended to be used???

1. 2

That’s too small of a problem to demonstrate why code walking is difficult. How about these, then:

1. Count number of s-expression used in the program
2. Shows the number of macros used
3. Show number of lines generated by each macro and measure line savings
4. Write a linter which enforces stylistic choices
5. Suggest places where macros could be used for minimising code
6. Measure code complexity, coupling analysis
7. Write a lisp minifier, obfuscator
8. Find all places where garbage collection can be improved and memory leaks can be detected
9. Insert automatic profiling code for every s-expression and list out where the bottlenecks are
10. Write code refactoring tools.
11. List most used functions in runtime to suggest which of them can be optimised for speed

Ironically, the above is much easier to do with assembly.

My point is simply this: lisp is only easy to parse superficially. Writing the above will still be challenging. Writing lexers and parsers is better at code generation, and hence at macros in the most general sense. If you are looking for power, then code walking beats macros, and that’s also doable in C.

1. 1

Okay, I understand your argument now.

I’ll read that article soon.

1. 6

“That’s two open problems: code walkers are hard to program and compilers to reprogram.”

The linked article also ends with something like that. It supports your argument, given that macros are both already there in some languages and much easier to use. That there are lots of working macros out there in many languages supports it empirically.

There’s also nothing stopping experts from adding code walkers on top of that. Use the easy route when it works. Take the hard route when it works better.

1. 6

Welcome back Nick, haven’t seen you here in a while.

1. 4

Thank you! I missed you all!

I’m still busy (see profile). That will probably increase. I figure I can squeeze a little time in here and there to show some love for folks and share some stuff on my favorite tech site. :)

2. 1

While intriguing, it would be nice if the article spelled out the changes made with code walkers. Hearing that a program ballooned 9x isn’t impressive by itself. Without knowing about the nature of the change it just sounds bloated. (Which isn’t to say that it wasn’t valid, it’s just hard to judge without more information.)

Regarding your original point, unless I’m misunderstanding the scope of code walkers, I don’t see why it needs to be an either/or situation. Macros are a language-supported feature that do localized code changes. It seems like code walkers are not language-supported in most cases (all?), but they can do stateful transformations globally across the program. It sounds like they both have their use cases. Like lispers talk about using macros only if functions won’t cut it, maybe you only use code walkers if macros won’t cut it.

BTW, it looks like there is some prior art on code walkers in Common Lisp!

2. 1

That kind of is the point. Lisp demonstrates that there is no real boundary between the language as given and the “language” its users create by extending it with new functions and macros. That being said, good lisp usually follows conventions so that you may recognize whether something is a macro (eg. with-*) or not.

2. 1

Well, if your question is “Would you prefer a consistent, built-in way of extending the language, or a hacked together kludge of pre-processors?” then I’ll take the macros… ;-)

Code walking is difficult in lisp as well. How would I know if an expression is a function or a macro? If I wanted to write a code highlighter in vim that highlights all macros differently, I would have a difficult time doing this with pure code walking alone, even though lisp is an easy language to parse.

My first question would be whether or not it makes sense to highlight macros differently. The whole idea is that they extend the language transparently, and a lot of “built-in” constructs defined in the CL standard are macros.

Assuming you really wanted to do this, though, I’d suggest looking at Emacs’ Slime mode. It basically lets the CL compiler do the work. It may not be ideal, but it works, and it’s better than what you’d get using Ragel, Swig, or Babel.

FWIW, Emacs, as far as I know (and as I have it configured), only highlights symbols defined by the CL standard and keywords (i.e. :foo, :bar), and adjusts indentation based on cues like “&body” arguments.

1. 1

Btw there is already a syntax highlighter that uses a code walker and treats macros differently. The code walker may not be easy to write, but it can hardly be said that it is hard to use.

https://github.com/scymtym/sbcl/blob/wip-walk-forms-new-marco-stuff/examples/code-walking-example-syntax-highlighting.lisp

2. 1

Here are examples of using lex/yacc to extend a language

Those are making new languages: they require new tooling and don’t come with the existing tooling for the language. If someone writes Babel code, it’s not JavaScript code anymore - it can’t be parsed by a normal JavaScript compiler.

Meanwhile, Common Lisp macros extend the language itself - if I write a Common Lisp macro, anyone with a vanilla, unmodified Common Lisp implementation can use them, without any additional tooling.

Granted most of these are not built into the compiler but nothing is stopping you adding a new pre-compile step with the help of a make file.

…at which point you have to modify the build processes of everybody that wants to use this new language, as well as breaking a lot of tooling - for instance, if you don’t modify your debugger, then it no longer shows an accurate translation from your source file to the code under debugging.

If I wanted to write a code highlighter in vim that highlights all macros differently I would have a difficult time doing this by parsing alone even though lisp is an easy language to parse.

Similarly, if you wanted to write a code highlighter that highlights defined functions differently without querying a compiler/implementation, you couldn’t do it for any language that allows a function to be bound at runtime, like Python. This isn’t a special property of Common Lisp, it’s just a natural implication of the fact that CL allows you to create macros at runtime.
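The Python case described above can be made concrete with a short, hypothetical sketch: no static parse of this file contains a `def greet`, yet after it runs, `greet` is an ordinary function.

```python
import types

# Bind a function at runtime; a static highlighter scanning this file
# for "def greet" would never find it.
namespace = {}
exec("def greet(name): return 'hello ' + name", namespace)

assert isinstance(namespace["greet"], types.FunctionType)
assert namespace["greet"]("world") == "hello world"
```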

Meanwhile, you could capture 99.9%+ of macro definitions in CL (and function definitions in Python) using static analysis - parse code files into s-expression trees, look for defmacro followed by a name, add that to the list of macro names (modulo packages/namespacing).
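That static analysis might be sketched like this (hypothetical code: it tokenizes s-expressions naively and deliberately ignores packages, reader macros, and runtime-defined macros — the fraction the paragraph above concedes):

```python
import re

def find_macro_names(source):
    """Record the symbol that immediately follows each 'defmacro' token."""
    tokens = re.findall(r"\(|\)|[^\s()]+", source)
    return [nxt for tok, nxt in zip(tokens, tokens[1:])
            if tok.lower() == "defmacro"]

code = """
(defmacro with-logging (&body body)
  `(progn (log-start) ,@body (log-end)))
(defun plus-fn (a b) (+ a b))
"""

assert find_macro_names(code) == ["with-logging"]
```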

tl;dr “I can’t determine 100% of source code properties using static analysis without querying a compiler/implementation” is not an interesting property, as all commonly used programming languages have it to some extent.

1. 1

If you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

I don’t know why you’d think they are comparable. The amount of effort to write a macro is way less than the amount of effort required to write a lexer + parser. The fact that macros are written in lisp itself also reduces the effort needed. But most importantly, one is an in-process mechanism for code generation, and the other involves writing the generated code to a file. The first mechanism makes it easy to iterate on and modify the generated code. Given that most of the time you are maintaining, and hence modifying, code, I’d say that is a pretty big difference.

The Babel compiler uses parsing to add features on top of older JavaScript, like async/await.

Babel is an example of how awful things can be when macros happen out of process. The core of Babel is a macro system + pluggable reader.

I am guessing all these use lex/yacc internally.

Babel certainly doesn’t. When it started it used estools, which used acorn, IIRC. I think nowadays it uses its own parser.

Rails uses scaffolding and provides helpers to generate js code compile time. Something like parenscript.

I have no idea why you think scaffolding is like parenscript. The common use case for parenscript is to do the expansion on the fly, not to generate the initial boilerplate.

Code walking is difficult in lisp as well.

And impossible to write in portable code, which is why most (all?) implementations come with a code-walker you can use.

1. 1

If syntax is irrelevant, why even bother with Lisp? If I just stick to using arrays in the native language, I can also define functions like this and extend the array language to support new control flow structures:

["begin",
  ["define", "fib",
    ["lambda", ["n"],
      ["cond", [["eq", "n", 0], 0],
               [["eq", "n", 1], 1],
               ["T", ["+", ["fib", ["-", "n", 1]], ["fib", ["-", "n", 2]]]]]]],
  ["fib", 6]]
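Taking that array language literally, an evaluator for it is indeed short. The sketch below is hypothetical and supports only the forms the example uses (`begin`, `define`, `lambda`, `cond`, and calls):

```python
def evaluate(expr, env):
    """Evaluate one form of the array language in the given environment."""
    if isinstance(expr, (int, float)):
        return expr                        # numbers are self-evaluating
    if isinstance(expr, str):
        return env[expr]                   # strings are variable references
    op, *args = expr
    if op == "begin":                      # evaluate forms in order
        result = None
        for form in args:
            result = evaluate(form, env)
        return result
    if op == "define":                     # bind a name in the environment
        env[args[0]] = evaluate(args[1], env)
        return None
    if op == "lambda":                     # build a closure over env
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    if op == "cond":                       # first clause whose test is true
        for test, branch in args:
            if evaluate(test, env):
                return evaluate(branch, env)
        return None
    fn = evaluate(op, env)                 # ordinary function call
    return fn(*(evaluate(a, env) for a in args))

global_env = {
    "T": True,
    "eq": lambda a, b: a == b,
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
}

program = ["begin",
  ["define", "fib",
    ["lambda", ["n"],
      ["cond", [["eq", "n", 0], 0],
               [["eq", "n", 1], 1],
               ["T", ["+", ["fib", ["-", "n", 1]], ["fib", ["-", "n", 2]]]]]]],
  ["fib", 6]]

assert evaluate(program, global_env) == 8
```

The bracket syntax is incidental — this is an s-expression interpreter with [] for (). What the replies below are about is what happens before evaluation: macros transform these trees at compile time, which an interpreter alone does not provide.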

3. 5

Yes, you absolutely want macros even if you have Lex/Yacc and interpreters.

Lex/Yacc (and parsers more generally), interpreters (and “full language compilers”), and macros all have different jobs at different stages of a language pipeline. They are complementary, orthogonal systems.

Lex/Yacc are for building parsers (and aren’t necessarily the best tools for that job), which turn the textual representation of a program into a data structure (a tree). Every Lisp has a parser, for historical reasons usually called a “reader”. Lisps always have s-expression parsers, of course, but often they are extensible so you can make new concrete textual notations and specify how they are turned into a tree. This is the kind of job Lex and Yacc do, though extended s-expression parsers and lex/yacc parsers generally have some different capabilities in terms of what notations they can parse, how easy it is to build the parser, and how easy it is to extend or compose any parsers you create.

Macros are tree transformers. Well, M4 and C-preprocessor are textual macro systems that transform text before parsing, but that’s not what we’re talking about. Lisp macros transform the tree data structure you get from parsing. While parsing is all about syntax, macros can be a lot more about semantics. This depends a lot on the macro system – some macro systems don’t allow much more introspection on the tree than just what symbols there are and the structure, while other macro systems (like Racket’s) provide rich introspection capabilities to compare binding information, allow macros to communicate by annotating parts of the tree with extra properties, or by accessing other compile-time data from bindings (see Racket’s syntax-local-value for more details), etc. Racket has the most advanced macro system, and it can be used for things like building custom DSL type systems, creating extensible pattern matching systems, etc. But importantly, macros can be written one at a time as composable micro-compilers. Rather than writing up-front an entire compiler or interpreter for a DSL, with all its complexity, you can get most of it “for free” and just write a minor extension to your general-purpose language to help with some small (maybe domain-specific) pain point. And let me reiterate – macros compose! You can write several extensions that are each oblivious to each other, but use them together! You can’t do that with stand-alone languages built with lex/yacc and stand-alone interpreters. Let me emphatically express my disagreement that “most macros merely suppress evaluation”!
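The "composable tree transformers" idea can be sketched in Python over nested lists (hypothetical code; note that `expand_swap` is deliberately unhygienic — its `tmp` could capture a user variable — a problem hygienic systems like Racket's solve):

```python
def expand_unless(form):
    # (unless test body) -> (if test nil body)
    _, test, body = form
    return ["if", test, "nil", body]

def expand_swap(form):
    # (swap a b) -> (let ((tmp a)) (setf a b) (setf b tmp)) -- unhygienic!
    _, a, b = form
    return ["let", [["tmp", a]], ["setf", a, b], ["setf", b, "tmp"]]

# Each "macro" is just a tree-to-tree function; the two compose without
# knowing about each other.
MACROS = {"unless": expand_unless, "swap": expand_swap}

def macroexpand_all(form):
    """Recursively expand every macro call found in the tree."""
    if not isinstance(form, list) or not form:
        return form
    if isinstance(form[0], str) and form[0] in MACROS:
        return macroexpand_all(MACROS[form[0]](form))
    return [macroexpand_all(sub) for sub in form]

tree = ["unless", ["ready-p"], ["swap", "x", "y"]]
expanded = macroexpand_all(tree)

assert expanded == ["if", ["ready-p"], "nil",
                    ["let", [["tmp", "x"]],
                     ["setf", "x", "y"], ["setf", "y", "tmp"]]]
```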

Interpreters or “full” compilers then work after any macro expansion has happened, and again do a different, complementary job. (And this post is already so verbose that I’ll skip further discussion of it…)

If you want to build languages with Lex/Yacc and interpreters, you clearly care about how languages allow programmers to express their programs. Macros provide a lot of power for custom languages and language extensions to be written more easily, more completely, and more compositionally than they otherwise can be. Macros are an awesome tool that programmers absolutely need! Without using macros, you have to put all kinds of complex stuff into your language compiler/interpreter or do without it. Eg. how will your language deal with name binding and scoping, how will your language order evaluation, how do errors and error handling work, what data structures does it have, how can it manipulate them, etc. Every new little language interpreter needs to make these decisions! Often a DSL author cares about only some of those decisions, and ends up making poor decisions or half-baked features for the other parts. Additionally, stand-alone interpreters don’t compose, and don’t allow their languages to compose. Eg. if you want to use 2+ independent languages together, you need to shuttle bits of code around as strings, convert data between different formats at every boundary, maybe serialize it between OS processes, etc. With DSL compilers that compile down to another language for the purpose of embedding (eg. Lex/Yacc are DSLs that output C code to integrate into a larger program), you don’t have the data shuffling problems. But you still have issues if you want to eg. write a function that mixes multiple such DSLs. In other words, stand-alone compilers that inject code into your main language are only suitable for problems that are sufficiently large and separated from other problems you might build a DSL for.

With macro-based embedded languages, you can sidestep all of those problems. Macro-based embedded languages can simply use the features of the host language, maybe substituting one feature that it wants to change. You mention delaying code – i.e. changing the host language’s evaluation order. This is only one aspect of the host language out of many you might change with macros. Macro extensions can be easily embedded within each other and used together. The only data wrangling at boundaries you need to do is if your embedded language uses different, custom data structures. But this is just the difference between two libraries in the same language, not like the low-level serialization data wrangling you need to do if you have separate interpreters. And macros can tackle problems as large as “I need a DSL for parsing”, like Yacc, down to “I want a convenience form so I don’t have to write this repeating pattern inside my parser”. And you can use one macro inside another with no problem. (That last sentence has a bit of ambiguity – I mean that users can nest arbitrary macro calls in their program. But also you can use one macro in the implementation of another, so… multiple interpretations of that sentence are correct.)

To end, I want to comment that macro systems vary a lot in expressive power and complexity – different macro systems provide different capabilities. The OP is discussing Common Lisp, which inhabits a very different place in the “expressive power vs complexity” space than the macro system I use most (Racket’s). Not to disparage the Common Lisp macro system (they both have their place!), but I would encourage anyone not to come to conclusions about what macros can be useful for or whether they are worthwhile without serious investigation of Racket’s macro system. It is more complicated, to be certain, but it provides so much expressive power.

1. 4

I mean, strictly, no - but that’s like saying “if you can write machine code, do you really need Java?”

(Edited to add: see also Greenspun’s tenth rule … if you were to build a macro system out of such tooling, I’d bet at least a few pints of beer that you’d basically wind up back at Common Lisp again).

1. 2

I’m not claiming to speak for all lispers, but the question

Here is my question for lispers: if you can use lex/yacc and can write a full-fledged interpreter, do you really need macros?

might be misleading. Obviously you don’t need macros, and everything could be done some other way, but macros are easy to use while also being powerful, and they can be dynamically created or restricted to a lexical scope. I’ve never bothered to learn lex/yacc, so I might be missing something.

1. 2

First-class packages are the most underrated feature of lisp. AFAIK only perl offers it fully

OCaml has first-class modules: https://ocaml.org/releases/4.11/htmlman/firstclassmodules.html

I’m a lot more familiar with them than I am with CL packages though, so they may not be 100% equivalent.

1. 6

I like lisp but macros should be a last resort thing. Is it really needed in those cases, I wonder.

1. 18

I disagree. Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic, not my macros - it’s much more difficult to reason about things whose interesting behavior is at run-time than at compile-time.

Moreover, most macros are limited to simple tree structure processing, which is far more constrained than all of the things you can get up to in your application code.

Can you make difficult-to-understand code with macros? Absolutely. However, the vast majority of Common Lisp code that I see is written by programmers disciplined enough to not do that - when you write good macros, they make code more readable.

1. 3

“Macros, if anything, are easier to reason about than functions, because in the vast majority of cases their expansions are deterministic, and in every situation they can be expanded and inspected at compile-time, before any code has run. The vast majority of bugs that I’ve made have been in normal application logic”

What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time. They have the benefits you describe. Such code is common in real-time and safety/security-critical coding. An extra benefit is that static analysis, automated testing, and so on can easily flush bugs out in it. Tools that help optimize performance might also benefit from such code just due to easier analysis.

From there, there’s macros. The drawback of macros is they might not be understood instantly the way a programmer will understand common language constructs. If done right (esp. names/docs), then this won’t be a problem. The next problem, which the author already notes, is that tooling breaks down on them. Although I didn’t prove it out, I hypothesized this process to make them reliable:

1. Write the code that the macros would output first on a few variations of inputs. Simple, deterministic functions operating on data. Make sure it has pre/post conditions and invariants. Make sure these pass the above QA methods.

2. Write the same code operating on code (or trees or whatever) in an environment that allows similar compile-time QA. Port pre/post conditions and invariants to code form. Make sure that passes QA.

3. Make final macro that’s a mapping 1-to-1 of that to target language. This step can be eliminated where target language already has excellent QA tooling and macro support. Idk if any do, though.

4. Optionally, if the environment supports it, use an optimizing compiler on the macros integrated with the development environment so the code transformations run super-fast during development iterations. This was speculation on my part. I don’t know if any environment implements something like this. This could also be a preprocessing step.

The resulting macros using 1-3 should be more reliable than most functions people would’ve used in their place.
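Steps 1 and 2 might look like this in practice — a hypothetical sketch (names invented for illustration) where the would-be macro body is first a plain, deterministic function on trees with pre/post conditions, tested against hand-written expected expansions:

```python
def generate_accessor(struct_name, field):
    """Step 2: the would-be macro body as a pure function from names to a tree."""
    assert struct_name and field                  # precondition: non-empty names
    form = ["defun", f"{struct_name}-{field}", ["obj"],
            ["getf", "obj", f":{field}"]]
    assert form[0] == "defun" and len(form) == 4  # postcondition: a full defun form
    return form

# Step 1: the expected expansion, written out by hand first.
expected = ["defun", "point-x", ["obj"], ["getf", "obj", ":x"]]

assert generate_accessor("point", "x") == expected
```

Step 3 would then be a thin defmacro wrapper that simply calls this function, keeping the tested logic outside the macro itself.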

1. 2

What you’ve just argued for are deterministic, simple functions whose behavior is understandable at compile time.

In a very local sense, I agree with you - a simple function is easier to understand than a complex function.

However, that’s not a very interesting property.

A more interesting question/property is “Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?”

My experience has been that, when I create lots of small, simple functions, the overall accidental complexity of the system increases. Ignoring that accidental complexity for the time being, all problems have some essential complexity to them. If you make smaller, simpler functions, you end up having to make more of them to implement your design in all of its essential complexity - which, in my experience, ends up adding far more accidental complexity due to indirection and abstraction than a smaller number of larger functions.

That aside, I think that your process for making macros more reliable is interesting - is it meant to make them more reliable for humans or to integrate tools with them better?

1. 1

“A more interesting question/property is “Is a large, complex system made out of small, simpler functions easier to manipulate than one made from larger, more complex functions?”

I think the question might be what is simple and what is complex? Another is simple for humans or machines? I liked the kinds of abstractions and generative techniques that let a human understand something that produced what was easy for a machine to work with. In general, I think the two often contradict.

That leads to your next point where increasing the number of simple functions actually made it more complex for you. That happened in formally-verified systems, too, where simplifications for proof assistants made it ugly for humans. I guess it should be as simple as it can be without causing extra problems. I have no precise measurement of that. Plus, more R&D invested in generative techniques that connect high-level, human-readable representations to machine-analyzable ones. Quick examples to make it clear might be Python vs C’s looping, parallel for in non-parallel language, or per-module choices for memory management (eg GC’s).

“is it meant to make them more reliable for humans or to integrate tools with them better?”

Just reliable in general: they do precisely what they’re specified to do. From there, humans or tools could use them. Humans will use them as they did before except with precise, behavioral information on them at the interface. Looking at contracts, tools already exist to generate tests or proof conditions from them.

Another benefit might be integration with machine learning to spot refactoring opportunities, esp if it’s simple swaps. For example, there’s a library function that does something, a macro that generates an optimized-for-machine version (eg parallelism), and the tool swaps them out based on both function signature and info in specification.

2. 7

Want to trade longer runtimes for longer compile times? There’s a tool for that. Need to execute a bit of code in the caller’s context, without forcing boilerplate on the developer? There’s a tool for that. Macros are a tool, not a last resort. I’m sure Grammarly’s code is no more of a monstrosity than you’d see at the equivalent Java shop, if the equivalent Java shop existed.

1. 9

Java shop would be using a bunch of annotations, dependency injection and similar compile time tricks with codegen. So still macros, just much less convenient to write :)

1. 1

the equivalent Java shop

I guess that would be Languagetool. How much of a monstrosity it is is left as an exercise to the reader, mostly because it’s free software and anybody can read it.

2. 7

This reminds me of when Paul Graham was bragging about how ViaWeb was like 25% macros and other lispers were kind of just looking on in horror trying to imagine what a headache it must be to debug.

1. 6

The source code of the Viaweb editor was probably about 20-25% macros. Macros are harder to write than ordinary Lisp functions, and it’s considered to be bad style to use them when they’re not necessary. So every macro in that code is there because it has to be. What that means is that at least 20-25% of the code in this program is doing things that you can’t easily do in any other language.

It’s such a bizarre argument.

1. 3

I find it persuasive. If a choice is made by someone who knows better, that choice probably has a good justification.

1. 10

It’s a terrible argument; it jumps from “it’s considered to be bad style to use [macros] when they’re not necessary” straight to “therefore they must have been necessary” without even considering “therefore the code base exhibited bad style” which is far more likely. Typical pg arrogance and misdirection.

1. 3

I don’t have any insight into whether the macros are necessary; it’s the last statement I take issue with. For example: Haskell has a lot of complicated machinery for working with state and such that doesn’t exist in other languages, but that doesn’t mean those other languages can’t work with state. They just do it differently.

Or to pick a more concrete example, the existence of the loop macro and the fact that it’s implemented as a macro doesn’t mean other languages can’t have powerful iteration capabilities.

1. 1

One hopes.

1. 7

I’m going to throw Joe Armstrong’s thesis onto the pile: Making Distributed Systems Reliable in the Presence of Software Errors

1. 2

I’ll pair with that this classic on fault-tolerant systems from makers of NonStop:

Why Computers Stop and What Can Be Done About It (1985) (pdf)

1. 4

From a design perspective, I have a few in this list that have capabilities today’s systems don’t. Examples: clustering reliability of OpenVMS systems; productivity and whole-system debugging of LISP machines; predictability and hang-resistance of RTOS’s like QNX/INTEGRITY; self-healing of MINIX 3 (esp. drivers); loose coupling with easy integration of components in systems like Genode.

I’ll add OpenVMS making all languages use same calling conventions and stuff. On modern systems, you often get the best performance or integration using C’s. Then, a language doing it differently has a mismatch that can affect performance or correctness. Knocking that out encourages using best tool for the job from the platform up. .NET CLR took a page from their book at VM level. Then, its VM sits on native languages illustrating the point.

eCOS let you configure the OS to leave off every unnecessary component at kernel level. Mainly for size optimization. That would help with reliability, security, and (in clouds) transfer costs.

Re hardware

I miss the reset switch on old machines that sat next to the power switch. Sometimes one would flush out problems. Other times I needed both. I’ve come to believe it was peripheral devices that needed a hard reboot, whose invisible failures propagated to higher-level, visible functions. I like power buttons that work. I also don’t want to hold it for 10 seconds.

Audio knobs that use analog or just reliable circuits. Too many times something gets really loud with me having to turn volume down. Soft buttons were often unreliable at that exact moment (high system load?). I could grab and twist that knob in a split second with it working every time.

Repairs/customization. Let’s say the system components are pluggable with standardized, commoditized interfaces. I can replace faulty components with new or used ones cheaper than a whole system. Even neater, modders can extend systems to make them do what they previously could not.

Instant or rapid startup. This was a special feature rather than common. The system would turn on to something usable instantly, or so fast you didn’t lose what was in your head while waiting. Sleep mode got rid of this problem for most of us. I’d still find it useful for debugging, or just for improving reliability/durability with regular restarts: you turn machines off at a good time to reduce the odds they turn themselves off at a bad time. QNX still advertises this as a differentiator in applications like automotive and entertainment systems.

RAM-based HDs. You see those memory hierarchies showing how slow disks are. Not if they use RAM! Many applications benefit. They work best when they have onboard flash to back up volatile state, plus time for an initial load.

NUMA architecture. Saved the best for last. Companies like SGI, in systems like the SGI UV, had the ability to chain motherboards together over a memory bus that transferred gigabytes a second at microsecond latencies while maintaining consistency. The first benefit was hundreds of CPUs, up to terabytes of RAM, and many graphics cards. A benefit people forget is that NUMA turns parallel programming from distributed, middleware-heavy programming into something like multithreaded programming where you just watch out for data locality (i.e. ops and the data they use stay on the same node). A vast, vast improvement in usability for programmers wanting to scale.

So, there are some of my favorite things. Hope that helps some people.

1. 12

Returning long enough to congratulate you on getting to the finish line. It’s an important topic that you’ve done a massive amount of work on. The topics covered may broaden people’s minds a lot more than the more common, narrow theories of software development. Having many references, a free PDF, and all your supporting code and data available also sets a great example that all scientific works should follow.

I hope your work gets the impact it deserves. I also hope you’re doing well personally. :)

1. 4

has anyone gotten a blue lobster yet?

1. 2

It seems certain given our volume of traffic, but I don’t check the logs for it and they get logrotate’d out after a week or two. Might be fun to add a cron job to the Lobsters repo to grep for them.

1. 4

you should make it so the person who gets the blue lobster gets a blue lobster hat

1. 2

While I love the idea of a blue lobster hat, or even a listing of who saw the blue lobster and when, it could lead to some people trying to abuse the system to get the achievement, and by its nature that process would generate arbitrary load on Lobsters.

2. 1

rain1’s profile pic was a blue lobster.

1. 3

Does anyone know what the spikes in user sign ups correspond to?

1. 9
1. 2

TIL about the invitation queue. Someone should build a site map for the site that details all these pages :-)

1. 8

It’s been disabled for years, though the code exists for sister sites.

2. 2

The other two are mentions on high-volume sites and people bringing in their buddies. An example of a high-volume site that probably feeds over here is Hacker News, with 20 million hits a month. We had a bit of cross-posting and shared users, which increases exposure for both sites.

I’m sure some also comes from sites posted here whose members end up joining. Perhaps a Lobster comments on their site saying they saw it on Lobsters, and traffic follows. I know it happens but don’t have data on what it generates.

1. 1

Probably when the site is mentioned/promoted somewhere else.

1. 1

I battle my bipolar disorder (very successfully) with physical exercise. I cannot do that anymore.

It’s going exactly how you think it would be going. 🙃

1. 1

Did you mean you can’t go to the gym, or did something happen to you physically that prevents you from exercising?

If the first, I found some benefit in using resistance bands. I’ve been using them on a tree outside whose angle lets me do my arms at least. You might be able to wrap them around something heavy in your house if nothing outside. Just be careful doing that.

1. 2

I’ve actually been able to do some things at home (I own a set of resistance bands and a medicine ball for at-home exercising) but my primary source of exercising pre-pandemic was ice hockey which got me ~200 BPM exercises that didn’t feel like “work”. I get by with the at-home stuff, but the hockey nearly eliminated my symptoms.

1. 2

I’ve had some luck with VR games for “workout without it feeling like work”. Hard to keep enough space clear though.

1. 2

Ah that makes sense. Hopefully this stuff clears up soon so you can get back to hockey.

1. 1

Physically, I put on the quarantine 15 because a vacation segued directly into self-isolation for 14 days after a likely exposure while attending a 5,000 person event from which three people were hospitalized with COVID-19 during that vacation. I’d suspended my diet for that vacation knowing that I’d put on a few pounds that I could work off in a few weeks. I did not expect to need to plan for a nigh post-apocalyptic diet as leaving the house became risky. Carbs were eaten. Many. I worked off about 5 of those pounds in the months following but I’m back up following a switch back to the keto diet I’ve ~maintained for 6+ years. I’m looking forward to fitting back into clothes I could wear when I was 15 lbs lighter for all of 2019.

Mentally, I’m doing OK. Some living situation changes are normalized now as a family member moved in with us because of her live-in job going away at the start of the pandemic. I’m happy to have them around but it’s showing that our house barely fits three adults. My work is slow but steady. One of my non-profits has really benefitted from the pandemic while another one is essentially suspended as the tech conference industry has gone “free and online”. My side business has lost about 40% of its revenue with no upward force in sight. We have savings that will keep our heads at the water line but if conditions continue into next year, we’ll have to drastically alter our business plan to keep the business alive. We’re already looking at other ways to generate revenue as being a primarily mendicant operation is fraught with revenue unreliability. Altogether, I’m probably about even but it’s not been without ups and downs, wins and losses, and some long conversations about the future.

I wish I had more time to focus on me, but servant leadership is the path I’ve chosen and it is not one easily paused or exited.

1. 2

Props for looking after others with the self-isolation and going for servant leadership. Hope and pray you do well with all this.

1. 4

Ok, so I was an ex-Christian who ditched the Bible due to science, morals, etc. That’s despite apparent miracles happening with my family at times. I turned into a good guy who would sacrifice enormously for others, but was also plenty raunchy, argumentative, etc. Most people liked me. Burnout from a job was so stressful others were breaking down crying, falling out on cars, etc. I could take it despite PTSD by using a combo of breathing, positive attitude, and tough experience.

Was deep in burnout for a long time with days blending together. Prolly had liver, heart, and cancer problems on the way. Plus, anyone real never really leaves. Called out to the unknown God that if they exist and want me back, to give me a little time to pull myself up and I’d bring others up with me.

High-talent people popped up outta nowhere, bound by all kinds of coincidences. Mostly went well. One was damaged and needed help, which I gave. Stayed in tons of prayer. The situation kept challenging me to change super-fast to help them. I got blindsided by being disowned, then a fake stalking claim (our 1-on-1’s were a setup), and a fake sexual harassment claim. About to go to court, the Lord said hold off: “I gotcha.” I did, hesitantly. Within days, she ended it with a deal splitting us up with nothing on my record. My mgmt went “Wth?!”

I wondered what I was being prepared for. The next shift was coronavirus. Skeleton crew with hours non-stop of desperate, angry people. In Christ, I was the only person at peace (stressed though!). I took the worst calls, calming them down. We made it. I’ve served as many as I can since.

The next tests were simpler. I handled a highly-privileged bully with patience and professionalism vs going ham. They escalated. Prayed on that. Corporate moved them in a way that nobody has ever seen happen.

A relative had let someone move in free so he’d have money for bail. That person turned into a total bum for many months. They were too loving to kick him out. I prayed hard for them while planning a response. Like the “Then Satan entered him” verses, the guy suddenly went nuts, tried to get their landlord to evict them with wild stories, and that got him kicked out. They were confused until I said it matched my specific prayer for enemies.

Most were good. One that tested me was a guy who got destroyed before my eyes by a claim like the first psycho’s. I forgave and blessed that enemy, and they got a house and a new job out of state. Hmm… Still praying they transform, then…

So, lots of stuff like this. I started with prayer. Professed faith again later. Back into being righteous. Using tons of energy to have servant attitude toward everyone, love even the haters, kick mental immorality common in summer, get in Scripture, give to who needs, and pray without ceasing for many I encounter.

Most PTSD symptoms and insomnia are minimal at the moment. I’m at this stuff from 5a-6:30a to midnight many nights. Tired but in a good way. The HR person that dealt with the people above is now my direct superior with them still here. Next test is on the way. Good that in my corner is My Heavenly Father and Lord Jesus Christ with a Holy Spirit sustaining me in 13hr sprint shifts. I’ll be blessed either way. I’ll also try to pray for any here that request it where I have time. :)

1. 23

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript. There can be great value in interactive demos and things like MathJax, but there is no excuse to ever use JavaScript for buttons, menus, text and images which should be done in HTML/CSS as mentioned in the blog post. Additionally, the website should degrade gracefully if JavaScript is missing, e.g. interactive examples revert to images or stop rendering, but the text and images remain in place.

I wonder how we can combat this “JavaScript for everything” trend. Maybe there should be a website that names and shames offending frameworks and websites (like https://plaintextoffenders.com/ but for bloat), but by now there would probably be more websites that belong on this list than websites that don’t. The web has basically become unbrowsable without JavaScript. Google CAPTCHAs make things even worse. Frankly, I doubt that the situation is even salvageable at this point.

I feel like we’re witnessing the Adobe Flash story all over again, but this time with HTML5/JS/Browser bloat and with the blessing of the major players like Apple. It’ll be interesting to see how the web evolves in the coming decades.

1. 5

Rendering math on the server/static site build host with KaTeX is much easier than one might have thought: https://soap.coffee/~lthms/cleopatra/soupault.html#org97bbcd3

Of course this won’t work for interactive demos, but most pages aren’t interactive demos.

1. 9

If I am making a website, there is virtually no incentive to care about people not allowing javascript.

The fact is the web runs on javascript. The extra effort does not really give any tangible benefits.

1. 21

You just proved my point. That is precisely the mechanism by which bloat finds its way into every crevice of software. It’s all about incentives, and the incentives are often stacked against the user’s best interest, particularly if minorities are affected. It is easier to write popular software than it is to write good software.

1. 7

Every advance in computers and UI has been called bloat at one time or another.

The fact of the matter is that web browsers “ship” with javascript enabled. A very small minority actually disable it. It is not worth the effort in time or expense to cater to a group that disables stuff and expects everything to still work.

Am I using a framework?

Most of the time, yes I am. To deliver what I need to deliver it is the most economical method.

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities. But most of the solutions for web accessibility (like screenreaders) have javascript enabled anyhow.

You might get some of what you want with server side rendering.

Good software is software that serves the end user’s needs. If there is interactivity, such as an app, obviously it is going to have javascript. Most things I tend to make these days are web apps. So no, Good Software doesn’t always require javascript.

1. 10

I actually block javascript to help me filter bad sites. If you are writing a blog and I land there, and it doesn’t work with noscript on, I will check what domains are being blocked. If it is just the one I am accessing I will temp unblock and read on. If it is more than a couple of domains, or if any of them are unclear as to why they need to be loaded, you just lost a reader. It is not about privacy so much as keeping things neat and tidy and simple.

People like me are probably a small enough subset that you don’t need our business.

1. 4

Ah, the No-Script Index!

How many times does one have to click “Set all this page to temporarily trusted” to get a working website? (i.e. you get the content you came for)

Anything above zero, but definitely everything above one is too much.

1. 3

The absolute worst offender is Microsoft. Not only is their average NoScript index around 3, but you also get multiple cross-site scripting attack warnings. Additionally, when a site fails to load because of JS not working, it quite often redirects you to another page, so “set temp trusted” doesn’t even catch the one that caused the failure. Often you have to disable NoScript altogether before you can log in; once you’re logged in, you can re-enable it and set the domains to trusted for next time.

That is about 3% of my total rant about why microsoft websites are the worst. I cbf typing up the rest.

2. 3

i do this too, and i have no regrets, only gratitude. i’ve saved myself countless hours once i realized js-only correlates heavily with low quality content.

i’ve also stopped using medium, twitter, instagram, reddit. youtube and gmaps, i still allow for now. facebook has spectacular accessibility, ages ahead of others, and i still use it, after years away.

1. 1

My guess is that a lot of people who use JS for everything, especially their personal blogs and other static projects, are either lazy or very new to web development and programming in general. You can expect such people to be less willing or less able to put the effort into making worthwhile content.

1. 2

that’s exactly how i think it works, and why i’m happy to skip the content on js-only sites.

3. 6

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities.

Why do you care more about disabled people than the privacy conscious? What makes you willing to spend time for accommodations for one group, but not the other? What if privacy consciousness were a mental health issue, would you spend time on accommodations then?

1. 12

Being blind is not a choice: disabling JavaScript is. And using JavaScript doesn’t mean it’s not privacy-friendly.

1. 4

It might be a “choice” if your ability to have a normal life, avoid prison, or not be executed depends on less surveillance. Increasingly, that choice is made for them if they want to use any digital device. It also stands out in many places to not use a digital device.

1. 2

This bears no relation at all to anything that’s being discussed here. This moving of goalposts from “a bit of unnecessary JavaScript on websites” to “you will be executed by a dictatorship” is just weird.

1. 4

You framed privacy as an optional choice people might not need as compared to the need for eyesight. I’d say people need sight more than privacy in most situations. It’s more critical. However, for many people, privacy is also a need that supports them having a normal, comfortable life by avoiding others causing them harm. The harm ranges from social ostracism upon learning specific facts about them to government action against them.

So, I countered that privacy doesn’t seem like a meaningless choice for those people, any more than wanting to see does. It is a necessity for their life not being miserable. In rarer cases, it’s necessary for them to even be alive. Defaulting to privacy as a baseline increases the number of people that live with less suffering.

1. 2

You framed privacy as an optional choice

No, I didn’t. Not even close. Not even remotely close. I just said “using JavaScript doesn’t mean it’s not privacy-friendly”. I don’t know what kind of assumptions you’re making here, but they’re just plain wrong.

1. 3

You also said:

“Being blind is not a choice: disabling JavaScript is.”

My impression was that you thought disabling Javascript was a meaningless choice vs accessibility instead of another type of necessity for many folks. I apologize if I misunderstood what you meant by that statement.

My replies don’t apply to you then: just any other readers that believed no JS was a personal preference instead of a necessity for a lot of people.

2. 3

The question isn’t about whether it’s privacy-friendly, though. The question is about whether you can guarantee friendliness when visiting any arbitrary site.

If JS is enabled then you can’t. Even most sites with no intention of harming users are equipped to do exactly that.

3. 12

Why do you care more about disabled people than the privacy conscious?

Oh, that one is easy: It’s the law.

Being paranoid isn’t a protected class; it might be a mental health issue, but my website has nothing to do with its treatment.

For the regular privacy, you have other extensions and cookie management you can do.

4. 3

You have some good points. One thing I didn’t see addressed is the number of people on dial-up, DSL, satellite, cheap mobile, or other bad connections. The HTML/CSS-type web pages usually load really fast on them. The JavaScript-type sites often don’t, and they can act pretty broken, too. Here are some examples someone posted to HN showing the impact of JavaScript loads.

“If there is interactivity, such as an app, obviously it is going to have javascript. “

I’ll add that this isn’t obvious. One of the old models was the client sending something, server-side processing, and the server returning modified HTML. With HTML/CSS and a fast language on the server, the loop can happen so fast that the user can barely perceive a difference vs a slow, bloated JS setup. It would also work for the vast majority of websites I use and see.

The JS becomes necessary as the UI complexity, interactivity (esp latency requirements), and/or local computations increase past a certain point. Google Maps is an obvious example.
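That older round-trip model can be sketched with Python’s standard library alone. This is a minimal illustration, not anyone’s production setup; the page layout and the `name` form field are made up for the example. The browser posts a plain HTML form, the server returns modified HTML, and no JavaScript is involved:

```python
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def render_page(name=None):
    # Server-side rendering: if the form was submitted, greet the user;
    # otherwise show the form. The whole "UI update" is just new HTML.
    body = (f"<p>Hello, {escape(name)}!</p>" if name else
            '<form method="post"><input name="name"><button>Go</button></form>')
    return f"<!DOCTYPE html><html><body>{body}</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._send(render_page())

    def do_POST(self):
        # Read the urlencoded form body and re-render the page with it.
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode())
        self._send(render_page(form.get("name", [None])[0]))

    def _send(self, html):
        data = html.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

Note that `escape()` also gives you output encoding for free, something hand-rolled client-side templating often forgets.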

1. 3

It is interesting to see people still using dialup. Professionally, I use TypeScript and Angular. The bundle sizes on that are rather insane without much code. Probably unusable on dialup.

However, for my personal sites I am interested in looking at things like svelte mixed with dynamic loading. It might help to mitigate some of the issues that Angular itself has. But fundamentally, it is certainly hard to serve clients when you have apps like you mention - Google Maps. Perhaps a compromise is to try to be as thrifty as can be justified by the effort, and load most of the stuff up front, cache it as much as possible, and use smaller api requests so most of the usage of the app stays within the fast local interaction.

1. 2

<rant>

Google Maps used to have an accessibility mode which was just static pages with arrow buttons – the way most sites like MapQuest worked 15 years ago. I can only guess why they took it away, but now you just get a rather snarky message.

Not only that, but to add insult to injury, the message is cached, and doesn’t go away even when you reload with JS enabled again. Only when you Shift+reload do you get the actual maps page.

This kind of experience is what no-JS browsers have to put up with every fucking day, and it’s rather frustrating and demoralizing. Not only am I blocked from accessing the service, but I’m told that my way of accessing it is itself invalid.

Sometimes I’m redirected to rather condescending “community” sites that tell me step by step how to re-enable JavaScript in my browser, which by some random, unfortunate circumstance beyond my control must have become disabled.

All I want to say to those web devs at times like that is: Go fuck yourself, you are all lazy fucking hacks, and you should be ashamed that you participated in allowing, through action or inaction, this kind of half-baked tripe to see the light of day.

My way of accessing the Web is just as valid as someone’s with JS enabled, and if you disagree, then I’m going to do everything in my power to never visit your shoddy establishment again.

</rant>

Edit: I just want to clarify, that this rant was precipitated by other discussions I’ve been involved in, my overall Web experience, and finally, parent comment’s mention of Google Maps. This is not aimed specifically at you, @zzing.

2. 9

It shouldn’t be extra effort, is the point. If you’re just writing some paragraphs of text, or maybe a contact form, or some page navigation, etc etc you should just create those directly instead of going through all the extra effort of reinventing your own broken versions.

1. -2

Often the stuff I am making has a lot more than that. I use front end web frameworks to help with it.

Very few websites today have just text or a basic form.

1. 10

Ok, well, that wasn’t at all clear since you were replying to this:

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript.

Many websites I see fit this description. They’re not apps, they don’t have any “behaviour” (at least none that a user can notice), but they still have so much JS that it takes over 256MB of RAM to load them up and with JS turned off they show a blank white page. That’s the topic of this thread, at least by the OP.

1. 0

Very few websites today have just text or a basic form.

Uhh… Personal websites? Blogs? Many of the users here on Lobsters maintain sites like these. No need to state falsehoods to try and prove your point; there are plenty of better arguments you could be making.

As an aside, have you seen Sourcehut? That’s an entire freakin’ suite of web apps which don’t just function without JavaScript but work beautifully. Hell, Lobsters almost makes it into this category as well.

2. 1

I’m trying to learn more about accessibility, and recently came across a Twitter thread with this to say: “Until the platform improves, you need JS to properly implement keyboard navigation”, with a couple video examples.

1. 2

I think that people that want keyboard navigation will use a browser that supports that out of the box, they won’t rely on each site to implement it.

1. 2

The world needs more browsers like Qutebrowser.

2. 1

Some types of buttons, menus, text and images aren’t implemented in plain HTML. These kinds should still be built in JS. For instance, 3-state buttons. There are CSS hacks to make a button appear 3-state, but no way to define behavior for them without JS. People can hack together radio inputs to look like a single multi-state button, but that’s a wild hack that most developers aren’t going to want to tackle.

1. 9

Completely tech-unrelated, but I started again to learn to draw, with some more decent resources and peeps to talk about art to. Hope I’ll manage to stay motivated for long enough to not burn out this time

1. 3

I go on and off drawing, partly learning with some resources and trying to abstract the concepts behind what I learned, partly just having fun with an idea. It is the most balanced way I have found to keep drawing even when I take long breaks from it. When I draw for me, I try to draw a bit like we can do automatic writing. I don’t have a mental picture or a precise idea; I just try to let it out and get into the “emotional flow”.

It depends what you want to achieve with it too. More technical drawings or artistic or just to express yourself?

1. 2

I can’t draw at all. Many times I’d have loved to so others could picture what I saw or imagined. I envy folks that can do it.

Do you or anyone else here have resources for beginners who are definitely non-artists wanting to get something out? I’d appreciate them. :)

1. 1

I’ve started to pick up drawing on my Surface. Concept art is a great place to start as it can be as messy as you want it to be. I’ve been following this guy’s youtube channel

Another good tip is to practice by drawing stuff in your home / environment.

1. 1

I asked a few friends and threads about resources, here’s a list I compiled of all the resources mixed

1. 6

I think there are two aspects to this. Below, I will use the now old-fashioned term RIA (Rich Internet Application) to refer to the “mutated application runtime”: its functionality, not its implementation.

Replying to “HTML, which started as document markup, should never have grown into RIA”, the author basically explains RIA-less HTML wouldn’t be much simpler, nor would it be much more efficient. In other words, the post is entirely about document, not RIA.

In my experience, when the argument is brought up, it is usually about RIA, not document: HTML-less RIA, not RIA-less HTML. HTML-less RIA, a legacy-free RIA implementation designed from scratch for RIA needs, could be simpler and more efficient. There is also no backward compatibility need here. Writing a cross-platform application runtime is a big task, so it isn’t easy, but the task is not helped by the need to serve document markup legacy and the web compatibility burden.

Flutter is an attempt to create HTML-less RIA. I doubt the author thinks Flutter does not make sense; it clearly does. Now, once we have HTML-less RIA, RIA-less HTML could save the time spent specifying and implementing an endless stream of APIs necessary for RIA, and focus on its already awesome styling, layout, and rendering of documents. I agree it wouldn’t be much simpler nor much more efficient, but it would still help greatly. This is why I feel the argument and the reply in the post are talking past each other.

1. 4

I think what the author is doing is responding to the many people out there on the Internet who treat this as a throwaway line (on HN for instance). I read most of them as asking for a RIA-less-HTML, and I think this is a good criticism of that idea.

I don’t know which of us is right about what people who use this line are asking for.

1. 2

You mentioned HN, so let’s try some empiricism. This article just hit HN front page. https://news.ycombinator.com/item?id=23599734 is a typical response. Note that it is entirely about RIA and whether DOM is a good basis for RIA, not about document, as I predicted.

2. 1

HTML-less RIA, legacy free RIA implementation designed from scratch for RIA need, could be simpler and more efficient.

I think our GUI builders like VB6 and Lazarus already implied this by their features vs footprint compared to web offerings. For more apples-to-apples, I also like to bring up Sciter because it’s so much more efficient than Electron etc. We could definitely do better than HTML and web browsers if we just wanted to render content efficiently. Its dominance is a legacy and/or ecosystem effect, not technical superiority, at this point.

Edit to add: I’ll add that OSs like MenuetOS fit a whole system on a floppy. Nobody’s building RIAs like that for various reasons, esp. productivity. It does imply our platforms, or the supporting libraries that the RIAs run on, could be much leaner. I’m thinking something like a GUI builder combined with a runtime as lean as MenuetOS.

1. 6

It’s probably only because Rust is so unreadable that they didn’t find anything. /s

On a serious note, no matter what you think about Rust, more diversity in the realm of TLS libraries is a good thing. Just like BearSSL, rustls offers a way to escape the de facto OpenSSL monoculture.

1. 2

It’s probably only because Rust is so unreadable that they didn’t find anything. /s

I see the /s, but is that a common criticism? I genuinely do not know how Rust is perceived other than the hype/enthusiasm.

1. 3

Speaking from experience, there’s technically nothing wrong with Rust’s syntax. In fact, there’s lots of great stuff about it, like types that remain human-readable even when they’re complex (nested arrays and function pointers are easy). Greppable fn keyword for function definitions is very handy too.

However, Rust tries to look like C, but has syntax details significantly different from C. I suspect it gives an “uncanny valley” impression to users coming from C-family languages. Rust doesn’t need as many round parens, but requires more braces: if true {}. Rust has generics, which sometimes sprinkle the code with lots of <T>. This might affect overall aesthetics of the code, but I don’t find anything that would be objectively unreadable about that.

1. 1

I’ve seen many say it’s hard to learn, but I don’t see that claim much about readability. All the changes going on in the Rust ecosystem, esp. libraries, suggest folks can read the code fine. Suggests, not proves.

1. 4

I found this video particularly interesting given the discussion also occurring right now on the SQLite As An Application File Format thread. The conclusion of that argument on SQLite’s own webpage is:

SQLite is not the perfect application file format for every situation. But in many cases, SQLite is a far better choice than either a custom file format, a pile-of-files, or a wrapped pile-of-files. SQLite is a high-level, stable, reliable, cross-platform, widely-deployed, extensible, performant, accessible, concurrent file format. It deserves your consideration as the standard file format on your next application design.

Obviously, the more complex we make things, the broader the attack surface gets. I find it helpful to ask why something was created and whether what I’m trying to use it for matches that purpose. In the case of SQLite, I never would have thought twice about running a query against a SQLite file until watching this video. But putting a database in the place of other formats would seem odd to me, and this video helps reinforce some of the benefits of trying to be as simple as possible. To tie in another current thread, this is why my websites have reverted back to static HTML pages: lower attack surface and cheaper hosting due to fewer computing requirements.
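For anyone curious what “SQLite as an application file format” looks like in practice, here is a minimal sketch using Python’s built-in sqlite3 module. The schema and function names are invented for illustration; the point is that save/load, partial updates, and crash safety all come from the engine instead of a hand-rolled format:

```python
import sqlite3

def save_doc(path, title, pages):
    # Persist a "document" as a single SQLite file instead of a custom
    # binary format or a pile-of-files. Transactions give atomic saves.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS pages (n INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT OR REPLACE INTO meta VALUES ('title', ?)", (title,))
    con.executemany("INSERT OR REPLACE INTO pages VALUES (?, ?)",
                    enumerate(pages, start=1))
    con.commit()
    con.close()

def load_doc(path):
    # Reading back is just queries; no custom parser to get wrong.
    con = sqlite3.connect(path)
    title = con.execute("SELECT value FROM meta WHERE key='title'").fetchone()[0]
    pages = [body for (body,) in con.execute("SELECT body FROM pages ORDER BY n")]
    con.close()
    return title, pages
```

A nice side effect is that any standard tooling (the sqlite3 CLI, GUI browsers) can inspect or repair the file, which a custom format never gets for free.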

1. 2

“But putting a database in the place of other formats”

It is designed to handle issues, like filesystem failures, that most developers don’t even know how to handle. Many won’t do it. Then, SQLite became complex. Then, we saw how they tested it. We said to ourselves, “Wow! There’s no way anything we quickly throw together will be that reliable. Let’s just use SQLite.” Then, some did. :)

“I never would have thought twice about running a query against a SQLite file”

Almost all code is designed assuming the inputs are non-malicious. Many programs check for faulty input, which has some overlap. Evil inputs might exploit that, or use more insidious behaviors, to force the system to do what it wasn’t designed for. You should assume these two things:

1. All systems are insecure unless designed otherwise with rigorous, proven methods with an independent evaluation by expert breakers. All apps, especially but not limited to memory-unsafe, may fail insecurely if fed malicious input. They have to be explicitly designed to enforce security properties and/or stop classes of attack.

2. “Attacks only get better.” (Schneier) New classes of attack will occur. So, you should use mitigations like OpenBSD’s to potentially combat them, damage containment, monitor for odd behavior, have read-only backups of critical data, and a battle-tested way of restoring the system.

Apply these principles to every piece of hardware, OS kernel, library, or application. It’s always true unless there’s a counterexample I’m forgetting.

1. 1

But putting a database in the place of other formats would seem odd to me

I think this is unfair in the context that many file formats are attempts at reimplementing a DBMS with relational objects. Some formats are simple, but many are complex enough and have yielded their fair share of CVEs. The way I read the SQLite As An Application File Format article is: instead of reimplementing yet another DBMS, why not reuse a battle-tested engine?

On the other hand, this talk raises some questions about the SQLite threat model when dealing with untrusted database input. A “simple” alternative could be to split SQLite into multiple processes, where a restricted “server” handles the database and the client process uses a simple protocol to send queries and read results. In most cases, file parsing is where exploitation happens, and doing it in an unprivileged process is simply good practice.

1. 20

Of course everyone is free to spend as much money as they like, but if you want to start a blog and self-host, and might be discouraged, please let me give you another estimate that should 100% cover your needs:

• Cloud VPS to host your blog: 3 EUR per month (Hetzner / Scaleway / whatever)
• Domain: 12 EUR per year.

And then you still have plenty of resources left to run stuff on your VPS.

1. 12

And in case you decide to go with a static site, Netlify has an extremely generous free tier which would waive those 3€ per month as well.

1. 3

Supporting your point, I have a non-optimized web app written in Python with plain HTML and CGI serving people daily at under 30% utilization of a $5 VM. A static, cached website offloading to a CDN might be even cheaper.

1. 3

You can get a VPS for free (and a domain as well), check out: https://matrix.org/docs/guides/free-small-matrix-server#get-a-free-server (yes, I wrote that page).

1. 4

If something is free, you’re the product. ;-)

1. 7

This isn’t like facebook/whatsapp/google (well, some of their services) where you cannot pay for the services. It’s a freebie to get you hooked. Start using it, then discover you need more but don’t have the time/effort/resources to move someplace else, so you need to start paying to grow.

1. 1

I became really disenchanted with the US engineering program I went through when I found out that they only taught us to use $1000+ software titles. Not that open source existed for some of those titles then or now, but I felt a ton like the product…

2. 2

It’s actually really impressive that Oracle gives enough to run an actual HA service. Scaling from one node to two is the core of any HA system. Terraform even has the free tier all coded up (copyright Oracle, obviously): https://github.com/terraform-providers/terraform-provider-oci/blob/master/examples/always_free/main.tf

3. 2

Good point.

You can do things even cheaper if you use plain html/css files. I paid $37 on nearlyfreespeech, but I could’ve shaved off another ~$15 if I only had one site instead of two.

Bandwidth has never been a concern, but if it is, Cloudflare has a free plan.

1. 3

I think a static blog can easily be hosted on netlify/git(hub|lab) pages for free

1. 2

I just now realized I didn’t specify a time frame. Whoops. That’s $37 for all of 2019, or $3 a month.

2. 1

Depending on how important it is for people to self-host, one might reconsider and use services like neocities, SDF or one of the many friendly tilde communities. True, you don’t get to decide that much, but you can still learn a lot under constraints, that you can then apply if you reconsider again later on and “self-host” (though that’s not always the right term with VPS’s).

1. 1

I’ve seriously considered hand writing a blog on Neocities, but my current blog takes enough time as it is without having to hand code the entire thing. Would be a lot of fun though.

1. 1

As if you can’t use an SSG with neocities. ;)

2. 1

With HTML and CSS knowledge one can just set up a static site.

Of course it’s not as convenient as logging into a CMS but unless you have loads of traffic it will be free, most likely forever.

1. 3

Just a quick meta-reminder from the submission Guidelines:

When submitting a URL, the text field is optional and should only be used when additional context or explanation of the URL is needed. Commentary or opinion should be reserved for a comment, so that it can be voted on separately from the story.

1. 4

When submitting a URL

To my eyes, this post is not a URL submission. I can also appreciate the concern though. But it seems like there is no main page for this project, so the best way to do it is with the text field.

1. 3

It’s a Show Lobsters. I suggested that modification through the system.

1. 2

Oops, you’re right I missed that. Ignore the above comment then.

1. 2

The Hamler 0.1 compiler was initially implemented based on GHC 8.10.1, but was later changed to adapt from the Purescript Compiler 0.13.6 implementation.

Interesting choice.

1. 2

They said more here.

1. 4

I’ve written some Go and some Rust. I feel like I usually enjoy Rust more, though I struggle to explain why.

I think, for Rust, I find the error handling really ergonomic. Using ? in a function that does a bunch of things that can fail is just so much nicer than having every other line be an if err != nil { return err }. I also find it easier to follow how references work in Rust, oddly enough. And using modules through Cargo is just so nice, while Go modules are kind of a messy hack in comparison. Oh, and the macros are just so nice too.

But on Go’s side, Go concurrency is really awesome and smooth, especially compared to the half-complete hacks that are tokio and the Rust async system. Did I mention how nice the built-in channels are, and how a bunch of places in the standard lib use them? And easy cross-compilation is pretty nice too. And you gotta love that massive standard library. And I suppose not having to wrestle with complex too-clever generic hierarchies is nice sometimes too.

1. 16

side-note: i think it’s a bit off-topic (and meme-y, rust strike force, etc. :) to compare to rust when the article only speaks of go :)

Using ? in a function that does a bunch of things that can fail is just so much nicer than having every other line be an if err != nil { return err }.

i really like the explicit error handling in go and that there usually is only one control flow (if we ignore “recover”). i guess that’s my favorite go-feature: i don’t have to think hard about things when i read them. it’s a bit verbose, but that’s a trade-off i’m happy to make.

1. 7

i really like the explicit error handling in go

I would argue that Go’s model of error handling is a lot less explicit than Rust’s - even if Go’s is more verbose and perhaps visually noticeable, Rust forces you to handle errors in a way that Go doesn’t.

1. 1

I have just read up on Rust’s error handling; it seems to be rather similar, except that return values and errors are put together as a “Result”: https://doc.rust-lang.org/book/ch09-00-error-handling.html

my two cents: i like that i’m not forced to do things in go, but missing error handling sticks out as it is unusual to just drop errors.

1. 4

Well since it’s a result, you have to manually unwrap it before you can access the value, and that forces you to handle the error. In Go, you can forget to check err for nil, and unless err goes unused in that scope, you’ll end up using the zero value instead of handling the error.

1. 1

i like that i’m not forced to do things in go, but missing error handling sticks out as it is unusual to just drop errors

The thing is, while it may be unusual in Go, it’s impossible to “just drop errors” in Rust. It’s easy to unwrap them explicitly if needed, but that’s exactly my point: it’s very explicit.

2. 3

The explicit error handling is Very Visible, and thus it sticks out like a sore thumb when it’s missing. This usually results in better code quality in my experience.

1. 2

It did occur to me that it may come off like that :D It’s harder to make interesting statements about a language without comparing it to its peers.

IMO, Rust and Go being rather different languages with different trade-offs that are competing for about the same space almost invites comparisons between them. Kind of like how tempting it is to write comparisons between Ruby, Python, and Javascript.

1. 1

I think Swift fits in quite well in-between. Automatic reference counting, so little need to babysit lifetimes, while using a powerful ML-like type system in modernised C-like syntax.

2. 15

But on Go’s side, Go concurrency is really awesome and smooth

Concurrency is an area where I feel Go really lets the programmer down. There is a simple rule for safe concurrent programming: no object should be both mutable and shared between concurrent execution contexts at the same time. Rust is not perfect here, but it uses the unique ownership model and the Send trait to explicitly transfer ownership between threads so you can pass mutable objects around, and the Sync trait for safe-to-share things. The only things that are safe to share in safe Rust are immutable objects. You can make other things adopt the Sync trait if you’re willing to write unsafe Rust, but at least you’re signposted that here be dragons. For example, the Arc type in Rust (atomic reference counting) gives you any number of read-only shared references to an object, plus the ability to get a mutable reference only when no other references are outstanding.

In contrast, when I send an object down a channel in Go, I still have a pointer to it. The type system gives me nothing to help avoid accidentally aliasing an object between two threads. To make things worse, the Go memory model is relaxed consistency atomic, so you’re basically screwed if you do this. To make things even worse, core bits of the language semantics rely on the programmer not doing this. For example, if you have a slice that is in an object that is shared between two goroutines, both can racily update it. The slice contains a base and a length and so you can see tearing: the length from one slice and the base from another. Now you can copy it, dereference it and read or write past the end of an array. This is without using anything in the unsafe package: you can violate memory safety (let alone type safety) purely in ‘safe’ Go, without doing anything that the language helps you avoid.

I wrote a book about Go for people who know other languages. It didn’t sell very well, in part because it ended up being a long description of things that Go does worse than other languages.

1. 2

That’s a worthwhile point. I haven’t been bitten by the ability to write to Go objects that have already been sent down a channel yet, but I haven’t worked on any large-scale, long-term Go projects. I’ve found it straightforward enough to just not use objects after sending. But then, the reason why we build these fancy type systems with such constraints is that even the best developers have proved to be not very good at consistently obeying these limits on large-scale projects.

I’m hoping that the Rust issues with async and tokio are more like teething pains for new tech than a fundamental issue, and that eventually, it will have concurrency tools that are both as ergonomic as Go’s and use Rust’s thread safety rules.

1. 5

I’ve found it straightforward enough to just not use objects after sending.

This is easy if the object is not aliased, but that requires you to have the discipline of linear ownership before you get near the point that sends the object, or to only ever send objects allocated near the sending point. Again, the Go type system doesn’t help at all here, it lets you create arbitrary object graphs with N pointers to an object and then send the object. The (safe) Rust type system doesn’t let you create arbitrary object graphs and then gives strong guarantees on what is safe to send. The Verona type system is explicitly designed to allow you to create arbitrary (mutable or immutable) object graphs and send them safely.

2. 9

And using modules through Cargo is just so nice, while Go modules is kind of a messy hack in comparison.

I have always found Rust’s module system completely impenetrable. I just can’t build a mental model of it that works for me. I always end up just putting keywords and super:: or whatever in front in various combinations until it happens to work. It reminds me of how I tried to get C programmes to compile when I was a little kid: put more and more & or * in front of expressions until it works.

And of course they changed in Rust 2018 as well which makes it all the more confusing.

1. 3

Yeah, I’ve had the same experience. Everything else about Cargo is really nice, but modules appear to be needlessly complicated. I have since been told that they are complicated because they allow you to move your files around in whatever crazy way you prefer without having to update imports. Personally I don’t think this is a sane design decision. Move your files, find/replace, move on.

1. 2

And of course they changed in Rust 2018 as well which makes it all the more confusing.

One of the things they changed in Rust 2018, FYI, was the module system, in order to make it a lot more straightforward. Have you had the same problem since Rust 2018 came out?

2. 6

For me Go is the continuation of C with some added features like CSP. Rust is/was heavily influenced by the ML family of languages, which is extremely nice. I think the ML group is superior in many ways to the C group. ADTs are the most trivial example of why.

1. 4

I generally agree. I like ML languages in theory and Rust in particular, but Rust and Go aren’t in the same ballpark with respect to developer productivity. Rust goes to impressive lengths to make statically-managed memory user-friendly, but it’s not possible to compete with GC. It needs to make up the difference in other areas, and it does make up some of the difference in areas like error handling (?, enums, macros, etc and this is still improving all the time), IDE support (rust-analyzer has been amazing for me so far), and compiler error messages, but it’s not yet enough to get into a competitive range IMO. That said, Rust progresses at a remarkable pace, so perhaps we will see it get there in the next few years. For now, however, I like programming in Rust–it satisfies my innate preference to spend more time building something that is really fast, really abstract, and really correct–but when I need to do quality work in a short time frame in real world projects, I still reach for Go.

1. 9

To me Go seems like a big wasted opportunity. If they’d only taken ML as a core language instead of a weird C+gc hybrid, it would be as simple (or simpler) as it is, but much cleaner, without nil or the multi-return hack. Sum types and simple parametric polymorphism would be amazing with channels. All they had to do was to wrap that in the same good toolchain with fast compilation and static linking.

1. 2

Yeah, I’ve often expressed that I’d like a Go+ML-type-system or a Rust-lite (Rust with GC instead of ownership). I get a lot of “Use OCaml!” or “Use F#”, but these miss the mark for a lot of reasons, but especially the syntax, tooling, and ecosystem. That said, I really believe we overemphasize language features and under-emphasize operational concerns like tooling, ecosystem, runtime, etc. In that context, an ML type system or any other language feature is really just gravy (however, a cluster of incoherent language features is a very real impediment).

1. 1

Nothing is stopping anyone from doing that. I’d add that they should make FFI to C, Go, or some other ecosystem as easy as Julia does, for the win. I recommend that for any new language to solve the performance and bootstrapping problems.

2. 3

Then, you have languages like D that compile as fast as Go, run faster with LLVM, have a GC, and recently an optional borrow checker. Contracts, too. You get super productivity followed by as much speed or safety as you’re willing to put in effort for.

Go is a lot easier to learn, though. The battle-tested, standard libraries and help available on the Internet would probably be superior, too.

1. 4

I hear a lot of good things about D and Nim and a few others, but for production use case, support, ecosystem, developer marketshare, tooling, etc are all important. We use a lot of AWS services, and a lot of their SDKs are Python/JS/Go/Java/dotnet exclusively and other communities have to roll their own. My outsider perspective is that D and Nim aren’t “production ready” in the sense that they lack this sort of broad support and ecosystem maturity, and that’s not a requirement I can easily shrug off.

1. 2

I absolutely agree. Unless easy to handroll, those kind of things far outweigh advantages in language design. It’s what I was hinting at in 2nd paragraph.

It’s also why it’s wise for new languages to plug into existing ecosystems. Clojure on Java being best example.

1. 6

“With an open-source implementation, you see what you get”

Just wanted to note this is not true at all for hardware. The synthesis tools, usually two in combination, convert the high-level form into low-level pieces that actually run. They’re kind of like Legos for logic. Like with a compiler, they might transform them a lot to optimize. They use standard cells that are usually secret. Then, there’s analog and RF functionality that might have errors or subversions with fewer experts that know anything about it. Finally, there’s the supply chain from masks to fab to packaging to you.

With hardware, you have no idea what you actually got unless you tear it down. If it’s deep sub-micron, you have to trust one or more other companies during the tear-down process. This excludes the possibility that they can make components look like other components in a tear-down. Idk if that’s possible, but I figure I should mention it.

When I looked at that problem, my solution was that the core, or at least a checker/monitor, had to be at 350nm or above so a random sample could be torn down for visual inspection. The core would be designed like VAMP with strong verification. Then, synthesis (e.g. Baranov’s) to a lower-level form with verified transforms, followed by equivalence checks (formal and/or testing). The cells, analog, and RF would be verified by mutually-suspicious experts. Then, there were some methods that can profile the analog/RF effects of onboard hardware to tell if it’s swapped out at some point. Anyway, this is the start, with open (or vetted + NDA) cells, analog, and RF showing up over time, too. Some already are.

1. 7
1. 2

I’m not a big fan of making critiques based on stuff that is explicitly outside of their security model. From my understanding, the formal verification of side channel for RISC-V would catch Spectre-style attacks: researchers implemented Spectre-like vulnerabilities into RISC-V designs which still conformed to the specification.

Yes, you can backdoor compilers, microcode, and hardware. But that’s not far from the generic critique of formal methods based on Godel’s incompleteness theorem. seL4 is the only operating system that makes it worth our time to finally start hardening the supply chain against those types of attacks.

1. 3

I normally agree. However, they were pushing seL4 on ARM as a secure solution. You can’t secure things on the ARM offerings currently on the market. So, it’s a false claim. The honest one is that it gives isolation except for hardware attacks and/or faults. For many, that immediately precludes using it. I’d rather they advertise honestly.

A side effect is that it might increase demand in secure hardware.