What about an extra slash after the item type?
Gopher URLs have a specific format: the first character of the path is the item type, which isn’t transmitted to the server. The part after it is the selector.
My Gopher URL encodes the following selector, which is the correct one:
2017-02-02
Your Gopher URL encodes the following selector, which is incorrect; some servers do have selectors beginning this way, but mine doesn’t:
/2017-02-02
My Gopher server is currently too lenient about what it accepts, but I expect to change this; at that point, selectors that begin with ‘/’ and requests that, say, use Line Feed instead of Carriage Return and Line Feed will stop working.
Indeed!
We can see it here (at 2.1. Gopher URL Syntax).
A Gopher URL takes the form:
gopher://<host>:<port>/<gopher-path>
where <gopher-path> is one of:
<gophertype><selector>
<gophertype><selector>%09<search>
<gophertype><selector>%09<search>%09<gopher+_string>
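To make the distinction above concrete, here’s a minimal sketch in Python (the helper name and example URLs are mine, not from the RFC) of how a client derives the selector from a Gopher URL: the first character of <gopher-path> is the item type, and only what follows it is transmitted.

from urllib.parse import urlparse

def gopher_selector(url):
    # Per the grammar above, <gopher-path> begins with a one-character
    # item type; only the remainder is the selector actually sent.
    path = urlparse(url).path
    if len(path) <= 1:
        return ""                     # no path at all: empty selector
    return path[2:]                   # skip the "/" and the item type

print(gopher_selector("gopher://example.net/02017-02-02"))   # -> 2017-02-02
print(gopher_selector("gopher://example.net/0/2017-02-02"))  # -> /2017-02-02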
[edit]: I suppose this is the gopher server? http://verisimilitudes.net/gophershell
Simple and efficient!
I would love to read the implementation but I think there’s some encoding problem. None of the APL symbols are rendered properly for me; for example, I see â where I would expect ⍝.
The link is written in this way:
<a charset="UTF-8" href="masturbation.apl">implementation</a>
This didn’t correct it, however. Your browser probably gives you the option to change the character encoding of a document manually, but I’ll change the link to behave properly if it’s a matter of changing this tag.
Your server isn’t sending a content type or encoding header with the page itself. The charset attribute on the anchor isn’t supported by any browser. I don’t know of a way to change the encoding client-side in mobile Safari, but you are right it can be changed in most desktop browsers.
As @spc476 said, the best way to correct it is to configure Apache to deliver files with the .apl extension with a Content-type: text/plain; charset=utf-8 header.
Another way to fix it with Apache is to add an AddDefaultCharset directive to the configuration.
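For anyone wanting to try either approach, a sketch of what the relevant directives might look like (AddType, AddCharset, and AddDefaultCharset are real Apache directives; the .apl mapping itself is the assumption here):

# Map the .apl extension to plain text and declare its encoding:
AddType text/plain .apl
AddCharset utf-8 .apl

# Or, more bluntly, default the charset for every text response:
AddDefaultCharset utf-8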
I find RISC misguided. The RISC design was created because C compilers were stupid and couldn’t take advantage of complex instructions, and so a stupid machine was created. The canonical RISC, MIPS, is ugly and wasteful, with its branch-delay slots and large instructions that do little.
RISC, no different than UNIX, claims simplicity and small size, but accomplishes neither and is worse than some ostensibly more complex designs.
This isn’t to write all RISC designs are bad; SuperH is nice from what I’ve seen, having many neat addressing modes; the NEC V850 is also interesting with its wider variety of instruction types and bitstring instructions; RISC-V is certainly a better RISC than many, but that still doesn’t save it from the rather fundamental failings of RISC, such as its arithmetic instructions designed for C.
I think, rather than make machines designed for executing C, the “future of computing” is going to be in specialized machines that have more complex instructions, so CISC. I’ve read of IBM mainframes that have components dedicated to accepting and parsing XML to pass the machine representation to a more general component; if you had garbage collection, bounds checking, and type checking in hardware, you’d have fewer and smaller instructions that achieved just as much.
The Mill architecture is neat, from what I’ve seen, but I’ve not seen much and don’t want to watch a YouTube video just to find out. So, I’m rather staunchly committed to CISC, and extremely focused CISC, as the future of things, but we’ll see. Maybe the future really is making faster PDP-11s forever, as awful as that seems.
“The RISC design was created because C compilers were stupid and couldn’t take advantage of complex instructions”
No. Try Hennessy/Patterson’s Computer Architecture for a detailed explanation of the design approach.
Patterson:
we chose C and Pascal since there is a large user community and considerable local expertise.
You:
The RISC design was created as a compensation of sorts for the fundamentally poor C language.
Ok. Because Pascal == C ?
Patterson
Another reason why RISC is outperforming the VAX is that the existing C compilers for the VAX are not able to exploit the existing architecture effectively.
You
I want to specifically mock this sentence; this was true for C compilers, but most languages are well-designed and didn’t have this issue:
Oh that’s great to hear. Can you point me to either 1980s studies of “well designed” languages that were able to exploit VAX brilliantly? Or even current studies on how “well designed languages” exploit CISC instructions. Love to know. Patterson’s argument at the time was that compilers, in general, could not exploit the complex instructions of e.g. VAX. VAX had a single instruction to evaluate polynomials: love to know which “well designed languages” exploited POLY and what performance improvement they showed.
Maybe you could even go as far as naming those anonymous “well designed languages” that you have in mind? In 1980, maybe you mean COBOL, Ada, Bliss, PL/1, Snobol?
Currently, maybe you want to point to the blazing CISC performance of ??? Java? Python? Haskell?
The Itanium designers had similar ideas to what you seem to have. On the other hand, all modern CPUs are RISC-like (x86 translates CISC into RISC in the instruction decode step).
Ok. Because Pascal == C ?
To be honest, pretty much. Some details differ, but they’re both procedural languages with similarly unsafe memory management. C may have more weird stuff with pointers (pointer arithmetic and function pointers), but they have a lot in common.
Now Modula and Oberon (which by Wirth’s own words should have been named “Pascal 2” and “Pascal 3”) may be different.
On the other hand, all modern CPUs are RISC-like (x86 translates CISC into RISC in the instruction decode step).
Reading the Reddit thread disabused me of that notion. Microcode and micro-ops are nothing like RISC, or CISC, or any other ISA for that matter. That internal stuff tends to be highly regular (even more regular than RISC instructions), and it is often about doing stuff that’s internal to the CPU, without necessarily a direct correspondence to the programming model of the CPU. Register renaming, for instance, is about mapping user-visible aliases to actual registers. I guess that micro-operations are about routing data to an actual register, not its user-visible alias.
Ok, so if you say RISC was designed with procedural Algol/FORTRAN-like languages in mind, I’ll agree.
In an instruction set like the x86, where instructions vary from 1 byte to 15 bytes, pipelining is considerably more challenging. Recent implementations of the x86 architecture actually translate x86 instructions into simple operations that look like MIPS instructions and then pipeline the simple operations rather than the native x86 instructions! - Hennessy-Patterson, Computer Organization and Design.
Remember that the whole philosophy behind all modern x86 CPUs, since the P6, is to decode x86 instructions into RISC-y micro-ops which are then fed to a fast RISC backend; the backend then schedules, issues, executes and retires the instructions in a smooth RISC way. - https://www.anandtech.com/show/1998/3
There is an example in https://www.agner.org/optimize/microarchitecture.pdf
I wouldn’t trust the journalism piece too much, but that manual looks mighty interesting, thanks.
And of course instructions are broken down into simpler parts. x86 is crazy no matter how you look at it, so it’s not surprising steps are taken to make it less crazy. I’d even go as far as to guess that it makes more sense to cache decoded x86 instructions than it does to cache decoded RISC-V instructions: the latter are just so much easier to decode that it may cost less to decode them several times than to keep them in expanded form in the instruction cache.
Just saying that, from what I’ve heard, there’s a difference between RISC-like instructions and how an instruction (of any kind) is broken down for pipelining and such. To be honest, though, I just don’t know. I have yet to seriously check it.
This isn’t a bad article, as I like how it suggests restructuring to avoid the issue, but I feel this is still a much better analysis of the particular issue.
I really didn’t like the article you linked when I read it some time ago. I don’t see where it analyzes the issue like you say, and the advantage it advertises can be gained just by linting the C code and forcing braces. Extra braces are a compile error in C just like an extra end if is in Ada. I really like Ada, but this has to be the worst reason I ever read to use it.
Firstly, I try to avoid having accounts at all. This Lobsters account had been the first one I’d made in several years, I believe. Accounts that I’d made in the past and regard as useless I’ve deleted to the best of my ability, not that there were many to start with. You probably don’t have a list of accounts you could go through and manage, do you?
Anyway, I host my own email on my own domain, but this isn’t what I use AFK. I’m a Free Software Foundation member and so I give others, including businesses, my name@member.fsf.org email address and this has served me well, as I can very easily have it point to any address, which allowed me to seamlessly transfer it from one email provider to my self-hosting without any issue.
In brief, my advice boils down to this: have an email address you can point at any other email address, and start using that one.
Genuinely curious: You claim to have very few accounts. Do you just not use online services/sites?
Do you just not use online services/sites?
That’s exactly what I do. In general, I won’t use something if it requires me to make an account. I have a list of online accounts I have that are still active, including government accounts and whatnot, and it’s roughly ten or so, most of which simply haven’t been killed yet.
So, using domain names as if they’re Twitter hashtags is continuing, I see.
Again, there’s not a mention of Gmail and its malicious ways here.
After this individual stepped in to provide this “service”, the ability to contribute to the mailing lists using e-mail with HTML ceased.
That’s hilarious.
Anyway, I find it amusing this article is written without HTML, to counter that other one. Say, perhaps I should register a domain and participate in this dumbassery. It looks like an easy way to get attention, since everyone loves to share their opinions and personal preferences and it’s just technical enough for people to feel smart while discussing it. No, I won’t.
So, using domain names as if they’re Twitter hashtags is continuing, I see.
It’s way easier to remember “stop-gatekeeping.email” than “www.example.com/doc/wp/2019-07-24/stop-gatekeeping-email-6655321.html”.
Say, perhaps I should register a domain and participate in this dumbassery. It looks like an easy way to get attention, since everyone loves to share their opinions and personal preferences and it’s just technical enough for people to feel smart while discussing it.
You’re rude.
I think the argument that others’ arguments are all nonsensical tribalism is probably worth rejecting as well. You should use whatever width you like in your projects.
I like 80 columns because I can fit six vertical terminals of that size side by side across my display with a reasonable font size, and because that’s generally about the right number of characters per line for comfortable reading. I think it helps to pick a standard width, and 80 is wide enough to be useful but not so wide as to force users on smaller displays, or those using many splits, to scroll or rewrap. It makes three-way diffs easier to fit on screen, too.
That some punch cards, typewriters, line printers, and video terminals also happened to be that wide is somewhere between coincidence and interesting trivia at this point. Many were also wider or narrower.
Perhaps I should write an article distilling my ideas concerning this.
A good programmer is likely going to be a hacker (not a cracker). A hacker is someone with, among other qualities, a keen sense of aesthetics and creativity. A hacker is going to implement software he designed himself, likely for his own use. You can be a good programmer just writing the same things others do, with libraries others wrote, and other such things, but I’d be inclined to classify that as average rather than good, which I’m considering above average.
A hacker is probably going to design and implement libraries for his own purposes, rather than reuse something someone else wrote, but this is debatable. A hacker should have a genuine interest in the topic, so a hacker is likely going to know a wide variety of languages, learning more as mastery of one is achieved. I’d be inclined to argue a hacker will work with languages such as Lisp, APL, and machine codes more than Go, C++, and Java; note that the former group is filled with variety, whereas the latter group is roughly the same language.
You can be a hacker in isolation, but a hacker is likely going to have some manner of home group of sorts. A hacker probably spends all or most errant thought mulling over the topics of interest. If you’re a programmer just for your job and you don’t think of it much or at all outside, then you can be an average programmer, but not a hacker.
Tying in with the creativity and whatnot mentioned earlier, a hacker is going to create novel things and be interested in potentially obscure things. I’m a hacker and I have a tiny little esoteric language and work with CHIP-8 a good amount; I work on a novel machine code development tool I’ve designed. This isn’t boasting, but merely examples.
As you can guess, I’ve described a good bit of myself in this message. I won’t claim to have distilled the essence of being a hacker in this message and probably not in any articles I write about it, either, but I do believe this is at least a good general idea. If you’ve not done so, you could bother RMS with this question. That’s all.
This is neat. Also, I found it amusing that this Python program is mostly Fortran, if I understand what NumPy is.
This was a good intro to NumPy:
https://lobste.rs/s/e9m52i/visual_intro_numpy_data_representation
Ryan Levick, Principal Cloud Developer Advocate
Sorry, but I know Ryan personally, and he’s a kick-ass programmer. He’s also a kick-ass community organiser and is super good at deciding his path through the programming world.
Ad hominem attacks of that kind are really not what I want to see in this community.
Your rant, full of exaggerations, is unwarranted and doesn’t help to have a reasoned discussion about this.
I’m not an Ada expert, but as far as I can tell, the “why” mentioned in the post applies also to Ada. Other languages rely on a garbage collector or reference counting to guarantee memory safety. AFAIK Ada is just looking at adopting the ownership model: https://blog.adacore.com/using-pointers-in-spark
Ugly ad hominem aside, they do have a point, though. Ada is not mentioned once in this article (and neither is D.) When an article on “Why X is the best of subset A” simply fails to mention multiple major members of subset A, one starts to get the sense that the author never was interested in an objective analysis and comparison of the available options, but rather had decided ahead of time what they wanted to support, and just looked for reasons to do so.
They’ve said:
Languages which achieve memory safety through garbage collection are not ideal choices for systems programming because their runtimes can lead to unpredictable performance and unnecessary overhead.
This doesn’t require listing all of the languages that are disqualified this way.
D’s -betterC is better than C, but doesn’t offer the safety guarantees they’re looking for.
This article is not comparative writing, nor does it document a whole process. So there’s absolutely no need to consider Ada in this post.
Also, I’m pretty sure that Microsoft had ample exposure to Ada before, given the amount of research they pour into programming languages, particularly in the UK.
Ada has downsides as well. For example, there is no (widely used) package manager.
Ultimately, it depends on the use case. If you program a real-time safety-critical micro controller then Ada wins over Rust easily. If you develop cloud back end stuff then Rust wins over Ada.
Over time a lot of things can be fixed. Rust is in a positive growth spiral and it will improve. For Ada, I’m not sure if the community is growing or shrinking.
Over time a lot of things can be fixed. Rust is in a positive growth spiral and it will improve. For Ada, I’m not sure if the community is growing or shrinking.
There are quite a few people in the Ada community who see Rust as a chance to break up the status quo and put Ada back on the table. From my point of view, I’m very happy about that; user choice is a benefit at a grand scale.
but I don’t want to create a comprehensive list.
That’s a shame – can you at least toss out a “these are the top n ‘incorrect things’ I see in this document” list for those newbies/experienced users to digest?
The main issue that comes to mind is recommending C libraries when unnecessary, such as Ncurses.
You can’t write Common Lisp expecting particular performance characteristics, although you can inspect them; I recommend against heavily optimizing code for a specific implementation, unless you’re specializing the code for several. Also, in writing programs that interface with other systems, you need to be aware of portability concerns such as character sets.
However, I stress that recommending C libraries is the main issue that currently comes to mind. There are other libraries I see as superfluous, but that’s more arbitrary than this note.
This is mostly with regard to libraries and development practices and other such things, but I don’t want to create a comprehensive list.
Honestly, when I was getting into CL, I found the abundance of seemingly equivalent libraries annoying – having someone say “use these” is a first step towards something that makes more sense than just a list of names, none of which mean anything – and all of this even if the person was giving bad advice!
Discord is pretty famous for abusing user trust and being anti-software freedom. They will ban your account if they find out that you have even been discussing using 3rd-party clients.
I discussed this with him, but he wasn’t keen on writing it under the AGPLv3, so that went nowhere.
I don’t get this remark. He didn’t want to write it under AGPLv3, at your request, so he didn’t write it?
I offered to collaborate with him, but he wasn’t interested in collaboration if the result was licensed under the AGPLv3.
Got it. What is the perceived value in LSP for Common Lisp, anyway? As far as I can tell, LSP doesn’t have a standard way, or any way(?), to provide remote-REPL-like services, so only code introspection seems likely… but doesn’t SWANK give you all that, AND remote REPL, remote debug, etc.?
I can see the value in learning one set of keybindings for all languages, fwiw, but I wouldn’t adopt LSP over geiser for my Racket / Scheme stuff…
doesn’t SWANK give you all that
Yes and no. Yes, there is some overlap in functionality between LSP and Swank. For CL, SLIME is still ahead in the features it provides; I wouldn’t switch to using an LSP client for CL in a million years.
No in the sense that an LSP server would allow you to leverage the work of hundreds of developers to provide support for VS Code, Atom, and other editors more popular with the webdev crowd, while SWANK is primarily intended to be used in conjunction with SLIME under Emacs (as superior as Emacs may be).
Also, while LSP doesn’t have a standard for all the features of SLIME, it does have a standard, and it is being extended with new features every day. The lack of a standard is not a theoretical problem: nREPL was developed because, when Clojure used Swank as its back end, the protocol would change and the client would break.
And while an sexpr-based format would be a better choice for serialization than JSON (I would have preferred they use Transit), the Swank protocol is less than ideal. For example, the way the cursor position is specified for eldoc purposes is to insert ==> x <== around the argument the cursor is at.
Also, the author of the article uses neither Emacs nor Swank. They have implemented part of the nREPL protocol and use that for their day-to-day work.
I’m reluctant to continue replying, since Lobsters requires me to enable JavaScript for this; note that I won’t be responding to any more replies.
I chose the AGPLv3 because it’s the strongest copyleft license available, and I also find it nice that many companies are afraid to touch it. I reject permissive licenses because I want my work to benefit the Free Software world and not enable proprietary software. There are other reasons, but I’m being brief.
You can enable email notifications to which you can reply and it’ll show up here.
Mailing list mode can be enabled per-user to receive all new stories (including their plain-text content as fetched and extracted by Diffbot) and user comments as e-mails, mirroring discussion threads offline. This makes it easy and efficient to read new stories as well as keep track of new comments on old threads or stories, just like technical mailing lists or Usenet of yore. Each user is assigned a private mailing list address at this domain which allows them to reply to stories or comments directly in their e-mail client. These e-mails are then converted and submitted to the website as comments, just as if the comment was posted through a web browser.
To claim to know a standardized language without knowing the standard is foolhardy and wrong.
But he says “It’s a useful tool to have, but not the only one you’ll need.”… So he is saying you should read it, at some point, right?
I always read the standard document
This is interesting. I generally will at some point hit the standard document, but usually only after consuming 2 or 3 introductions (I guess you may do so as well, since you said in either order). But I often find standards documents bloated and boring. Are there particular ones you like?
I know what you mean, but CLtL2 isn’t really that kind of “standards document” — it is pretty pleasant to read, has a sense of humor, and is well-organized.
So he is saying you should read it, at some point, right?
Without looking back, I believe he was recommending not reading it end-to-end, which is bad advice, also tying in to what I told @owen.
I generally will at some point hit the standard document, but usually only after consuming 2 or 3 introductions (I guess you may do so as well, since you said in either order).
The standard document is either the first or second text I read when learning a language.
But I often find standards documents bloated and boring. Are there particular ones you like?
The Common Lisp standard is pleasant, with its detailed descriptions of facilities such as FORMAT. APL is a good example of reading an introductory text first. I don’t recommend reading ISO 13751 (this is a gzip file, not a PDF) before having a loose understanding of APL. The standard is well-written and rather concise, but it gives such general descriptions of the primitives that they can be difficult to understand without knowing the behavior for one-dimensional and two-dimensional parameters; it’s generally best understood by walking through the descriptions with such parameters in mind.
I’m learning Ada 2012 and, while I started by reading the Ada Reference Manual, others advised me to read “Programming in Ada 2012” by John Barnes. Having largely made my way through that, I’m rereading the standard, intending to finish it this time, and it is more comprehensible now that I’m reading with knowledge of what’s to come; the earlier material is much more approachable now.
I found it queer, learning others don’t learn in this way. I never feel the need to ask questions about standard language behavior as others do, though, either.
Thanks!
I found it queer, learning others don’t learn in this way. I never feel the need to ask questions about standard language behavior as others do, though, either.
I agree, just often I don’t hit the standard right away. Probably because I mostly fiddle with things for months or years before doing a deep dive. Thanks for the tips!
I’m squarely in that group that simply avoids using Ncurses. I find Ncurses to be a baroque API, from what I know, and as with many UNIX APIs, it seems I so often learn of some new quality of it and can’t tell at first whether it’s a joke.
My advice is to simply use ECMA-48; every terminal emulator you’re likely to come across supports a subset, and it’s easy to use, being free from “color pairs” and other nonsense. The only issue is sending different sequences to terminals that don’t support the subset being used, or finding a common subset, but there is a point where this ceases to be reasonable and one adopts a “If your terminal doesn’t even support this, I don’t care if it works.” attitude.
Writing a library in most any language that sends ECMA-48 codes is simple, so you can work in whichever language you’re actually using instead of binding with a decrepit C library. It’s also important to stress that people only really use Ncurses and whatnot still because the graphical APIs of UNIX are far worse to cope with.
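As a rough illustration of how little is needed, here’s a minimal sketch of this approach in Python (the helper names are mine; the sequences themselves are standard ECMA-48):

CSI = "\x1b["   # ECMA-48 Control Sequence Introducer (7-bit form)

def sgr(*params):
    # SGR - Select Graphic Rendition: bold, colors, reset, and so on.
    return CSI + ";".join(str(p) for p in params) + "m"

def cup(row, col):
    # CUP - Cursor Position, 1-based per the standard.
    return f"{CSI}{row};{col}H"

# Bold red "error:" at the top-left corner, then reset attributes.
print(cup(1, 1) + sgr(1, 31) + "error:" + sgr(0) + " something broke")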
Now, to @Hales:
If one digs deep enough into the history of computing, one learns that what’s modern is distinctly worse than prior systems. There were easy graphical systems on computers decades ago, but UNIX is still stuck sending what it thinks are characters to what it thinks is a typewriter, and using X11 lets you pick your poison between working with it directly (Wayland is coming any year now, right?) or using a gargantuan library that abstracts over everything. It’s a mess, I think; don’t you agree?
Also, @Hales, your note on UTF-8 reminded me of something I found amusing from the Suckless mailing list, when I subscribed to it trying to get proper parsing of ISO 8613-6 extended color codes into st. Simply put, UTF-8 is trivially transparent in the same way color codes are transparent, in that there are many cases where invariants are violated and programs misbehave, commonly expressed as misrendering. There was an issue with the cols program, that shining example of the UNIX philosophy: it didn’t properly handle colors when arranging its output into columns. To handle this properly, the tool would need to be made aware of color codes and how to interpret them, as it would otherwise assume each would render as a single character, ruining the output. The solution, according to one person on the mailing list, was that colors are bloat and you don’t need them. Don’t you agree that’s amusing?
I agree with you. 100%. HOWEVER. Ncurses comes with pretty much every mainstream Unix, making it a very attractive target for applications. I think that Ncurses’ limitations are holding back innovation in TUI interfaces. If every Unix came with an adequate TUI widget library, that would encourage quality TUI widget-based systems.
It’s important to understand things in context. At the time it came out, X was very useful and its complexity was every bit justified. Each bit of bloat was entirely justifiable when added; it’s just in hindsight, when looking at the whole, that it appears bloated. Yes, Xorg is a mess. (Incidentally, Wayland is also a mess and I hope that it doesn’t come at all. Much prefer Arcan.) However, I think it’s a mistake to say that in the dawn of computing they understood simple, sensible interfaces and nowadays it’s all just pointless abstractions and bloat. Sure, Unix, BeOS, and C were all clean and well-made, while JavaScript, C++, and Windows are all currently a mess. So what? There’s still a significant amount of innovation and progress being made in the modern world of software, and to ignore it would be a mistake.
Yes. The entire concept of a TTY is a mess. I am going to work on replacing that. Other people are working on replacing that. (The solution, obviously, is to make the code that generates the text and the code that displays the text work on the same level of abstraction.) That doesn’t change the fact that we’re stuck with ttys for the time being; is it wrong to try to improve our experience with them until we can really leave them?
The stuff we’ve kept from the dawn of computing is basically all either ‘simple, sensible interfaces’, or ‘backwards compatible monstrosity’. Those are, after all, the two reasons to keep something - either because it’s useful as a building block, or because it’s part of something useful.
Have you looked at the TUI client API for Arcan? And the way we deal with colour specifically? If not, in short:
Do you allow the client to set arbitrary 24bit colours to the grid?
Does the TUI API work with text-based but not-fixed-width interfaces (e.g. emacs, mlterm)?
Thank you for posting, I hadn’t heard about arcan until today but have just read a chunk of your blog with interest :)
Colors: both arbitrary fg/bg (r8g8b8) and semantic labels to resolve (such as “give me the fg/bg pair for alert text”). Shaped text: yes (being reworked at the moment to account for server-side rendering), but for ligatures, ‘shaped’ is there as a per-window attribute for now, testing out per-line.
Thanks for the reply.
I think I would want to be able to use different fonts (or variations on a font) for different syntax highlighting groups in my editor. This looks quite nice in Emacs and in code listings in LaTeX. Perhaps you consider this to be an advanced use where the application should just handle its own frame buffer, though.
While I have your ear, what’s the latency like in the durden terminal and is minimising latency through the arcan system a priority?
In principle, multiple fonts (even per line) are possible, and that’s how emoji works now: one primary font for the normal glyphs, and when there is a miss on lookup, a secondary is used. There is an artificial limit that’ll loosen over time. Right now, the TUI client is actually handling its own framebuffer, and we are trying to move away from that, which can be seen in the last round of commits. The difficulty comes from shaped rendering of ligatures, where both sides need to agree on the fonts and transformations used; doing it wrong creates round-trips (a no-no over the network), as the mouse-selection coordinate translation needs to know where the cursor actually ended up.
Most of this work is toward latency reduction: removal of terminal protocols fixes synchronization, and moving rendering server-side allows direct-to-scanout-buffer racing-the-beam rasterization, or at least rendering entirely on-GPU for non-fullscreen cases.
When you say “both sides”, do you mean the client on e.g. a remote server and a “TUI text packing buffer” renderer on e.g. a laptop?
Sounds like you could just send the fonts (or just the sizes of glyphs and ligature rules) to the server for each font you intend to use and be done with no round trips. Then you just run the same version of harfbuzz or whatever on each side and you should get the same result. And obviously the server can cache the font information so you’re not sending it often (though I imagine just the sizes and ligatures could be encoded real small for most fonts).
Do you have any plan for the RTT between me pressing a key on my laptop, that key going through arcan, the network, remote arcan, and eventually into e.g. terminal vim, and then me getting the response back? I feel like mosh’s algorithm, where it makes local predictions about what keypresses will be shown, is a good idea.
Sounds exciting! I don’t know what you mean by “moving rendering server-side”, though. Is the server here the arcan server on my laptop? And moving the rendering into arcan means you can do the rendering more efficiently?
Is arcan expected to offer an advantage in performance in the windowed case compared to e.g. alacritty on X11? Or is the benefit more just that anything that uses TUI will be GPU accelerated transparently whereas that’s more of a pain in X11?
Right now (=master) I am tracking the fonts being sent to the client on the server side, so both sides can calculate kerning options and width, figure out sequence to font-glyph id etc. The downsides are two: 1. the increased wind-up bandwidth requirement when you use the arcan-net proxy for network transparency, 2. the client dependency on freetype/harfbuzz.
My first plan for the RTT is type-ahead (local echo in ye olde terminal speak), implemented on the WM level (anchored to the last known cursor position, etc.) so that it can be enabled for other uses as well, such as input-binning/padding for known-networked windows where side channel analysis (1 key, 1 packet kind of a deal) is a risk.
Both performance and memory gains. Since the actual drawing is deferred to the composition stage, windows that are partially occluded or clipped against the screen only have their visible areas actually processed, while alacritty has to render into an offscreen buffer (that is double buffered) that then may get composed. So whereas alacritty has to pay for (glyph atlas texture, vertex buffer, front buffer, back buffer) on a per-pixel basis at every stage, the cost here will only be the shared atlas for all clients (GPU memory + cache benefits); the rest would be ~12 bytes per cell plus the vertex buffer.
It has been here for a while. GNOME and KDE support it natively, a few popular distros ship with Wayland enabled by default. Firefox is a native Wayland app. What makes you think it’s “any year now”?
:D
Colours are not the only invisible control codes that I’d expect cols to have to handle. Alas I can’t see a “simple” solution to this. You pretty much have three options:
Of the crappy options I can see: #1 does seem the most like something suckless devs would like.
Suckless makes some great stuff, but some of their projects are a bit too minimal for me. Take for example their terminal emulator st:
I love my scrollwheel, whether it’s a real one or two-fingers on my cheap little laptop’s touchpad. That and being able to use Shift+PageUp/Down. I guess everyone draws the line somewhere differently, and I have more things about my current term (urxvt) that I could moan about.
I don’t have any raw X11 experience. I’ve primarily used SDL, which yes indeed abstracts that away for me.
I’m not completely convinced that wayland is going to be the answer: from everything I read it seems to be solving some problems but creating entirely new ones.
From a user perspective however: Xorg is wonderful. It just works. You have lots of choice for DE, WM, compositor (with and without effects), etc. The open source graphics drivers all seem to have a TearFree option, which seems to work really well. I’d love to see latency tested & trimmed, but apart from that the idea of having to change a major piece of software I use scares me. I don’t want to give up my nice stack for something that people tell me is more better (or more “modern”).
You forgot one:
Interpreting ECMA-48 for this isn’t that bad; I have Lua code that does just that [1]. And a half-assed approach would be to treat all of C0 (0-31, 127) [3] as 0-width, all of C1 (128-159) as 0-width, with special handling for CSI (155) and ESC (27). For ESC, just suck up the next character (except for ‘[’), and for CSI (155, or ESC followed by a ‘[’) just keep sucking up characters until a character from 64 (@) to 126 (~) is read. That will catch most of ECMA-48 (at least, the parts that are used most often).
[1] It uses LPEG. I also have a version that’s specific to UTF-8 [2] but this one is a bit easier to understand in my opinion.
[2] https://github.com/spc476/LPeg-Parsers/blob/master/utf8/control.lua
[3] 127 isn’t technically in C0 or C1, but I just lump it in with C0.
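For readers without Lua/LPEG at hand, here is an illustrative Python transcription of the half-assed approach described above (my own sketch, not spc476’s library):

def display_width(s):
    # Count printable cells: C0 (0-31, 127) and C1 (128-159) are
    # zero-width; ESC swallows one following character, except that
    # ESC [ (like the single-byte CSI, 155) swallows parameter bytes
    # until a final byte in the range 64 '@' .. 126 '~'.
    width, i = 0, 0
    while i < len(s):
        c = ord(s[i])
        if c == 0x1B:                              # ESC
            i += 1
            if i < len(s) and s[i] == '[':         # ESC [ == CSI
                i += 1
                while i < len(s) and not (0x40 <= ord(s[i]) <= 0x7E):
                    i += 1
            i += 1                                  # swallow the final byte
        elif c == 0x9B:                            # single-byte CSI
            i += 1
            while i < len(s) and not (0x40 <= ord(s[i]) <= 0x7E):
                i += 1
            i += 1
        elif c < 0x20 or c == 0x7F or 0x80 <= c <= 0x9F:
            i += 1                                  # other C0/C1: zero width
        else:
            width += 1
            i += 1
    return width

print(display_width("\x1b[1;31mred\x1b[0m"))        # -> 3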
Flying on a modern airliner is also distinctly worse than passenger jet travel in the 1950’s. But I can still afford to go anywhere in the world in a day.
Worse in experience? I am almost certainly safer flying on a modern passenger jet now than in the 1950s.
https://i.huffpost.com/gen/1840056/thumbs/o-56021695-570.jpg
Enough said. :-P
(I am aware this is not a conclusive statement.)
Heh, you can still get that (actually, much better) on international business class. Hmm, I’m curious if the cost is comparable between 1950’s tickets like those in your picture (inflation adjusted) and international business class today…
Hmm, depends what you mean by consistency.
Moderation of human beings is, like any other social endeavor, fundamentally a subjective act. So if you mean consistency as a programmatic ruleset, a flowchart for decision-making, well, that’s misguided. Bad faith actors and trolls will always win that arms race, so the only winning move there is not to play.
Instead, platforms must embrace that invariant subjectivity, and moderate not by a flowchart, but by reacting to each situation individually, according to a consistent set of ethical principles, ideally rooted in the reduction of suffering. If that’s the kind of consistency you mean, I’m on board.
This is a motte and bailey. It balances precariously between two meanings, one of which is trivial and true, the other of which is earth-shattering but unproven.
One of the interpretations is obviously true, to the point where it really isn’t worth bringing up: because both of them are incorporated in the United States, they are under the jurisdiction of that country’s laws. So are Lobsters and prgmr.com, according to whois. Most websites are under the control of some government or other.
The other interpretation, which would be very important if it could be proven, is that they are a CIA psy-op or something: this implies that large swaths of recent public history are a lie, to the point where their supporters and their critics are both wrong about the nature of what they’re arguing about. Most websites are not being secretly run by the government.
Prostitution doesn’t harm anybody if done right (i.e. no coercion & regular checks for STIs).
I want to write a machine code development tool.
I’ll write it in Common Lisp.
I need to be able to exit the program (nonportable) and send terminal control codes.
I spend a day researching all of the different exiting S-expressions before collecting them into a library.
I don’t like Ncurses or binding with C at all, so I’ll write my own terminal library.
Using this terminal library isn’t all that pleasant, so I’ll write an abstract terminal library on top of that.
That underlying terminal library doesn’t depend on ASCII, but does detect it and optimize for it; I could turn character-set detection into its own library.
This tool should be customizable, but I don’t care for arbitrary customization languages, so I’ll make it customizable in a machine code.
The target, CHIP-8, is entirely unsuited to this, so I’ll create a Meta-CHIP-8.
My Meta-CHIP-8 virtual machine needs to be able to access the primitive routines of the system in a well-designed fashion, so I must spend time designing each one where all behavior makes enough sense.
I don’t have a Meta-CHIP-8 development environment, so I’ll just use a hex editor.
This Common Lisp program, with its own machine code customization language, really isn’t that pleasant to read through, even though it’s efficient and works, sans a few minor flaws in some more advanced functionality I’ve simply yet to correct. It also consumes megabytes of memory, since it runs in a Common Lisp environment. I could rewrite it in chunks, again, and start removing Meta-CHIP-8, but I’m so exhausted by it at this point.
I’ll write a new version in Ada, which will use far less memory.
I’ll start learning Ada.
Now that I’ve largely learned Ada, I really should start implementing some of my Common Lisp libraries in Ada so I can enjoy some of the same abstractions.
That’s roughly where I’m at, currently. Some of this isn’t strictly hierarchical, but you get the idea. Something pleasant to do is occasionally take a break and work on something else, such as how I’ll be working on some server software in Common Lisp and Ada, to compare the two. However, I need my own networking library in both, since I’m dissatisfied with what’s available, and I dislike the Common Lisp multi-threading options, so I’ll need to write my own there. You understand the general trend.
That’s great. The CL and Ada thing especially. I once suggested on Schneier’s blog that mixing the most powerful and most constrained languages together might make for an interesting experience. Maybe mock up Ada in CL with an extraction-to-Ada tool. Then you get CL’s rapid prototyping and metaprogramming with Ada’s safety checks and compiler performance. I never tried it since they were both pretty complicated. The ZL guy is the closest, doing C/C++ in Scheme.
This multiple levels of escaping is nonsense, in my opinion, although this is nothing against the author.
I’m disappointed that a better option, my preferred option, wasn’t mentioned: the beginning and ending character can be doubled to stand for a single instance. For example, in APL, '''' evaluates to a string containing a single quote character. Likewise, in Ada, """" is a string containing a single double quote.
Phrased differently (and this is my preferred way to consider it), the simple idea is that a string containing the string character itself can be represented by two strings juxtaposed, with the joining point becoming the string character in the new, combined string.
It’s disappointing Common Lisp uses escaping instead of this much nicer way.
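For concreteness, here’s a small sketch (Python, names mine) of how simple a scanner for such doubled-quote literals can be:

def scan_string(src, quote='"'):
    # src starts at the opening quote; returns (value, chars consumed).
    assert src[0] == quote
    out, i = [], 1
    while i < len(src):
        if src[i] == quote:
            if i + 1 < len(src) and src[i + 1] == quote:
                out.append(quote)            # doubled quote: one literal quote
                i += 2
            else:
                return "".join(out), i + 1   # lone quote: end of string
        else:
            out.append(src[i])
            i += 1
    raise SyntaxError("unterminated string literal")

print(scan_string('"say ""hi"""'))           # -> ('say "hi"', 12)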
I too prefer doubling the quote character over backslash + quote, since I personally find it more intuitive and aesthetic.
However, this is still escaping, at least in the way I used the term in the article—the quote character does not represent itself but instead is a dispatch character. It will still suffer some downsides of escaping, e.g. copying the text between the start and end quote characters will not put the actual string in your clipboard.
The author appears to be aware of this method of escaping quotes:
I first saw this kind of escaping in Plan 9’s rc(1):
I dislike how GitHub is being used as a blog by so many; I dislike GitHub entirely, however, so that’s just one of many points I could make.
Use Tor. It’s telling that the author never mentions Tor; I’m inclined to believe the author isn’t qualified to write about this topic.
Tor requires more discipline to get what you want out of it, and many exit nodes are on block lists, making day to day browsing kind of troublesome.
VPN companies, on the other hand, often sponsor popular YouTube channels and make other large ad buys, giving them reach far beyond tech audiences who might understand better how to work around some of the Tor problems. In other words, targeting VPN providers provides far greater bang for your proverbial (and literal) buck.
The amount of discipline required to “get what you want” out of Tor is the same as the discipline required to get the same thing from any other VPN provider: anonymity in the face of a determined adversary. It’s just that Tor’s documentation and default configuration are designed for such an adversary, while most VPNs are designed for an apathetic adversary that’s willing to store your IP, but not willing to deal with the hassle and potential false positives of fingerprinting.
If “what you want” == “just hide the dang home IP address”, it doesn’t require much. Just start it and set it as the proxy in Firefox. And, yes, it requires a bit more patience for the captchas and crap, but it’s really not that bad.
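(Concretely, “set it as the proxy” means pointing the client at Tor’s local SOCKS listener; a quick Python sketch, assuming a Tor daemon running on its default port 9050 and the requests[socks] extra installed:)

import requests

# socks5h (rather than socks5) makes DNS resolution happen inside Tor
# too, so lookups don't leak to the local resolver.
proxies = {
    "http":  "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

r = requests.get("https://check.torproject.org/", proxies=proxies)
print(r.status_code)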
I don’t believe “hide my IP address” is what most people are told they want out of a VPN.
Well, people are told very vague “privacy” and “encrypt your internet” stuff but realistically, as a normal person (i.e. “Not Snowden”) you want two things
As some advice to the author, I originally mistook this for a submission of a much older article, since I didn’t look at the submission information; my point is that perhaps drastically different imagery would’ve prevented this, as I believe the article I mention used at least one of the same images. This is tangential and an arguable point, however.
Anyway, the article mentions UNIX and is tagged with that, but that doesn’t mean ASCII is actually related to UNIX.
This article would be improved by mentioning ECMA-48 by name. Anyway, I was disappointed, but not surprised, to see the UNIX method portrayed as the only solution. Here follows an excerpt from “The UNIX-HATERS Handbook”:
Now, returning to your article:
Yes, this complicates advanced key-chords; a good test is seeing if Emacs differentiates; if not, then it’s unlikely it can reasonably be done.
Yes, I had to explain this to one fellow while teaching him the design of a terminal control library I’ve written. This is another disadvantage of using Control.
Yes, that’s the last main issue with it. It’s noteworthy that the Meta or Alt key avoids this issue; you can configure some terminals to set the eighth bit or to prefix with the Escape character, but those that can’t be configured use the latter convention. This is one of many issues with comprehensively parsing more advanced terminal input, but it does have the nice property of lacking the special cases the Control key has. Mentioning Meta or Alt would perhaps have been a good idea.
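To make those conventions concrete, here’s a tiny sketch (Python, helper names mine) of the classic encodings: Ctrl masks the character down to its low five bits, which is exactly why Ctrl+I and Tab are indistinguishable, and Meta/Alt either sets the eighth bit or prefixes ESC:

def ctrl(ch):
    # The terminal sends the character with all but the low 5 bits cleared.
    return chr(ord(ch.upper()) & 0x1F)

def meta_esc(ch):
    return "\x1b" + ch            # the common ESC-prefix convention

def meta_8bit(ch):
    return chr(ord(ch) | 0x80)    # the older eighth-bit convention

assert ctrl('I') == '\t'          # Ctrl+I is Tab on the wire
assert ctrl('M') == '\r'          # Ctrl+M is carriage return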
I’m not sure which older article you mean? I just wrote this because it’s a common question/source of confusion, and frustration with how shit asciitable.com is (still first hit on Google :-/).
I linked to https://en.wikipedia.org/wiki/ANSI_escape_code for now.
The page isn’t intended to give a full and comprehensive overview of all the history; it explicitly mentions that “many aspects have been omitted”. My only goal was to provide exactly enough information for people to understand why CTRL+I sends a tab character, and why they can’t remap it in Vim etc. Nothing more.
This is also why it just talks about Unix. Unix still exists and is what many people use. VMS and ITS? Not so much…
It’s mentioned briefly (“This is also how the Alt key works: Alt+a is <Esc>a.”). To be honest, I was tempted to not even include the entire section about escape sequences at all, since it doesn’t directly pertain to the question I wanted to answer. I mainly included it to make sure people wouldn’t be confused and think F1 or the arrow keys are control codes.
I wonder what the legality of that is.
Anyway, this makes me think about a practice I’m soon to begin, which is similar to, but not quite, these parallel implementations. That is, having multiple implementations with different strengths, written in different languages, so that one is not intended to supplant the other, but more or less to compare the two languages on the basic problem, likely with certain additions attuned to the strengths of each.
I don’t know about the legality but Carmack himself was pretty happy the article was saved: https://twitter.com/ID_AA_Carmack/status/1156639168002428931
The frustration of the author in their Lobste.rs bio is palpable:
Marketing yourself is difficult, but having disdain for your audience because they will not recognize your obvious intelligence is a trap. Congrats on discovering that a clever project name will get you some self-gratifying attention, but for long-term success have some humility and respect for your fellow man.
People may be expecting to see some moderator action here…
I honestly don’t know what to make of the article. I don’t find the jokes in it funny, nor do I find them appropriate for this forum (“homo” is not simply a silly word; it causes real harm). I don’t encourage personal attacks in all but the most exceptional circumstances, but I did find that your comment provided helpful context, and you were pretty restrained. Ultimately, I think I’m glad that you commented as you did. For the sake of civility, I want to encourage you not to get drawn into back-and-forth about this; I think your top-level comment stands well on its own.
Isn’t this referring to homoiconicity? The author states his language has this property. “Homo” has uses beyond the offensive type you’re referring to; it’s a Greek prefix: homogeneous, homophone, homoiconicity.
I’m not trying to say it’s inoffensive to everyone - I just don’t draw the conclusion that the author is using it in this way.
I felt the entire joke of the first paragraph was that it’s using a bunch of funny-sounding words that are often considered inappropriate, and claiming they’re being used solely for their technical meanings.
How about engaging with the content rather than bringing in the author’s off-topic profile text? Congrats on fulfilling their expectations.
Thanks, but I’m going to let my comments stand. The tone and purpose of the article I think is best explained by his bio.
This article is not a serious attempt at coding. If you think I’m off-topic, fine. But software is more than just code. I’ll take clarity and respect over cleverness and contempt every time.
One (albeit vague) critique of your program, and you bemoan them for “looking down” on APL programs. In a non-sequitur you condemn a vague intersection of JS and Rust programmers and users of popular web services as “idiot hipsters” that “infest” Lobsters. Who is looking down on whom?
If you’re not willing to peacefully cohabit a space that doesn’t homogeneously subscribe to the identity politics of the web-chaste APL hacker, why are you on this website? It’s entitled and insulting to so consistently lash out towards the community and at the same time expect others to provide feedback on your work.
I’m not sure owning copyright entitles you to require a publisher to remove your comments.
I’d like you to stay, personally, but I also think your communication would be improved by not implicitly or explicitly insulting your readership. You may think WWW is “nonsense” but it’s very much a fringe position.
Your angry and contemptuous comments (of which I’ve seen a few now) aren’t exactly a positive contribution to Lobste.rs either though… Perhaps a look in the mirror is in order.