Alls I can say is… Go easy on Microsoft. They are a small indie dev….
That made me laugh.
But at the same time there’s a grain of truth to that: the company may be huge, and not fixing that kind of bug may be utterly unacceptable at their level… but the actual money and staff they devote to their terminal may be much smaller, and those people are humans.
I realised this the hard way when I was talking trash about the Windows Terminal on Reddit (let’s summarise it as “Casey Refterm fast, Windows Terminal slow, lol hurr hurr”). Turns out I had a couple of facts wrong, and a Windows Terminal dev (or at least a major contributor) replied there and kindly (yes, kindly) reminded me that that kind of trash talk hurts, and corrected my errors. And then answered all my questions.
The devs care. Their boss might not.
Or, which I’ve also seen, a small number of their customers care deeply about a problem, but the vast majority aren’t even aware of it, much less care.
(Analogous to some of the problems with democracy as implemented through national voting; who knew that Bertrand Russell was writing about product development pathologies in the early 1900s :) ).
This is why open source tools are so important. It’s much easier to get oxygen to issues like these if you’re able to fix them yourself.
I remember switching from the .NET ecosystem to Ruby for work in 2011. It was a glorious breath of fresh air to go from handling bugs by “report through Microsoft Connect and pray they’ll fix it in a few years in the next release” to “debug it myself and submit a patch”.
I think Windows cares more about C++, which has a string module that’s not as bad.
But this wasn’t a problem with C strings at all. You could reproduce it in any language: the problem was that the Windows terminal didn’t interpret the text as UTF-8.
The printing error was just one bit of the article; most of it was about the terrible strcpy etc. functions in C.
I was just about to say, this has never been a problem with C on Linux. You paste the characters in, those bytes get sent to the TTY, and at least on Linux it knows how to render those bytes according to UTF-8.
So when I saw that Windows required all these weird workarounds, I knew it had to be a terminal interpretation shenanigan.
It was very much a problem with C on Linux in the 2000s, before UTF-8 locales were endemic and all terminals supported them. :-P See anordal’s comment about signed chars for an example.
I consider the complaint from the OP to be user error: if you use the Unicode print function, it works just fine and has for over 20 years.
(Well, kinda user error - the C lib makes doing the right thing unnecessarily difficult, like most string things in C.)
Are you talking about Windows Terminal (the app) or console host (the system component)? In my experience Windows Terminal is pretty nice.
The app.
My poggers setup
Everything is Dracula themed, using i3, i3blocks, etc.
Beer included 🍺️🍺️
This post converted me from right-wing to left-wing very quickly.
Just finished (and started today) adding casting support for primitive types to my compiler: parse -> dep tree -> typecheck and code gen -> emit. Whole thang, was fun.
This trend saddens me. As a D user, you can have my left-hand types when you take them from my cold dead hands.
From my cold dead hands 😎 pew pew.
Also a D enjoyer here; not a Pascal-trend enjoyer, although I enjoy messing with Pascal now and then. I much prefer aligned types (textually), and in general I am not an inference fan.
Doesn’t work
Hangs
Wonder if skiqqy’s machine is down mmmmh
Personally I’ve always found types on the right harder to read. Easier to parse for the computer maybe, but not necessarily for the human. Maybe it’s that there’s something between the variable name and its value: (…) x = 1 seems easier to understand than x (…) = 1.
If you have something like the declaration below, it’s hard to even find the variable name.
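(An invented example on my part, borrowing the HashMap type mentioned a few comments down:)
use std::collections::HashMap;
// The name `lookup` is buried between `let` and a wall of type.
let lookup: HashMap<String, (String, Vec<String>)> = HashMap::new();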
That’s why languages have added type inference, so you basically never write complex types for variables.
Rust also supports partial inference when you need to hint some of the type, but not all, e.g.
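For instance (my sketch, not the original example; assume line is a &str):
// Hint only the container type; the element type is still inferred.
let words: Vec<_> = line.split(' ').collect();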
It’s not even an addition, it’s just taking advantage of what compilers were already doing from the start. Say we have this declaration:
var x : <type> = <expr>
The compiler needs to make sure the type of <expr> matches <type>. What do they do most of the time? They infer the damn type first, and then they compare it. Languages that support omitting the type merely skip the second part, and the type of x becomes whatever was inferred first.
Well, for local inference at least. Inferring the type of functions is a bit more involved, and does count as an addition when it’s implemented.
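A quick sketch of that “infer, then compare” flow (my example; compute() is a hypothetical stand-in for any expression):
fn compute() -> i64 { 42 }
// Annotated: infer the type of `compute()`, then check it against i64.
let x: i64 = compute();
// Unannotated: same inference, minus the check; y just becomes i64.
let y = compute();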
That’s for C++ auto. There are various levels of inference, even within functions. For example:
let mut x = Vec::new();
x.push("hello");
The first line does not know the element type, but with Hindley–Milner type inference the type on the first line can be deduced from the second line.
I have never declared such a monstrosity, and if I had to then it’d be auto’d. I could argue against the clarity, or rather the lack thereof, that results in clutching to inference.
I’d argue that C++ is good at adding noise, too. HashMap<String, (String, Vec<String>)> is worse than it needs to be, but not too monstrous. I think in Haskell it would look something like Map String (String, [String])?
But how could you possibly know what String it is if you don’t have a namespace telling you? /s
Like the OP, I like having my variable names line up. That promotes readability.
Haskell does this (lets you align your names in the same column) without doing the thing you don’t like, putting a large type expression between the name and the =. Instead, you put the declaration on a separate line from the definition. Types are still on the right.
IMO there are several interlocking reasons here. Type inference is the killer feature, but type inference is also usually desired and implemented by people who have some background in ML/Haskell, so they use the syntax they are familiar with. I suspect they are also a bit more willing to break conventions to get things done.
Types on the right are also easier to parse, especially when the type is complex. Doing int *foo; is technically ambiguous but still pretty easy to parse consistently, but when you have more complicated types like Map<Warehouse, List<OrderItem>> items then getting the generic parsing right is More Fiddly. Knowing that your parsing rule starts with something simple like an identifier, or even better a keyword like var or let, means there’s a lot less room in your syntax for things to randomly explode. Java didn’t start out with these sorts of complicated types in the early 90s, so it wasn’t much of a concern; they just went with what looked like C. But by the early 2000s C++ was more pervasive than in the 1990s, and people wanted some of its features without its well-known parsing headaches. So D’s templates didn’t use anything like C++’s template syntax, for example, while C# decided “looking mainstream” was important enough to go through the extra effort. (That bit is my guess; I’d love to know more details.)
We’re slowly (finally) evolving away from “looks like C” being a requirement for a mainstream language, so when Go had an opportunity to both simplify things and have complex nested types, they chose a type syntax that was much simpler and more consistent than C’s. Scala and Rust did the same and became reasonably mainstream, and now others have the courage to pick up the trend because it won’t immediately label their language as Some Weird Thing.
TL;DR: In addition to what the article states, there are some minor technical reasons why types on the right are better, and the people who are writing these new languages are familiar with them and more willing to break the mold. I could go on, but I probably shouldn’t; suffice to say that there are similar reasons why Rust’s .await is a postfix operator.
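(An aside from me, not from the thread: Rust runs into exactly this “More Fiddly” ambiguity in expression position, which is why its turbofish syntax exists.)
// In an expression, `Vec<i32>::new()` would start to parse as chained
// comparisons (`Vec < i32 > ...`), so generic arguments here need `::<>`:
let v = Vec::<i32>::new();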
I can agree with the variable declaration parsing simplification as long as you begin with var, but honestly then you could just have var <type> <name>; it’s the initial entry token that simplifies it. However, this wouldn’t look as nice as the Pascal syntax, which I have a sweet spot for, so you do bring at least one very objective point into the parsing domain with your argument, sir.
Many a lookahead may be averted with such a var technique.
Hmm, I never considered var <type> <name> and I’m not really sure why not. Maybe with type inference, where you can have <type> <name> or just <name>, it still risks ambiguity? Though you have the var token so you know you’re in a var decl already, so it should work fine. Weird.
I mean, it could possibly satisfy both camps: C devs just prepend var, and Pascal devs just swap the order and drop the :.
Love and use Erlang. Wonder if new tech, such as HVM, will make it obsolete.
HVM? Is that something new?
Doubt, completely different thing.
strange to see vim and minimal software advocacy on the same page
I literally cannot tell if you’re being facetious or not.
ed is the standard editor.
Out of genuine interest, what do you use? (I am not a vim user, but I’m aware of it.)
I’ve been doing this for ages! apg.7
man <(curl http://apgwoz.com/apg.7)
You write markdown-style links in man? Why? man(7) has UR and mdoc(7) has Lk.
TIL, I guess. :) I’ll have to update it when I get a chance! Thanks for the pointer!
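(For the curious, not from the thread: in man(7) the UR/UE macros wrap a URL as in the hypothetical snippet below, while in mdoc(7) a single .Lk https://apgwoz.com line does the same job.)
.UR https://apgwoz.com
apg’s homepage
.UE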
This is awesome. Also, as for the latter part of your “Conforming to” - well done brother.
Update:
This kinda makes me want to go back to messing around with gopher. Reading text in my terminal is so pleasing honestly.
Compiler work! More specifically, dependency generation for for-loops - which I think I finished round about 4 hours ago, had a beer and called it a day!
So many questions! A compiler for what language? What is the target architecture? Is it a hobby project or for work? What are the design goals of the language and compiler?
I was baited into this being a cooking recipe.
I wasn’t even aware Google had a proxy service, as I am not a Go user. However, that is a neat way for them to actually build a database of available packages, with caching of course.
Count on the Go team to have something which follows the UNIX philosophy instead of some clunky service. That’s really neat!
It’s neat to do full git clones of every git repository hundreds of times per hour across a ton of servers instead of a simple HTTP service? Really?
It’s the pinnacle of anti-efficiency: the most antisocial, “just throw more money at hardware”, anti-elegant solution I’ve seen.
Perhaps I misunderstood then from what I read further…
I meant I found the Google Proxy cool
It sounds like a cool idea, but badly implemented.
Perhaps we should find names for this pattern, so that we stop thinking of it as a good thing. I see the same pattern that you do: Eve convinces people to use their proxy, and gathers metadata about the network without active participation. We don’t see how Google is using this data, and it’s clear that they will not allow cleanly opting out of the proxy feature without disrupting the ecosystem.
You can personally opt out of the proxy system by setting the GOPROXY=direct environment variable.
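Concretely, in your shell (GOSUMDB=off is the separate, also documented, opt-out for the checksum database):
export GOPROXY=direct   # fetch modules straight from their origin repos
export GOSUMDB=off      # skip the checksum database as well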
The fact you need to do this, and I bet 99% of everyone doesn’t even realize it’s there or why it might be desirable, is a symptom of how bad this pattern is.
Features in other languages and ecosystems? Awesome! The user has control! Features in the Go toolchain and ecosystem? Clearly a symptom of how bad the problem is.
Most other languages and ecosystems aren’t backed by companies that literally exist to harvest peoples’ information.
“Features” is too vague to be useful in this conversation. If GCC has a feature to let you statically analyse your code, that can be good, even if it’s bad that Go has an enabled-by-default feature which proxies all traffic through Google servers. If Rust adds a feature to make your head explode if it detects a typo, that would also be a bad feature.
Fair enough. It’s fine to have a more nuanced conversation about the merits of the feature/architecture.
I think the proxy architecture is superior to centralized architectures with respect to being able to opt out of centralized collection. Consider that in Rust (and every other package manager I can think of off the top of my head) the default is to get all of your packages through their centralized system, and the way to opt out is to run a copy of the system yourself which is much more difficult than setting an environment variable (you almost certainly have to set one either way to tell it which server to connect to) and still doesn’t allow you to fully avoid it (you must download the package at least once).
You may have rational or irrational fears of Google compared to other faceless corporations, but that’s not an indictment of the feature or architecture. Additionally, the privacy policy for the proxy even beats crates.io in some ways (for example, crates.io will retain logs for 1 year versus 30 days).
Another exception would be deno: https://deno.land/manual@v1.29.2/basics/modules#remote-import
More evidence that the issue people are upset about here is psychological and has almost nothing to do with the facts at hand.
It should be opt-in, not opt-out.
From Google’s perspective it’s reasonable to use opt-out here; otherwise nobody would have configured GOPRIVATE, GONOPROXY, or GONOSUMDB, making the sumdb and package index pretty pointless. However, from a user perspective I feel that opt-in would have been the friendlier option.
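(Those variables take comma-separated glob patterns; a hypothetical example, with the hosts standing in for your own:)
export GOPRIVATE=*.corp.example.com,github.com/mycorp/*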
Not everything is that black and white. While I don’t think Google would pass on using any metadata they can get their hands on, there are also benefits for the community. They list some in “How Go Mitigates Supply Chain Attacks”.
Having witnessed several drama stories in the Node and Python package ecosystem, I think I prefer Google’s approach for now.
Maybe packages were a mistake. Maybe there are better ways to share software.
Sounds compelling. Do you have any suggestions?
I don’t have a complete concrete proposal. I am currently hacking on a tacit expression-oriented language which is amenable to content-addressing and module-free code reuse; you could check out Unison if you want to try something which exists today and is similar to my vision.
I’ve tried to seriously discuss this before, e.g. at this question, but it seems that it is beyond the Overton window.
How is this situation any worse than a registry based system like npm or cargo?
$ man <(curl -s https://skiqqy.xyz/skiqqy.1)
/dev/fd/63
/dev/fd/63
fgets: Illegal seek
Error reading man page /dev/fd/63
No manual entry for /dev/fd/63
Man’s power is probably out, try again. ZA woes…
Free Pascal, or rather, finishing doing so.
Also, more meta-programming and the art of dating.
Free Pascal is my objective too. I know enough Python and C to get things done (personal projects). So far, Free Pascal seems incredibly clear and intuitive. I like the way the blocks are laid out and the concepts seem very familiar. I’m really enjoying it.
Given that it is available for several major platforms, creates small binaries, and allows for quick GUI creation if you need that, I’m really enthusiastic about shifting to it for most projects. I’m sure that when I start actually making use of it, I’ll find the odd stumbling block, but so far, when I have researched the objectives that I want to accomplish, there’s a way…
I hope your year goes well!
Don’t take this as criticism, I really ask out of genuine curiosity, but: I learned Free Pascal in 2008/2009, in introduction-to-programming classes in college, and everyone thought it was prehistoric then. Why do you two want to learn it now, in the year of our lord two thousand and twenty-three?
Is there a niche that uses Pascal a lot? Has there been a renaissance I’m unaware of? Y’all just into vintage vibes?
Oh. So I’ve been creating Windows desktop utilities for my work (to automate processes). It’s the kind of thing that you can usually do with batch scripts (or PowerShell) and I’ve even thrown together a few with a GUI (using The Wizard’s Apprentice, https://wizapp.sourceforge.net/).
Using the Lazarus IDE, you can create more customized GUIs with more features very quickly and easily. As far as the capabilities of the language go, it can make system calls and manipulate text files, which covers the bases for me. It’s also a lot faster than a batch script.
There’s actually some nice work that is being done in Pascal. I use this app a lot: https://github.com/tobya/DocTo
Goes to show that any claims of “language X is dead” must be taken with a lot of grains of salt.
I actually liked Pascal, back when I learned it. It was certainly better than C as a teaching language.
Well worth reading. This is probably the clearest and most approachable introduction to X11 that I’ve read.
The coolest thing this article taught me (and I have to give it a fuller read next time) is how the Expose event works. It explains why you get that duplication of windows when you drag a window over an application which is running slowly, and therefore responding to regional update requests very slowly: you actually see a stale front buffer (is that the right word?) for some time.
That for me was groundbreaking knowledge :)
The one thing missing for me (skimming some of the later bits, I might have missed it) in this was discussion of the DAMAGE extension. This confused me when I read the docs (15 years ago, they might be better now) because there’s a conceptual jump: Damage is the opposite of expose, but in my mental model it was the same and I couldn’t see how it worked. Composite, render, and damage are typically all used together as a group and the article did discuss the first two.
So say I have Firefox fullscreen and I move a terminal over it. My example was expose in the case of X requesting Firefox to provide the region update for where the terminal window was.
Then damage is more like Firefox asking, hey, what’s in front of me?
No, damage is Firefox saying “someone typed in this text field, so this bit needs redrawing; I have updated the Picture, you need to redo the compositing”.
Oh, that makes sense, and now I can see why the terminology sounds confusing; it would seem to make sense the other way around: “the window above me damaged me, redraw that region” versus “hey, I have a region I want to update, expose this to X”.
Thanks for the clarification!
With a compositing display server, you don’t need expose notifications because the application draws an entire window and it’s up to the compositing manager to display the exposed parts (often by sending all of them as textures to the GPU and just drawing them on rectangles on top of each other). You do, however, need to tell the compositing manager when you have updated part of your own window, because otherwise it doesn’t know that it needs to redraw. You can do this by tracking updates to the layers in the window transparently, but sometimes it’s a lot more efficient to be told ‘the only part of this window that’s changed is this little tiny bit’. For example, consider a flashing cursor in a text box: only a handful of pixels change, so you can avoid a lot of network bandwidth for remote displays or a chunk of PCIe bus bandwidth locally by sending only a small update to a texture rather than the whole thing. That’s what the damage extension is for.
Thank you for explaining David :)
Yeah, another good example is screen sharing/recording applications: being told what changed instead of screenshotting the whole display X times a second and trying to find differences.
Working on my compiler’s dependency generator.
What is a dependency generator? Is it some sort of dependency resolver?
Anyone? Why is this never answered?
I’m really fond of this language so I could see the benefits in writing in D here, also not to mention the metaprogramming capabilities.