I was expecting this to knock the stock price down a bit but strangely that did not happen. Was it somehow already priced in?
Reminds me of an old saying: “Bloat” is any feature I don’t use right now.
It’s impossible to say something is “bloated” based only on its size. You don’t know why all of that code is there, and you don’t know if there are smaller ways to solve all the same problems. (Stated differently, if you do know there are smaller ways to solve all the same problems, propose them to the Linux people.) And it’s dishonest to cut things out of the problem in the name of cutting them out of the solution: If your project doesn’t solve all the same problems, it’s meaningless to compare it to something which does. It would be like saying a Nissan is bloated compared to a go-kart.
If you want the smallest binary possible, you’ll have to cut out a lot of abstractions (libc? dynamic linking? nobody needs that!), employ tricks like these, make use of programming styles that are unorthodox even for assembly programming, and use specialised packers and unpackers (like this one, this one, this one, …).
Of course, the project maintainers will call you nuts and not accept your patches.
Besides, the extra size (and worse performance, etc. etc.) doesn’t always happen because of adding features, but rather because of (I think unnecessary) abstraction layers piled on top of one another, mostly things that can be done at compile-time. Here’s an essay describing a thought experiment making it a little more clear.
I don’t think any of those tricks should be necessary for targeting a 1.5 meg base system. After all, 1.5 meg base systems were extremely common twenty-five years ago, produced by regular C compilers from regular C code by people who weren’t making any particular effort to trim them down.
You can do incredible things in 4k if you employ those tricks. But, 4k is the size of a completely empty file on unix!
I have done some audio programming, and am studying engineering, so I guess I have some knowledge about it. There are many who are better than me, though. I hope this isn’t too mathematical, but you need to have some grasp on differentiation, integration, complex numbers and linear algebra anyway. Here’s a ‘short’ overview of the basics:
First of all, you need to know what happens when an analog, continuous signal is converted to digital data and back. The A->D direction is called sampling. The amount of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons, and this needs to be taken into account.
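To make the sampling-rate and bit-depth part concrete, here’s a minimal sketch (the function name and the numbers are my own, purely illustrative) of what an ADC does: read the signal at discrete times, and snap each reading to the nearest level the bit depth can represent:

```python
import math

def quantize(x, bits):
    """Snap a sample in [-1, 1] to the nearest level an N-bit signed converter can represent."""
    levels = 2 ** (bits - 1)
    return max(-levels, min(levels - 1, round(x * levels))) / levels

# Sample a 440 Hz sine at 8000 Hz with 8-bit depth (illustrative values).
rate, freq, bits = 8000, 440.0, 8
samples = [quantize(math.sin(2 * math.pi * freq * n / rate), bits)
           for n in range(16)]
```

The error introduced by the rounding is the quantization noise: fewer bits, more noise.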
Secondly, analysing a signal in the time domain doesn’t yield much interesting information; it’s much more useful to analyse the frequencies in the signal instead.
Fourier’s theorem states that every signal can be represented as a sum of (co)sines. Getting the amplitude of a given frequency is done through the Fourier transform (F(omega) = integrate(lambda t: f(t) * e^-omega*j*t, 0, infinity)). It works a bit like the following: multiply f(t) by e^-omega*j*t and integrate; omega is the pulsation of the desired frequency, i.e. omega = 2pi*f, and j is the imaginary unit (j is used more often than i in engineering). (Note: the Fourier transform is also known as the Laplace transform, when substituting omega*j with s (or p, or z, they’re “implicitly” complex variables), and as the Z-transform, when dealing with discrete signals. It’s still basically the same, though, and I’ll be using the terms pretty much interchangeably. The Laplace transform is also used when analyzing linear differential equations, which is, under the hood, what we’re doing here anyway. If you really want to understand most/everything, you need to grok the Laplace transform first, and how it’s used to deal with differential equations.)
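The discrete version of that integral can be sketched in a few lines (this is a naive DFT I wrote for illustration, not any library routine): for each frequency bin, multiply the signal by e^-j*omega*t and sum.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: for each bin k, multiply the
    signal by e^(-j*2*pi*k*t/n) and sum (the discrete 'integrate')."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A cosine that fits exactly once in the window shows up in bins 1 and n-1.
n = 8
spectrum = dft([math.cos(2 * math.pi * t / n) for t in range(n)])
```

An FFT computes the same thing in O(n log n) instead of O(n^2); the naive version is just easier to relate to the formula above.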
Now, doing a Fourier transform (and an inverse afterwards) can be costly, so it’s better to use the information gained from a Fourier transform while writing code that modifies a signal (i.e. amplifies some frequencies while attenuating others, or adding a delay, etc.), and works only (or most of the time) in the time domain. Components like these are often called filters.
Filters are linear systems (they can be nonlinear as well, but that complicates things). They are best thought of as components that scale, add, or delay signals, combined like this. (A z^-1-box is a delay of one sample; the Z-transform of f(t-1) is equal to the Z-transform of f(t), divided by z.)
If the system is linear, such a diagram can be ‘transformed’ into a bunch of matrix multiplications (A, B, C and D are matrices):
state[t+1] = A*state[t] + B*input[t]
output[t]  = C*state[t] + D*input[t]

with state[t] a vector containing the state of the delays at time t.
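A minimal single-input/single-output sketch of those two equations (plain lists instead of real matrices; the helper name is my own):

```python
def step(A, B, C, D, state, u):
    """One tick of  state[t+1] = A*state[t] + B*u ;  y = C*state[t] + D*u."""
    y = sum(c * s for c, s in zip(C, state)) + D * u
    state = [sum(a * s for a, s in zip(row, state)) + b * u
             for row, b in zip(A, B)]
    return state, y

# A one-sample delay (the z^-1 box): the output is the previous input.
A, B, C, D = [[0.0]], [1.0], [1.0], 0.0
state, outputs = [0.0], []
for u in [1.0, 2.0, 3.0]:
    state, y = step(A, B, C, D, state, u)
    outputs.append(y)
```

Chaining bigger A, B, C, D matrices gives you any diagram of scalers, adders, and delay boxes.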
Analyzing them happens as follows:
Take the Z-transform of the input signal (Z{x(t)} = X(z)) and of the output signal (Z{y(t)} = Y(z)). The ratio of Y and X is a (rational) function in z: the transfer function H(z). The roots of its numerator are the zeros, and the roots of its denominator are the poles. However, if the poles are outside of the unit circle, the system is ‘unstable’: the output will grow exponentially (i.e. “explode”). If a pole is complex or negative, the output will oscillate a little (this corresponds to complex eigenvalues, and complex solutions to the characteristic equation of the linear differential equation).
What most often is done, though, is making filters using some given poles and zeros. Then you just need to perform the steps in reverse direction.
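For the first-order case, “in reverse” just means reading the difference equation off H(z): given one zero z0 and one pole p, H(z) = (1 - z0*z^-1)/(1 - p*z^-1), which corresponds to y[n] = x[n] - z0*x[n-1] + p*y[n-1]. A sketch (the function name is mine):

```python
def filter_from_pole_zero(zero, pole, x):
    """First-order filter built from one zero and one pole:
    H(z) = (1 - zero*z^-1) / (1 - pole*z^-1)
    =>    y[n] = x[n] - zero*x[n-1] + pole*y[n-1]"""
    y, x_prev, y_prev = [], 0.0, 0.0
    for sample in x:
        out = sample - zero * x_prev + pole * y_prev
        y.append(out)
        x_prev, y_prev = sample, out
    return y

# A pole inside the unit circle: the impulse response decays (stable).
impulse_response = filter_from_pole_zero(0.0, 0.5, [1.0, 0.0, 0.0, 0.0])
```

Moving the pole towards the unit circle makes the decay slower; putting it outside gives the exponential blow-up described above.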
Finally, codecs simply use that knowledge to throw away uninteresting stuff. (E.g. data is stored in the frequency domain, and very soft sines, or sines outside the audible range, are discarded. With images and video, it’s the same thing but in two dimensions.) I don’t know anything specific about them, though, so you should look up some stuff about them yourself.
Hopefully, this wasn’t too overwhelming :). I suggest reading Yehar’s DSP tutorial for the braindead to get some more information (but it doesn’t become too technical), and you can use the Audio EQ Cookbook if you want to implement some filters. [This is a personal mirror, as the original seems to be down - 509.]
There’s also a copy of Think DSP lying on my HDD, but I never read it, so I don’t know if it’s any good.
The amount of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons
Interesting post. I wanted to highlight this part where you say it’s limited for “obvious reasons.” It’s probably better to explain that since it might not be obvious to folks trained to think transistors are free, the CPU’s are doing billions of ops a second, and everything is working instantly down to nanosecond scale. “How could such machines not see and process about everything?” I thought. What I learned studying hardware design at a high-level, esp on the tools and processes, was that the digital cells appeared to be asleep a good chunk of the time. From a software guy’s view, it’s like the clock signal comes as a wave, starts lighting them up to do their thing, leaves, and then they’re doing nothing. Whereas, the analog circuits worked non-stop. If it’s a sensor, it’s like the digital circuits kept closing their eyes periodically where they’d miss stuff. The analog circuits never blinked.
After that, the ADC and DAC tutorials would explain how the system would go from continuous to discrete using the choppers or whatever. My interpretation was the digital cells were grabbing a snapshot of the electrical state as bit-based input, kind of like requesting a picture of what a fast-moving database contains. It might even change a bit between cycles. I’m still not sure about that part since I didn’t learn it hands-on where I could experiment. So, they’d have to design it to work with whatever its sampling rate/size was. Also, the mixed-signal people told me they’d do some components in analog specifically to take advantage of full-speed, non-blinking, and/or low-energy operation. Especially non-blinking, though, for detecting things like electrical problems that can negatively impact the digital chips. Analog could respond faster, too. Some entire designs, like control systems or at least checking systems in safety-critical applications, stuck with analog since the components directly implemented mathematical functions well-understood in terms of signal processing. More stuff could go wrong in a complex, digital chip, they’d say. Maybe they just understood the older stuff better, too.
So, that’s some of what I learned dipping my toes into this stuff. I don’t do hardware development or anything. I did find all of that really enlightening when looking at the ways hardware might fail or be subverted. That the digital stuff was an illusion built on lego-like, analog circuits was pretty mind-blowing. The analog wasn’t dead: it just got tamed into a regular, synthesizable, and manageable form that was then deployed all over the place. Many of the SoC’s still had to have analog components for signal processing and/or power competitiveness, though.
You’re right, of course. On the other hand, I intended to make it a bit short (even though it didn’t work out as intended). I don’t know much about how CPUs work, though, I’m only in my first year.
I remember an exercise during maths class, in what’s probably the equivalent of middle or early high school, where multiple people were measuring the sea level at certain intervals. To one, the level remained flat; to another, it was wildly fluctuating; to a third, it fluctuated only slightly, and at a different frequency.
Because of the reasons you described, the ADC can’t keep up when the signal’s frequency is above half the sampling frequency (i.e. the Nyquist frequency).
(Interestingly, this causes the Fourier transform of the signal to be ‘reflected’ at the Nyquist frequency. There’s a graph that makes this clear, but I can’t find it. Here’s a replacement I quickly hacked together using Inkscape. [Welp, the text is jumping around a little. I’m too tired to fix it.])
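The reflection is easy to verify numerically. Here’s a small sketch (the rate and frequencies are my own illustrative numbers) showing that a sine above the Nyquist frequency produces exactly the same samples as one reflected below it:

```python
import math

RATE = 1000  # Hz, so the Nyquist frequency is 500 Hz

def sample_sine(freq, n=16):
    """Sample a sine of the given frequency at RATE Hz."""
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

high = sample_sine(700.0)    # 200 Hz above Nyquist...
alias = sample_sine(-300.0)  # ...is indistinguishable from -300 Hz
same = all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```

Once the samples are taken, no amount of processing can tell the 700 Hz signal apart from the aliased one, which is why you filter out everything above Nyquist before the ADC.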
The “changing a bit between cycles” might happen because the conversion doesn’t happen instantaneously, so the value can change during the conversion as well. Or, when converting multiple values that should happen “instantaneously” (such as taking a picture), the last part will be converted a little bit later than the first part, which sounds analogous to screen tearing to me. Then again, I might be wrong.
P.S. I’ll take “interesting” as a compliment, I just finished my last exam when I wrote that, so I’m a little tired now. Some errors are very probably lurking in my replies.
I’ll take “interesting” as a compliment
You were trying to explain some hard concepts. I enjoy reading these summaries since I’m an outsider to these fields. I learn lots of stuff by reading and comparing explanations from both students and veterans. Yeah, it was a compliment for the effort. :)
Even though I learned about the Fourier transform at university, this video gave me a new intuition: https://www.youtube.com/watch?v=spUNpyF58BY
Thanks very much for your detailed reply :). The math doesn’t scare me, it’s just very rusty for me since a lot of what I do doesn’t have as much pure math in it.
I appreciate the time you put into it.
Speaking specifically of Fourier transform: it behaves well for infinite signals and for whole numbers of periods of strictly periodic signals.
But in reality the period usually doesn’t divide the finite fragment we have (and also there are different components with different periods). If we ignore this, we effectively multiply the signal by a rectangle function (0… 1 in the interval… 0…) — and Fourier transform converts pointwise multiplication into convolution (an operation similar to blur). Having hard edges is bad, so the rectangle has a rather bad spectrum with large amplitudes pretty far from zero, and it is better to avoid convolution with that — this would mix rather strongly even frequencies very far from each other.
This is the reason why window functions are used: the signal is multiplied by something that goes smoothly to zero at the edges. A good window has a Fourier transform that falls very quickly as you go away from zero, but this usually requires the spectrum to have high intensity on a wide band near zero. This tradeoff means that if you want less leakage between vastly different frequencies, you need to mix similar frequencies more. It is also one of the illustrations of the reason why a long recording is needed to separate close frequencies.
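A quick numeric sketch of that effect (naive DFT and Hann window, all names my own): a sine whose period doesn’t divide the window length leaks into every bin, and windowing suppresses the leakage far from the true frequency at the cost of a wider main lobe.

```python
import cmath
import math

def dft_mag(x):
    """Magnitude spectrum via a naive DFT."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

n = 64
# 4.5 cycles per window: the period does not divide the fragment.
x = [math.sin(2 * math.pi * 4.5 * t / n) for t in range(n)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / n) for t in range(n)]

rect_spectrum = dft_mag(x)                               # hard edges
hann_spectrum = dft_mag([a * w for a, w in zip(x, hann)])  # smooth edges
```

Comparing bins far from 4.5 (say bin 20) shows the windowed spectrum falling off much faster than the rectangular one, exactly the tradeoff described above.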
It has already been posted a year ago: https://lobste.rs/s/su9hon/how_64k_intro_is_made
But it’s a good writeup anyway.
If you haven’t already, you really should watch it (or, if possible, download and run the actual binary).
As someone that got an autograph from professor Rijmen when he was my algebra professor
For those who are interested in what it looks like (I’m also a student at KUL).
This article doesn’t list any true “hacks”, just very standard stuff (which you should definitely know, though).
Here’s a much more interesting page, imo. “Round up to the next highest power of 2 by float casting” is quite delightful. The FastInvSqrt trick and creating tiny ELF files might be counted as a “low level bit hacks” as well.
Here’s one of my own: y[i] = u[i] + f_c * (y[i-1] - u[i]) is a lowpass filter (cutoff frequency f_c), which can be implemented using fixed-point maths quite easily, and it’s much lighter than the standard biquad filter. Though, it’s a first-order FIR filter, so the quality might be poorer.
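To illustrate the fixed-point claim, here’s a sketch in Q15 format (the function name is mine; f_c is scaled by 2^15 and the divide becomes a shift):

```python
def lowpass_q15(samples, f_c_q15):
    """One-pole lowpass y[i] = u[i] + f_c*(y[i-1] - u[i]) in Q15
    fixed point: f_c_q15 = round(f_c * 32768), integer maths only."""
    out, prev = [], 0
    for u in samples:
        prev = u + ((f_c_q15 * (prev - u)) >> 15)
        out.append(prev)
    return out

# A step input is smoothed toward its final value (f_c = 0.5 here).
step_response = lowpass_q15([100] * 4, 16384)
```

One multiply, one shift, and two adds per sample, which is why it’s so much lighter than a biquad on small integer-only hardware.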
creating tiny ELF files
Yeah, but that’s the shit in a way that also saves people bandwidth. It could potentially have wide, positive impact. :)
Unless I’m much mistaken, isn’t that just an exponential moving average (with alpha=1-f_c) and therefore an IIR filter?
y[i] = (1 - f_c) * u[i] + f_c * y[i-1]
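Expanding the brackets shows the two are the same recurrence; a quick numeric check (a sketch, names mine):

```python
def form_original(u, y_prev, f_c):
    """y = u + f_c*(y_prev - u), as written upthread."""
    return u + f_c * (y_prev - u)

def form_ema(u, y_prev, f_c):
    """y = (1 - f_c)*u + f_c*y_prev, the exponential moving average."""
    return (1 - f_c) * u + f_c * y_prev

match = all(abs(form_original(u, y, a) - form_ema(u, y, a)) < 1e-12
            for u in (0.0, 1.0, -2.5)
            for y in (0.0, 3.0)
            for a in (0.1, 0.5, 0.9))
```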
If you try to build it yourself, their build script also appears to download and inject additional code into the built artifact from marketplace.visualstudio.com during the build. I opened one two three issues.
My guess is that these practices that disrespect or completely ignore users’ privacy, like requiring an internet connection (whether to build or just to use a piece of software, even the OS itself), are so deeply baked into Microsoft’s mental culture now that there’s no going back. It’s just a given now that they feel entitled to grab and record whatever information they want from your machine as the price of using their software.
It’s not just Microsoft, many companies seem to be jumping on the same or similar bandwagon.
The same thing happens when you try compiling coreclr (called from here), there’s no way to properly bootstrap it. (And of course, there’s some “telemetry” in there as well, enabled by default during the build, before you have a chance to turn it off.)
The same thing happens when you try compiling coreclr
Wow, now I wonder whether this pattern happens in other Microsoft “open source” projects.
And you gotta love this comment:
# curl has HTTPS CA trust-issues less often than wget, so lets try that first.
Reminds me of these blog posts by viznut:
He sometimes sounds a bit crazy, but he has some valid points nonetheless.
It’s clear that he’s a smart guy who’s familiar with marxist critiques. The ‘resource leak bug’ post doesn’t seem like anything new, but he’s described it pretty lucidly. Thanks!
For a teenager and an aspiring computer programmer, the 00s were a great time to learn.
You can replace ‘00s’ with ‘80s’ or ‘90s’ and this would still be true (I’ve heard this sentiment many times). Perhaps not the 70s or earlier, since home computers were not really a thing then.
I think the key point in this discussion/rant is that computers are mostly an appliance and a consumption device. Tools for creating things with them have gotten harder to work with over time. Part of that has to do with the complexity of the systems, but part of it is also the paucity of profits that come from providing such tools. The solution, such as it is, seems to be adding more options to our C compilers.
Also: it turns out Mastodon is just as bad for long threads as Twitter.
I don’t think inherent complexity has much to do with why systems have gotten less flexible over time. I think we’re mostly looking at the result of changing norms.
Personal computers in the early 80s were marketed along the lines of “master for loops with this machine and you can take control over your life”. This wasn’t necessarily totally accurate (a lot of those machines had 8 bit integer arithmetic, so using them even for personal finances could be tricky), but it at least made clear that the point of the machine was that you’d set aside a half an hour with the manual and gain control over the machine in turn.
There was a concerted effort, spearheaded by Jobs, to turn general-purpose computers into single-function computing appliances, hide programming from non-technical users, and force the hobby community to turn itself into a much more professionalized “microcomputer software industry”.
I don’t think any of those things were really necessary – we used to have a distinction between workstations (big expensive machines used by professionals for important work and paid for by the company) and micros (small, buggy machines without memory managers, where commercial software was thin on the ground and you were really expected to write everything yourself even if you weren’t a programmer), and that division was really empowering for both sides (even as, by clocking in or out, you could cross the boundary between small computing and big computing).
I think you are misrepresenting the state of advertising and computers in the 80s.
Look at this ad for a TRS-80.
It’s not selling programming, it’s not selling for loops–it’s selling the software that solves the types of problems the user has, writing stuff and balancing the checkbook.
The distinction between workstations and micros is similarly incorrect. Workstations–say, HP or Sun or SGI boxes–were outnumbered by cheap PC-clones or IBM-ATs or Apple boxes or whatever, running boring business applications.
There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.
There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.
Absolutely. Don’t get me wrong: just because I argue that dev tools were easier to work with in the 80s doesn’t mean they were good. It’s a subtle difference that some fail to take into account.
I’m slightly exaggerating for the sake of emphasis. There was never a golden age of programmer-centric home computers, but there was a silver age of home computers that expected that most non-programmers would do a little programming, and catered to the middle-ground between novice and developer in a way that didn’t require a mission statement and career goals. (If you had a computer in your home, you probably wrote a little bit of code, and nobody was forcing you to write more.)
There were business-specific ad campaigns that focused on existing shrink-wrapped software, and were essentially ads for the software in question. But, those same manufacturers would have programming related campaigns. And, if you weren’t in the market for spreadsheets, you’d quickly find that if you wanted your computer to be anything more than an expensive paperweight, you’d need to either play a lot of games or learn simple programming concepts.
Sure, workplaces could use micros to run dedicated business apps. But, if someone bought that same machine for their own home, programming would be presented to them, by the machine and the machine’s documentation, as the primary way of interacting with the machine unless they bought third party shrink-wrapped software.
We’ve moved to a programmer/user division as part and parcel of a privileging of workplace deployment needs over exploration (as the thread mentioned) – a situation where users aren’t expected to be in full control over their machines anyway because they have sold their time to Moloch and must use their machine in only Moloch-approved ways. It’s fine that such systems exist (we all sacrifice our 40 hours a week into Moloch’s gaping maw), but it’s pretty stupid to have even the machines in our bedrooms set up like they expect our house to have a professional sysadmin and an internal dev team (to write new applications or edit the code of licensed stuff to meet our needs).
The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important. The alternative is that everybody who masters for loops thinks that they’re ready to work for IBM, and they end up using plaintext databases for password storage at a fortune 500 company because they don’t understand the difference between big computing and small computing.
The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important.
Why? Why does this matter to non-programmers?
I have an app that streams movies and porn. I have an app that lets me tweet at other people who feel that tweeting is important. I have a web browser and an app to file taxes. I have an app to go look at my bank transactions and remit rent. I have an app to collect e-books I never get around to reading.
What problems do I have that require programming or number crunching that are not solved, better and more easily, by just using something somebody else wrote and leases back to me for the convenience?
And if I need to do something really weird, why not just hire a coder to solve it for me?
~
I’m not even being facetious here. The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook, how to debate properly, how to shoot, or any other thing we used to expect functioning adults to be able to do.
It doesn’t matter anymore. It’s not required. Federation of skills and services is inefficient.
It’s obsolete.
First off, spreadsheets are the most popular programming environment ever created. Yes, spreadsheets. It’s a form of programming, only it’s not called programming, so people do it. [1]
Second, people not exposed to “programming” are often unaware of what can be done. A graphic designer is given 100 photos to resize. Most I fear, would, one at a time, select “File”, then “Open”, then select the file, then “Okay”, then select “Image”, then “Resize”, then type in a factor, hit “Okay”, then “Save” and then “Okay”. One Hundred Times.
I think there’s an option in Photoshop to do that as a batch operation but 1) that means Photoshop has to provide said functionality and 2) the user has to know about said option.
As a “programmer”, I know there exist command lines tool to that and it’s (to me) a simple matter to do
for image in src/*; do convert -resize 50% "$image" dest/"$(basename "$image")"; done
(I think I got the syntax right) and then I can go get a lovely beverage while the computer chugs away doing what the computer does best—repetitive tasks. It’s a powerful concept that many non-programmers don’t even realize exists.
[1] It’s scary how much of our economy depends upon huge spreadsheets passed around, but that’s a separate issue.
Oh, I’m quite aware of spreadsheets–but that’s not programming by some people’s definition, because they don’t come with a bunch of manuals that non-programmers can cavort through.
As for your second point, I see what you’re getting at–but the majority of people will keep doing things the dumb, slow way because they don’t think that learning a new way (programming or not) is easy enough or because there is simply no incentive to be more efficient.
The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook
Yes, it is. If you know how to cook, then you aren’t at the mercy of McDonalds.
I’m not really arguing for everybody “knowing how to code” in the sense that some people use that phrase.
Every UI is a programming language. We’re stuck with shitty UIs because we think our users are unable to cross some imagined chasm of complexity between single-use and general-purpose, but that chasm is mostly an artifact of tooling we have invented in order to reproduce the power structure we benefit from.
There’s no technical reason that you need to learn how to program in order to program – only social reasons. And, I don’t consider that acceptable.
It’s fine if people decide to remain ignorant of programming (even when it’s literally easier to learn enough to automate some problem than to solve it with shrink-wrapped software). It’s not fine that the road to proficiency is being hidden.
Ultimately, if a non-programmer requests a programmer to write some code, it’s typically done for money. It’s done for money because there’s a gap between the professional class of programmers (who write professional code with professional tools for money) and the non-programmer (who must embark upon a quest to become a programmer, typically with shades of career-orientation, before writing a line of code). But, the ability for a non-programmer to say “I’m not willing to pay you to spend five minutes writing this code; I’m going to spend twenty minutes and do it myself” is missing, because it’s not possible with current tooling to go from zero to novel-yet-useful code in 20 minutes. So, programmers (as a class) get to overvalue their services by using tools that require more initial study to use.
Everybody knows how to cook (delta some tiny epsilon – very rich people who can eat out every night, small children, the institutionalized). Most cooks are not chefs (or even line cooks) – their success in cooking hinges on whether or not they are willing to eat the food they make, and so they don’t need to live up to the standards of paying customers. This provides a steady stream of people who already know how to cook enough to know that they like it, who can graduate on to professional cooking jobs, but it also provides a built-in competition for those professionals. And, it’s something that is only so widespread because there is an expectation that everyone can learn, an understanding that everyone benefits from doing so, and a wide variety of tools and learning materials covering the entire landscape from absolute novice to world-famous expert. Nobody mistakes being able to boil an egg for being able to stuff a deboned whole chicken.
When there is no place for absolute amateurs, everybody with a minimum competence gets shunted into the professional category. This is a problem when there’s no licensing system. It’s a huge problem with the tech industry. We need to stop it. The easiest way to stop it is to make it easy for non-programmers to compete on relatively even ground with professionals – which isn’t as hard as it looks, because users have many needs that are too rare to be met by a capitalist system.
(To give a cooking example: I like to put cinnamon and nutmeg in my omelettes. No professional cook would ever do that. So, if I want that wonderful flavor combination, I need to do it myself. Every user has stuff like that, where they would like their software to work in a particular way that no professional programmer will ever implement.)
ooh ooh pick me pick me
Most of the stuff we use is created by companies, which are trying to make it maximally useful for a given effort, so they end up covering, like, 90% of use cases. That could mean every person can get 90% of their stuff done with it, but it could also mean that it’s perfect for five people and only 80% useful for the other five. Programming can help (not fix, but help) patch up that 20%.
In practice, though, most programming languages aren’t suited for duct-taping consumer apps. When I say “everybody would benefit from learning to program”, I’m thinking things like spreadsheets, or autohotkey, or maybe even javascriptlets.
Yeah. There’s a tooling issue, in that most programming languages these days are made for programmers, and the ones that aren’t don’t play nice with the ones that are. This is a huge gap, and one that benefits capital exclusively.
Tools for creating things with them have gotten harder to work with over time.
Really? Every modern browser has a built-in development environment!
This is a very good point. I was looking at this issue in the light of my own experience, which has been with personal computers of various vintages. But most new users come into contact with computers through phones and tablets now!
(I have copied program listings from magazines into my ZX Spectrum, to date myself).
If we confine ourselves to MacOS/Windows, even these have good scripting environments that can be capable programming environments - PowerShell beats bash in this regard, I think.
As an aside, in last year’s Advent of Code, a post was made on the subreddit complaining that an assignment built on a previously solved assignment (i.e. the code was to be reused). It turns out that this person solved the assignments on their mobile device and discarded the code after submitting a correct answer.
Every modern browser has a scripting language sandbox with a giant, awkward, broken, poorly-documented API, which you need internet access and a guide to even start on. No editor either. And then, to share your work, you need to buy an account on somebody else’s web server and learn to use SFTP. Most users don’t even know that writing javascript is something a regular person can do.
In comparison, early home computers (including the IBM PC) ran BASIC by default at boot-up. You would need to go out of your way to override this by putting in a cartridge or floppy before you started the machine, and in many cases the machine didn’t come with any software other than BASIC. And, your machine would ship with a beginner’s guide intended to teach BASIC to people who could barely read, an “advanced BASIC” guide for people who couldn’t code but had read the beginner’s guide already, full API documentation, schematics for the machine, the source code for the BASIC interpreter (sometimes), and BASIC code for a handful of demo programs. An effort was made to ensure that every machine showed a clear, easy to follow path from end user to mastery over one programming language (and, typically, you got an only-slightly-muddier path in the documentation itself for progressing to a basic grasp of assembly language or machine code).
For most people who have a web browser, programming is still “something somebody else does”. For anybody with an Apple II, Vic-20, C-64, TRS-80, Sinclair, BBC Micro, PC-8300, or really any home computer manufactured between 1977 and 1983 save the Lisa, programming is “something I could do if I spent a couple hours with these manuals”.
This is incorrect. Firefox gives you an editor in the form of the scratchpad. MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat, the W3C spec + caniuse works as well. There are issues with the web, yes. Ease-of-entry is not one of them.
Also while new and experimental features are buggy, by and large browsers are not buggy or awkward from a web developer or consumer’s POV.
MDN documents almost all of the web APIs currently supported by browsers, […]
That’s exactly what enkiv2 is saying:
[…] which you need internet access and a guide to even start on.
There are issues with the web, yes. Ease-of-entry is not one of them.
I disagree: you need to know what you’re doing in order to start making things. Most people don’t know how to open the devtools. (EDIT: and then, there’s the “ecosystem”, a huge pile of overengineered abstraction layers causing nothing but bloat.)
by and large browsers are not buggy or awkward from a web developer or consumer’s POV.
Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.
Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.
I should clarify. If you run a “normal” browser on a “normal” OS you won’t run into many issues. Also, compared to the vintage computers the OP is referring to (especially the Apple II, which had its startup sound specifically engineered to sound more pleasant since it crashed so often), the web is solid as a rock.
Facebook & twitter are slow as molasses & glitchy on stock chrome on a stock windows 10 install on a brand new machine.
I would argue that’s the developer’s fault. On vintage machines (and calculators), it’s just as easy or easier to produce a badly optimized solution that runs horribly. The current trend in web development is to force the client to do all the work which causes issues on less powerful machines.
Also, that’s anecdotal evidence. My experience with the Facebook and Twitter web experience, using Chrome on Windows 10 on a Thinkpad T540p, has been pretty good. Unless you have solid evidence that the web in general is slow and glitchy, that statement has no backing.
Man, if you’re going to consider a systemic problem (like “almost every major web app is slow and glitchy, and most of the minor ones too”) as though it’s a cluster of unrelated particulars and ask for proof of every one, I don’t know what to tell you. Using the web at all is pretty good evidence that the web is slow and glitchy, and the experience of writing web apps explains why they would be expected to be slow and glitchy in a pretty convincing way.
I mean, maybe you just have really low standards? But, I don’t think it’s OK to cater to low standards in a systematic way, even if you can get away with it.
Do you consider lobsters slow and glitchy? What about most blogs? Stack overflow? I can name tons of sites that get it right. The ones that don’t in my experience are few and far between. Facebook is the only popular site I can think of at the moment, but I really don’t think that counts since their native mobile app sucks just as much or more. Which would imply it’s facebook’s fault, not the web’s.
News sites are generally bad but that’s an issue with ads, not the web itself. There are cultural problems in web development but from a purely technical pov I don’t think the web is a bad platform.
Do you consider lobsters slow and glitchy?
It took in excess of 20 seconds to load this comment, on a broadband connection. What do you think?
What about most blogs?
The only blogs that have what I would consider acceptable overhead are non-CMS-based minimally-formatted static HTML sites like prog21. The average blogger or medium blog takes tens of seconds to load. Depending on the platform, sometimes a blog page becomes a problem in the middle of reading an article, causing the tab to crash. (This isn’t necessarily an ad thing – it’ll happen on medium, which has no ads and no third-party or user-supplied scripts.)
Stack overflow?
Stack overflow has, on occasion, taken more than 10 minutes to load a single page on my machines.
So, from my perspective, most web sites do not have acceptable performance. Even fast sites are slower than they could be, given absolutely minimal effort. (And, this is not even considering the embarrassing level of bloat introduced by web standards – just using HTML and HTTP expands the number of bytes that need to be transferred across the network to render a static page by a factor of eight or more over markdown+gopher.) In other words, even if performance was acceptable from a user perspective (and I’m a professional developer with a newish machine that’s been tuned to improve performance – anything that’s slow for me is a hundred times slower for the proverbial grandmother), there’s a lot of low-hanging fruit in terms of improvement.
Firefox gives you an editor in the form of the scratchpad.
Hidden deep enough in menus that, unless you knew it existed and were looking for it, you would never find it.
MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat the W3C spec + caniuse works as well.
Doesn’t ship with the browser for offline use. Isn’t linked to from the default home page.
Ease-of-entry is not one of them.
I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.
by and large browsers are not buggy or awkward from a web developer or consumer’s POV
I disagree completely. Web developers are constantly complaining about things being awkward, inconsistent, or buggy – and front-end and back-end developers who switch to working with web standards for a project or two have every reason to sympathize.
Just because web development has become marginally easier since 2006 doesn’t mean it was ever acceptable, in terms of effort/reward ratio.
I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.
This is flagrantly false, between Stack Overflow, MDN, MSDN, W3Schools, and others.
There is so much more information out there, better presented and better organized and better indexed and at lower cost, than there ever was in 1981.
If you need to be told that it exists, then it isn’t accessible to people who identify as non-programmers.
I’m not talking about the ease with which someone who has already determined that they would like to become a professional programmer can find documentation. That, obviously, has improved.
I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.
(Some people in very particular fields still do learn to program despite themselves. Those people are mostly research scientists. I don’t consider that an improvement.)
I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.
They only have to Google “How do I build a website”, “How do I write a website”, like this.
Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.
Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.
Exactly. I get nostalgic about my Casio fx9750’s programming manual, but you won’t find me claiming it was a better resource than anything you could have found online. For a beginner, there really isn’t a better alternative to solid documentation and question-and-answer sites.
If you find it online, it’s not a piece of documentation you have – it’s a piece of documentation you seek, that happens to be free and delivered quickly. You need to know that it exists, and you need to know how to find it, and both of those things are barriers.
For somebody to google “how to I write a website” they need to believe that a website is the appropriate way to solve whatever half-understood problem they have. Their problem may be something more like “how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script. Searching for this will not teach them how to solve the problem, because they didn’t put anything about programming in the query, because they don’t know that the best way to solve this is by writing some code. They will instead get zero relevant results, and instead of thinking “I should write code to do this”, they will think “I guess it can’t be done”.
“how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script.
What?!
Paypal will let you export a CSV of invoice summaries containing information about attachment names. So, the sensible way is to export that CSV and use shell tools to sort by attachment extension – in other words, write a couple lines of code to handle a corner case that the original developers of the site couldn’t foresee.
(This particular example is taken from my life. I’ve commissioned a bunch of artworks, and I want to separate those records from other unrelated invoices, so that I know which works have been finished and paid for even though it’s taken the better part of a year for them to be made & they’re not in any particular order.)
Why not write a couple lines of js in a greasemonkey userscript so you don’t have to go to the trouble of exporting as CSV, opening a terminal, and running a shell script?
Because attachments are never listed in the summary page (which also has a very small maximum pagination). Web services are intended for display, and not made accessible for further user-driven hacking – particularly financial systems like paypal – so doing this kind of work in a browser is made even more awkward than it otherwise might be.
Even had we a reasonable page size (say, ten thousand, instead of twenty) and the necessary information, javascript is going to be a much more awkward solution – we need to navigate arbitrarily-labeled tag soup in order to handle what is essentially tabular data. Using shell tools (which are optimized for tabular data) is easier.
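For the record, the shell-tools version really is only a couple of lines. A sketch, assuming a hypothetical `invoices.csv` whose third field is the attachment filename (the real PayPal export’s column layout will differ, and a plain `awk`/`sort` split doesn’t handle quoted fields containing commas):

```shell
# Hypothetical sample export: description, amount, attachment filename.
cat > invoices.csv <<'EOF'
sketch commission,120,final.png
hosting,10,receipt.pdf
logo commission,80,logo.jpg
EOF

# Prefix each row with the attachment's extension, sort on it, strip it again.
awk -F, '{ n = split($3, p, "."); print p[n] "," $0 }' invoices.csv \
  | sort -t, -k1,1 \
  | cut -d, -f2-
```

Navigating arbitrarily-labeled tag soup in JavaScript to recover the same tabular data would take considerably more code than this.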
Even so, this whole discussion is about what we, as hackers, would do. What hackers would do is basically irrelevant. The problem is that what a non-hacker will do to solve such a one-off problem is see if someone has already solved the problem, find that nobody has, and give up – when the ideal solution is for the non-hacker to be able to hack enough to solve the problem on their own.
Glossing over why sudo would mysteriously not work, this writes a bunch of noise out to the monitor:
> sudo cat /dev/urandom > /dev/fb0
-bash: /dev/fb0: Permission denied
That command does not work because the redirection is performed by the shell which does not have the permission to write to /dev/fb0. The redirection of the output is not performed by sudo.
How to do it here
You can use tee as well:
cat /dev/urandom | sudo tee /dev/fb0
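To spell out the mechanics: the `>` redirection is performed by the invoking shell before `sudo` ever runs, so the privileged process must be the one that opens the file. A sketch of the principle, demonstrated with a temporary unwritable file (the framebuffer forms are shown as comments, since they need real hardware and root):

```shell
# The invoking shell opens the redirection target *before* the command runs,
# so it is the unprivileged shell that needs write permission.
f=$(mktemp)
chmod 000 "$f"
sh -c "echo noise > '$f'" 2>/dev/null || echo "redirection denied"

# For the framebuffer, make the privileged process do the redirection itself:
#   sudo sh -c 'cat /dev/urandom > /dev/fb0'
# or let a privileged tee open the file (discarding its stdout copy):
#   cat /dev/urandom | sudo tee /dev/fb0 > /dev/null
chmod 600 "$f"; rm -f "$f"
```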
It’s quite funny that a person who doesn’t know how the shell and sudo work (nb: use doas or su instead) tells us a lengthy story about writing to framebuffers in Linux.
nb: use […] su instead
You’d better not: it’s vulnerable to the age-old TIOCSTI trick:
#include <sys/ioctl.h>

int main(void) {
    /* Push "exit\nid\n" into the terminal's input queue one character at a
     * time; note that TIOCSTI takes a pointer to the character, not its
     * value. The injected "exit" ends root's su session, then "id" runs
     * as root on root's own terminal. */
    const char *c = "exit\nid\n";
    for (; *c; ++c)
        ioctl(0, TIOCSTI, c);
    return 0;
}
(compiled and run from the target user’s .bashrc or similar)
This can be deterred by using su -P, but it’s marked as experimental (according to the manpage), and I haven’t seen anyone using it.
TIOCSTI
More info here, as I hadn’t heard of it previously: https://ruderich.org/simon/notes/su-sudo-from-root-tty-hijacking
This probably works for the author if they’re in the video group:
$ ll /dev/fb0
crw-rw---- 1 root video 29, 0 5 apr 15:37 /dev/fb0
EDIT: I’m stupid, didn’t see that part of the article.
The 4k and 64k compos were pretty good. Live recordings can probably be found on the Twitch page.