Yeah, me too. I really love D. Its metaprogramming alone is worth it.
For example, you can write a parser generator that runs entirely at compile time.
This is a good point. I had to edit out a part arguing that a language without major adoption is less suitable, since it may not get the resources it needs to stay current on all platforms. You could have the perfect language, but if it somehow failed to gain momentum, it becomes something of a risk anyway.
That’s true. If I were running a software team and were picking a language, I’d pick one that appeared to have some staying power. With all that said, though, I very much believe D has that.
In my opinion, until OCaml gets rid of its GIL, which they are working on, I don’t think it belongs in this category. A major selling point of Go, D, and Rust is their ability to easily do concurrency.
Both https://github.com/janestreet/async and https://github.com/ocsigen/lwt allow concurrent programming in OCaml. Parallelism is what you’re talking about, and I think there are plenty of domains where single process parallelism is not very important.
You are right. There is Multicore OCaml, though: https://github.com/ocamllabs/ocaml-multicore
I’ve always just written off D because of the problems with what parts of the compiler are and are not FOSS. Maybe it’s more straightforward now, but it’s not something I’m incredibly interested in investigating, and I suspect I’m not the only one.
As of last year it’s open source, joining LDC; there were discussions on Hacker News and Reddit at the time.
The content is good, but this kind of UI jitter is really distracting: https://gfycat.com/PotableSameGoldenmantledgroundsquirrel
Thanks for pointing that out. We just pushed up a fix, and the issue should hopefully be resolved now.
Seems that this implements a basic slab cache on top of malloc (or other) using C++ map data structures. Though I think maybe technically a slab cache allocates by dividing up a contiguous block of memory into specifically-sized chunks instead of piecing individual small chunks together in a queue. You can see in your example on your github page that your two 100-byte allocations are not at contiguous memory addresses.
I don’t know much about the speed of C++ things, but it would be nice to see some speed comparisons to malloc, since I think malloc does some binning/caching of its own instead of immediately returning stuff to the system.
Also, in allocate_memory(), if _alloc_fun() returns NULL due to the underlying allocator (e.g., malloc) failing, it will still increment _used_mem, which will lead to incorrect accounting. It might also cause some problems when the NULL pointer is freed.
Thanks very much for your comment! I learned a lot.
Also, in allocate_memory(), if _alloc_fun() returns NULL due to the underlying allocator (e.g., malloc) failing, it will still increment _used_mem, which will lead to incorrect accounting.
Yes, this is a bug and I have already fixed it, thanks!
Freeing a NULL pointer shouldn’t be an issue (at least when using free); the C99 standard draft says that:
The free function causes the space pointed to by ptr to be deallocated, that is, made available for further allocation. If ptr is a null pointer, no action occurs.
Yes that is correct according to the standard. The issue in ump was with the accounting inside ump, but nanxiao fixed that.
WWDC is a developers conference, not a consumer/product conference. Why would new hardware be announced in WWDC?
Historically, that’s just how Apple does it. Macworld says:
WWDC kicks off with a big keynote, at which Apple execs typically introduce the latest developments in Apple operating systems (iOS, macOS, tvOS, watchOS) and introduces new hardware products, with a focus on those that developers care about most. That is, primarily Mac computers (especially the Pro lines).
Beyond the historical “they always do it”, it’s also because Apple doesn’t differentiate between software and hardware releases from a developer’s perspective. Apple is a consumer electronics company, and views the entire stack, software through hardware, as a single releasable unit.
Every single iOS developer, and every single Mac developer who writes native apps, needs a Mac. If Apple, for example, fucks up the keyboard on their laptops, then every one of those developers who uses a laptop has to live with that until Apple fixes it.
(I know it’s technically possible to make an iOS app on another OS and then only sign it using a Mac, but I think the point still stands; in general, Apple’s developers use Macs.)
The site’s CSS could use some media queries: https://cloud.mort.coffee/index.php/s/DNLt7PdysZw6Ast/preview
“but WebAssembly is designed to run safely on remote computers, so it can be securely sandboxed without losing performance.”
Assuming the hardware works correctly. This assumption is failing more often now. You pretty much have to know what the code is going to do up front to securely run it.
WebAssembly is the new SWF (Adobe Flash)?! A binary format that we expect to run and do “totally not malicious” things on our computers, delivered via our web browser.
I think it’s certainly the case that WASM has its use cases, and this may be one of them. But I’m much more skeptical of it all than I am excited.
We’re already running untrusted JavaScript. A binary representation of JavaScript would be exactly equally safe. WASM is a binary representation of a language that’s simpler than JS and has the same or fewer capabilities. Whether JS is safe or not can be debated, but WASM isn’t a step down; it’s at worst a lateral move.
You could argue that it’s harder to inspect WASM (you need to translate it into its textual representation, and then read a language much lower level than JS), but really, reading through minified and obfuscated JS, or JS compiled from C, isn’t exactly easy either.
I haven’t looked at it deeply yet, so I have no opinion. I’m highly skeptical by default due to the Worse is Better effect in web tech.
The APIs that come along to talk to the environment outside of the wasm runtime are going to be the deciding factor here. Safely crunching numbers doesn’t get you very far.
Sure. I’m not arguing for or against WASM. I’m just saying that current wasm doesn’t have any of those APIs at all.
(Astute readers will notice they are all AMD (Socket AM4) motherboards. The whole Meltdown/Spectre debacle rendered my previous Intel system insecure and unsecurable so that was the final straw for me — no more Intel CPUs.)
I mean, it’s not like AMD’s that much better in that regard….
Not entirely true; AMD allegedly didn’t have the bugs which allowed a process to read kernel memory, only the bugs that let userspace applications read each other’s memory. (Though that’s not exactly great either…)
I just wonder, how is it possible to have 124000 employees, yet leave such an important application untouched for so many years? The line ending thing is really important; one could argue that MS just hasn’t cared about non-Windows systems, but Notepad has also been broken in other ways: ctrl+backspace inserts a square instead of deleting a word, for example.
MS exhibits this behavior towards other important apps too. Explorer.exe still doesn’t use the “new” APIs to support paths longer than a couple hundred characters, meaning people end up with folders they can’t delete using Explorer. Explorer’s text input fields also insert a square when you hit ctrl+backspace, just like Notepad. CMD.exe was only recently updated to let the window be maximized.
I should specify why Notepad is such an important application, because it might not be obvious; everyone who needs a text editor just downloads Atom/VS Code/Sublime/Notepad++, right?
No. There’s lots of people who aren’t programmers, who usually don’t need a text editor, but need to change a config file from time to time. Most people who install mods for games probably need to change a config file, or people who want to play a game but have a slightly unorthodox setup (surprisingly many games hide away settings like disabling the FPS lock in some INI file instead of exposing it through the GUI). Or maybe you just need somewhere to write a Reddit/Lobsters/HN comment or post that’s slightly longer than what’s comfortable to type in the text box. Maybe you need to reboot into BIOS to see some information, and want to save your draft without posting the comment yet. These are the people Notepad is for, and while it’s not important that Notepad is good for those uses, it’s important that it’s not broken.
I often use Notepad for anything that I’m looking at quickly and likely won’t have to interact with. “What’s in this file?”, etc. It starts so fast compared to those other editors. It’s not tabbed (yet), so it’s not taking over whatever content I have open. It never starts maximized for me, and that creates a mental shift that tells me it’s temporary.
Yeah, especially now that Electron-based editors, which have to start an entire instance of Chromium before you get to do anything, are so popular, having a really lightweight editor to just check the contents of a file and maybe change a value is nice.
Most of the time when I need to do that I’m in Linux, so I just open vim in my popup terminal, but I occasionally use Notepad on Windows (and then regret it once I need to delete a couple of words and just insert squares instead).
Vim/Linux is a great comparison, actually. I use Notepad the same way I use cat, less, nano, or vim (default, uncustomized) in the terminal.
I used to use Notepad for looking at untrustworthy stuff since it was easy to sandbox and already light on resources. I also used it as a default editor since it was on every system. WordPad for memos or official looking stuff where possible since RTF was super-light compared to MS Word. Also easier to sandbox.
My first thought seeing it was “Ah, I remember that bug I ran into in high school… 25 years ago.” I guess updating it wouldn’t move a meaningful business metric and their users enjoyed the world’s most robust third-party software market. But few users edited between operating systems, so it wasn’t broken for the users with light editing needs.
I find it a little ironic that, using the open-web browser, I am not able to inspect the sessionstore-backups/recovery.jsonlz4 file after a crash to recover some textfield data, as Mozilla Firefox uses a non-standard compression format which cannot be examined with lzcat, nor even with lz4cat from ports.
The bug report about this lack of open formats was filed three years ago, and suggests lz4 was actually standardised long ago, yet this is still unfixed in Mozilla.
Sad state of affairs, TBH. The whole choice of a non-standard format for users’ data is troubling; the lack of progress on this bug, after several years, no less, is even more so.
https://bugzilla.mozilla.org/show_bug.cgi?id=1209390#c10 states that when Mozilla adopted LZ4 compression there wasn’t a standard to begin with. Yes, no one has since migrated the format to the standard variant, which sucks, but it isn’t like they went out of their way to hide things from the user.
It was probably unwise for Mozilla to shift to using that compression algorithm when it wasn’t fully baked, though I trust that the benefits outweighed the risks back then.
This will sound disappointing to you, but your case is as edge-caseish as it gets.
It’s hard to prioritize those things over things that affect more users. Note that other browser makers have security teams larger than all of Mozilla’s staff. Mozilla has to make those hard decisions.
These jsonlz4 data structures are meant to be internal (but you’re still welcome to use the open-source implementation within Firefox to mess with them).
I got downvoted twice for “incorrect”, though I tried my best to be neutral and objective. Please let me know what I should change to make these statements more correct, and why. I’m happy to have this conversation.
Priorities can be criticized.
Mozilla obviously has more than enough money that they could pay devs to fix this — just sell Mozilla’s investment in the CliqZ GmbH and there would be enough to do so.
But no, Mozilla sets its priorities as limiting what users can do, adding more analytics and tracking, and more cross promotions.
Third-party cookie isolation still isn’t fully done, while at the same time money is spent on adding more analytics to AMO, on CliqZ, on the Mr Robot addon, and even on Pocket, which still isn’t open source.
Mozilla has betrayed every single value of its manifesto, and has set priorities opposite of what it once stood for.
That can be criticized.
Wow, that escalated quickly :) It sounds to me like you’re already arguing in bad faith, but I think I’ll be able to respond to each of your points individually in a meaningful and polite way. Maybe we can uplift this conversation a tiny bit? However, I’ll do this with my Mozilla hat off, as this is purely based on public information, and I don’t work on Cliqz or Pocket or any of those other things you mention. Here we go:
As someone who has also gotten into arguments 1-3 against Firefox, I guess you’ll always have to deal with nit-picking criticism, because you’ve written “OSS, privacy respecting, open web” on your chest. Still, it is obvious you won’t implement an lz4 file upgrade mechanism (oh boy is that funny when it’s only some tiny app and its sqlite tables), because there are much more important things than two users not being able to use their default tools to inspect the internals of Firefox.
Sure, but it’s obvious that somehow Mozilla has enough money to buy shares in one of the largest Advertisement and Tracking companies’ subsidiaries (Burda, the company most known for shitty ads and its Tabloids, owns CliqZ), where Burda retains majority control.
And yet, there’s not enough left to actually fix the rest.
And no, I’m not talking about Telemetry — I’m talking about the fact that about:addons and addons.mozilla.org use proprietary analytics from Google, and send all page interactions to Google. If I wanted Google to know what I do, I’d use Chrome.
Yet somehow Mozilla also had enough money to convert all its tracking from the old, self-hosted Piwik instance to this.
None of your arguments fix the problem that Mozilla somehow sees it as higher priority to track its users and invest in tracking companies than to fix its bugs or promote open standards. None of your arguments even address that.
The about:addons code using Google Analytics has been fixed and is now using telemetry APIs, adhering to the global control toggle. I’ll update with the link when I’m not on a phone.
Either way, Google Analytics uses a mozilla-customized privacy policy that prevents Google from using the data.
If your tinfoil hat is still unimpressed, you’ll have to block those addresses via /etc/hosts (no offense.. I do too).
I won’t comment on the rest of your comment, but this is really a pretty tiny issue. If you really want to read your sessionstore as a JSON file, it’s as easy as git clone https://github.com/Thrilleratplay/node-jsonlz4-decompress && cd node-jsonlz4-decompress && npm install && node index.js /path/to/your/sessionstore.jsonlz4. (that package isn’t in the NPM repos for some reason, even though the readme claims it is, but looking at the source code it seems pretty legit)
Sure, this isn’t perfect, but dude, it’s just an internal datastructure which uses a format which is slightly non-standard, but which still has open-source tools to easily read it - and looking at the source code, the format is only slightly different from regular lz4.
- Because all product decision authority rests with the “Product Owner”, Scrum disallows engineers from making any product decisions and reduces them to grovelling to product management for any level of inclusion in product direction.
i.e. the product has a clear owner, and you build the thing that customers want rather than the thing that engineers want.
2-3,9, most of 11: true, and these things result in better software.
This lack of ownership results in: poor design, and lack of motivation (“It isn’t my thing”, “It was broken when I started working on it”).
That’s the opposite of my experience. Lack of ownership means I don’t need anyone’s permission to improve code I’m working on, and means the design can be evolved by the people actually working on it, leading to better design.
5-7, 14: not my experience.
8, 12: don’t do that then.
Do managers or Product Owners track and estimate every task they engage in
Track and estimate tasks? No, because those are reactive positions; QA folk or people on support duty this week don’t estimate either. The point of estimating is to be able to make prioritisation decisions, so it only really applies when there’s a backlog of work to choose between.
with little or no say in what they work on?
Yes? Managers and owners are expected to solve everything people bring to them; they don’t get any choice.
Are they required to present burn down charts that show that they are on target to finish?
No, nor are developers.
Are they required to do bi-weekly sell-off meeting to justify their activities?
No, again like QA or support people.
13, 14: no actual argument to justify those claims, not my experience.
- Scrum ignores the fact that any task that has been done before in software does not need to be redone because it can be easily copied and reused. So, by definition, new software tasks are truly new territory and therefore very hard to estimate.
Disagree. Scrum does everything reasonably possible to acknowledge that estimation is hard: estimating in officially meaningless points, daily standups to give you an assessment point to realise if a task is diverging from its estimate and reassess if need be. At the same time, ultimately you do want some way to have engineering estimates feed into task prioritization decisions - you need some minimal level of estimation to be able to make the “do I do task A or task B or task C this week” decision. Scrum accomplishes that better than anything else I’ve seen.
i.e. the product has a clear owner, and you build the thing that customers want rather than the thing that engineers want.
It’s amazing to me how much bellyaching I hear over this. At $DAYJOB I have people coming to me all the time trying to figure out how to get their priorities done vs. what the product owners want… Why do you have priorities? Are you paid to build a product or to play with your toys?
Maybe… don’t look at your colleagues like children who don’t want to do work and would rather play with their toys. Maybe look at them like adults who are invested in the product in their own way. Maybe you’d understand them better then.
Disagree. Scrum does everything reasonably possible to acknowledge that estimation is hard: estimating in officially meaningless points
In theory this is true. In practice every single time I’ve seen or used scrum the points have devolved into a meaningful value instead. You can shout as loudly as you want that this is not doing scrum right but human nature says that doesn’t matter. We will inevitably assign a real metric to the points before too many sprints happen. We’ll treat them as time to ship, an estimate of effort required, complexity of problem. It’s pretty much inescapable that this will happen.
I don’t disagree as such. The way I look at it is: given that we want to feed some level of engineering cost estimate into the planning process, what’s the cheapest/least damaging form of estimation we can do that will suffice for making prioritisation decisions?
From this perspective points are a small, incremental improvement over estimating in terms of e.g. programmer-days: they blunt some of the sting of “I only got 2 days’ worth of work done this week”, they make it a little less awkward that Bob gets 25% more done than Jim. They make it a little harder for a manager to breathe down your neck with “you estimated this as 3 days, it’s now 4 days since you started, why isn’t it done?”
They’re not infallible on any of these aspects, but they’re a little better than using a “real” metric directly. The fact that they’re officially meaningless give you a tool to push back with if people start to misuse them.
They make it a little harder for a manager to breathe down your neck with “you estimated this as 3 days, it’s now 4 days since you started, why isn’t it done?”
This sounds funny to me, because I’m pretty good at estimating long and complex tasks.
My average error is around 2%, and the bigger the task, the more precise the estimate.
It’s not magic, it’s just that for bigger tasks managers give me more time to explore the matter.
And I’m talking about projects that require small (3-7) to medium (10-20) sized teams.
What’s funny?
Most managers (that is, all the non-technical ones except one) prefer a small estimate to a precise one.
They fight back a lot. My usual answer is: “Why did you ask me? Write the numbers you like!”
In one case, some years ago, one did. Guess what happened?
My average error is around 2%, and the bigger the task, the more precise the estimate. It’s not magic, it’s just that for bigger tasks managers give me more time to explore the matter.
Either there’s some “magic” you’re not mentioning, or you’re working in a very strange field for software. As the article puts it:
any task that has been done before in software does not need to be redone because it can be easily copied and reused. So, by definition, new software tasks are truly new territory and therefore very hard to estimate.
Certainly I’ve watched teams put a lot of time and effort into “exploring” and estimating, and come up with numbers that turned out to be orders of magnitude out. So it’s not as simple for ordinary programmers as just “spend some time on it”.
No magic, really.
You just need twenty years of experience on a lot of different projects (with several different stacks), good tools (which I design for my own team), and enough information.
The only “strange” thing is that we work for big banks.
Well, bully for you, but most working programmers don’t have twenty years of experience, and in most fields you never have “enough” information. (I’m not surprised that projects in big banks would be unusually predictable, but that doesn’t help the rest of us).
I’d rather write Pug, honestly (it’s a Haml-like HTML dialect). Am I the only one? I don’t want to remember all these rules for lists and line breaks and whatnot. Markdown strikes me as both simple and confusing.
If there’s a compelling non-HTML target for writing Markdown, I totally understand; power to them. I never found one myself.
Pug is much more oriented towards layout templates than text documents. I guess you can write articles in Pug, but I wouldn’t want to do that.
LaTeX is a very compelling non-HTML target!
Any time I write anything for university which has to be handed in as PDF, I write in Markdown (with some LaTeX if necessary) and use Pandoc to output to PDF.
Pug doesn’t look that satisfying to myself, but I try to do as much as possible in org-mode as it supports things I care about such as tables.
I was hoping to get some tips I could use, but my use of a CAPTCHA isn’t on the web; it’s to allow legacy Telnet access to my Multics installation. It all started with a MASSIVE amount of automated cracking attempts, which I linked to the Mirai botnet, but they simply have never slowed down!
This issue is affecting many other legacy system providers as well (see their 07-Jun-17, 01-Dec-17, and 04-Dec-17 updates).
Example of what I’m seeing over a period of about two or three months:
» mlt_ust_captcha
CAPTCHA: 32 passed, 18632 failed.
My solution was to present untrusted connections via legacy methods like telnet with a text-based CAPTCHA. I am using only low ASCII characters for numbers and lowercase letters a through f, because at that stage of the connection, I can’t be sure exactly what terminal type the user is connecting with:
Please input the following text to enable access from your host.
You have 4 chances, 15 seconds each, or else you will be banned.
 _         _
| |__   __| | __ _  ___
| '_ \ / _` |/ _` |/ __|
| |_) | (_| | (_| | (__
|_.__/ \__,_|\__,_|\___|
>
I tried various methods for turning the tables and lessening the burden of proof on the human to prove they are human, like examining keystroke timing, but everything I tried seemed to increase the false positive rate unacceptably!
My biggest complaint with this CAPTCHA system is that, by its nature, it makes my resources inaccessible to computers, which means it also makes things inaccessible to those who depend on computer-based accessibility tools, such as those used by the blind.
For my Multics use case, it’s OK, because there are channels like Mosh or SSH connections that are exempt from the CAPTCHA and won’t affect blind or disabled users. As more and more of the web moves to JavaScript-based pages, I worry that it’s becoming less accessible, or that disabled and blind users will be forced to experience second-rate presentation and content.
Could this visual ASCII art captcha be replaced by a plain string prompt like “please type the following word: peacock” that would work fine from a screen reader? No bot author is actively trying to break it, if I understand you correctly? The hordes of logins are just from a bot that wants to log into crappy IoT kit with exposed telnet and default passwords?
That would probably work pretty well, I imagine: bots which just target open telnet ports would fail, but computers could still easily be programmed to automatically log in (if the challenge is always of the form “Please type the following word: (.*)”).
I guess it absolutely could, yes.
My concern, and my reason for not doing so originally, was that such a trivially solvable challenge would quickly be trivially solved.
Of course, my concern might be overblown.
Also, in my case, since I offer connections via SSH, Mosh, and VNC, I’m less concerned; and if you solve the CAPTCHA just once, that particular IP is exempted from having to solve it ever again.
This is scary. Because even at the time of install, after carefully reading all reviews, and inspecting the network tab everything looks okay. It’s only weeks/months later that the malicious code gets remotely loaded and executed.
Google needs to take a hard look at their Chrome Extension Store.
I just checked, and there doesn’t seem to be an official way to stop Chrome extensions from auto-updating. This is madness.
Meanwhile, in Firefox, you can just click the checkbox to disable automatic updates in the add-ons manager (either globally or for a particular add-on).
They also seem to have a pretty decent review process for add-ons; I don’t know whether they’d catch malicious updates, but when I submitted my add-on, I got actual technical feedback about the code (like a suggestion to replace innerHTML with textContent on a particular line).
With how great Firefox Quantum is these days, maybe it’s time to consider switching?
A power user can always download the extension’s source code and install it separately in developer mode. Since that counts as a different “extension” than the one offered in the store, it won’t automatically update.
I’m probably in the minority here, but I think automatic updates as a default is a good idea. Ideally there would be a way to turn this off, but defaulting to automatic updates is better than defaulting to manual updates. The vast majority of updates fix real bugs and the vast majority of users won’t update manually.
I also think automatic updates as a default is a good idea. What I find odd is that you can’t disable auto-update on a per-extension basis. That way I could authorize only official extensions to auto-update and then decide for every other extension. It would definitely reduce the risk of a popular extension being bought and re-uploaded with malicious behavior.
When was the last time you productively used an octal number-representation?
Whenever I terminate a C string with a null character.
I’m curious, why do you need octal for that? 0, 0x0, 0b0, and 00 all act the exact same, don’t they?
That’s why, even when I worked with an IDE, I rarely used the debugger. Instead I prefer to use debug prints.
This author lost all credibility with me here. One simply cannot be effective working with large legacy code bases without using the debugger. So many things are obvious when you can step through the code.
So, you’re right… but when code bases become too large and legacy, you lose the ability to use a debugger.
I was hacking on Chromium for a Raspberry Pi. Starting it with GDB took ages, with many gigabytes of swap space (and swap on an SD card is not exactly fast…), and GDB wasn’t even terribly useful because of how Chromium does multiple processes (at least I wasn’t able to get much use out of it with my current knowledge of GDB and googling skills). Trying to start it in Valgrind would crash it after quite a few minutes, regardless of the amount of swap space. Recompiling from a release build to a debug build resulted in a day lost to waiting for Chromium to compile, and it turned out Chromium’s stack trace feature doesn’t even work properly on ARM (at least that was my experience).
Sometimes, printfs truly are your only choice.
Sometimes, printfs truly are your only choice.
Maybe, but not in your example. One can remote debug with most debuggers, gdb included. I’m not proficient with gdb, google turns up many links though. I’ve been a “hero” more than once remote debugging java app servers for example.
I, and most of my teammates, regularly use printf debugging when we have to debug GCC and the binutils. It works just fine.
Sure, printf can be adequate and get the job done, but the difference is drilling straight down to the problem in minutes, as opposed to theorizing, sprinkling printfs about, recompiling, reviewing, and repeating n times.
I once read a blog post (which I can’t find now) about a debugging story that really opened my eyes to what was possible. This spurred me to spend time learning the tools in my own domain, which have paid huge dividends over my career.
Use what thou shalt, I’m simply advocating for using the tools available. They really can make a difference.
Don’t get me wrong, it’s not that I don’t use a debugger, it’s just that sometimes it’s a crappy tool for the job. And a lot of those times end up being, say, when you’re trying to debug a compiler optimization that you’re not sure how it occurs. Does it happen right away? On the 50,000th iteration? Only when the register allocator has a specific configuration that is hard to describe? Walking through the execution can take literally hours in these cases (and you can script it, but if you don’t know what you’re looking for, what’s the point?). It’s likely faster to instrument the code, generate a giant log file, study the log file, then fire up the debugger if you haven’t already figured out the problem after searching through the log.
I recently had to debug a linker issue and I didn’t know the structure of the linker very well. I had a rough idea where the problem might be occurring. I fired up the debugger and set a breakpoint on my guess; the breakpoint was never hit. I could have kept doing that, or traced through from main, or I could just look at the code. I opted to look. Studying the code a bit, I managed to find the entry point, but then I realized I had to understand the data structure it was working with. I tried printing it in the debugger and it was… not illuminating. I studied some more code, and once I had a grasp on it, I had a pretty good idea what the problem was. I verified the problem in the debugger shortly thereafter.
An interactive debugger was really useful here, but I can’t say it was the reason I figured things out. They’re good for exploration of the execution space, especially when you have a good grasp on where/when the problem occurs. They’re less useful when you don’t know the basic code structure, and they’re even less useful when you don’t know much about the data structures being manipulated (and your problem is actually a data problem). But for verification, they can be insanely useful.
“it is not acceptable to display a page that carries only third-party branding on what is actually a Google URL”
That seems like an odd position to take. The early Web was filled with people hosting their own content on their ISP using Apache’s /~username feature.
you consider /~username to be an “apache URL” in the same way https://google.com/ is a google URL? or am i misunderstanding?
He considers foobar.com/~username a foobar.com URL, in the same way that the article considers google.com/amp/webpage a Google URL.
that makes sense. i think the point is that when users click on a link from a google search result, the lack of google branding on an amp page may give the impression that they have left the google bubble, when in fact they haven’t.
I predict some ppl will get overexcited & start rewriting
That really kind of put me off the article… I’ll read it, but I really don’t believe such SMS speak is appropriate for a blog post.
Language evolves, but I don’t recognize ppl as a valid alternative spelling of people yet. Maybe in a few more decades.
I just figured OP meant Probabilistic Programming Language.
I like being aware of this. Sometimes people try to improve things that are perfectly OK already, and when you examine why, it’s usually because they feel that they need to do something. It happens outside of software and engineering too.
Isn’t it better though to have an ELF binary which just returns 0, rather than having to start a shell interpreter every time you want to invoke /bin/true? Also, when every other part of a collection of tools (in this case GNU coreutils) follows a convention (i.e that --version prints version information and --help prints usage information), is it really better that /bin/true is the one binary which doesn’t follow that convention?
This seems like a classic case of making the world a little bit better.
Isn’t it better though to have an ELF binary which just returns 0, rather than having to start a shell interpreter every time you want to invoke /bin/true?
I can see an alternate viewpoint where it just seems like bloat. It’s yet another chunk of source to carry around to a distribution, yet another binary to build when bootstrapping a system, and other holdovers.
Also the GNU coreutils implementation of true is embarrassing. https://github.com/coreutils/coreutils/blob/master/src/true.c is 65 lines of C code and accepts 2 command-line arguments, which means that the binary has to be locale aware in those situations.
Yep, I’d call depending on sh to deal with /bin/true bloat, if only due to the overall extra time to exec() sh in general. Times with a warm cache, not cold. Yes, this is golfing to a degree, but this kind of stuff adds up. A minimal binary, in my opinion, is not worse; it’s seeing the forest over a single sh tree.
$ time sh -c ''
real 0m0.004s
user 0m0.000s
sys 0m0.003s
$ time /bin/true
real 0m0.001s
user 0m0.000s
sys 0m0.002s
Even that, though, is GNU true; compared to a return 0 binary it’s also slow due to the locale etc. stuff you mention:
$ time /tmp/true
real 0m0.001s
user 0m0.001s
sys 0m0.000s
I’m Martin (or mort). I’ve been writing about weird C stuff lately, and random technical stuff before that.
Link: https://mort.coffee
Sadly, there’s no RSS feed; it’s something I plan to maybe eventually add, but my static site generator was my first C project, and is something I haven’t touched in many years, so it might take a while.