Are error messages from parser combinator libraries as bad as the ones from yacc?
Yes. They tend to just give an error for the combinator itself, and provide no larger context.
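To sketch the failure mode (made-up toy combinators here, not any particular library's API): the error surfaces from the innermost combinator, with no position and no enclosing-rule context.

```python
# Hypothetical minimal combinators illustrating the problem: only the
# innermost combinator's message is reported, with no offset and no
# mention of the enclosing grammar rule.

def char(c):
    def parse(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        raise ValueError(f"expected {c!r}")  # no position, no parent rule
    return parse

def seq(*parsers):
    def parse(s, i):
        values = []
        for p in parsers:
            value, i = p(s, i)
            values.append(value)
        return values, i
    return parse

# Grammar for a parenthesized 'a', e.g. "(a)"
paren_a = seq(char("("), char("a"), char(")"))

try:
    paren_a("(b)", 0)
except ValueError as e:
    print(e)  # prints: expected 'a'  -- nothing about being inside parens
```

Compare with yacc, which at least tells you the token and line where things fell apart.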
Did you perhaps mean to comment on this post instead?
This is an interesting device in the era of smart phones. It’s aimed at college students taking particular exams. Are they not allowed to have their smart phones in these exams? A smart phone app selling for $5 would have been much more cost-effective. Instead of using a phone that has already been paid for, now they have to pay for specialized hardware that costs $100, more than some smart phones.
Are they not allowed to have their smart phones in these exams?
Generally they aren’t, as the networking would make it very easy to cheat.
In my experience, the good calculator apps on phones are simply emulators of physical calculators. There’s always Wolfram Alpha, but I haven’t seen a good native calculator app.
That being said, I enjoy using physical calculators much more than emulators. There’s something about a device which is designed for one purpose and doesn’t have to make compromises…
It’s in my category of things which are an interesting idea, but which should be a tablet app instead of a piece of hardware.
I believe one of their main markets is use by HS students and for major exams such as the SAT or ACT.
This is perhaps the most interesting graphing calculator to have come out since I started following the industry. The developer is French, and there hasn’t been very much discussion at all on English-language forums.
Features:
Shortcomings:
A lot of these issues are in software, and they’ve come a long way since their initial release (which didn’t even include log (!)). There is still quite a ways to go, but the open platform makes me excited about the future for this calculator.
What am I supposed to do with that json file? edit: … oh, it renders completely differently on desktop…
The most glaring omission in the post is Infer from Facebook. I would rate Infer as the most impressive open source C/C++ static analyzer, by far.
ugh, I’ve been trying to package it for Arch and it’s such a pain in the ass. It uses a bunch of OCaml libraries that didn’t previously have packages, and it bundles a custom version of clang with its own modifications and extensions. Oh, and due to requiring a custom clang, builds can take over half an hour before anything goes wrong.
Whoa, if that thing does what it says on the tin, I’m super interested.
I hope it does.
Cppcheck did not.
EDIT: A nasty nest of segfaults is all I can get out of it. Maybe I’ll check back next year.
I used a Samsung ARM Chromebook for about three years, Arch Linux ARM. It taught me that there are two facets to support:
Generally #1 is lower when you use less popular platforms, and I was prepared for that. As someone who is used to fixing everything from hardware (scopes, solder and rosin inhalation) to software (they were sending ARPs how fast?) I was also prepared for #2.
Or so I thought.
Whatever you do: do NOT get a device that demands a signed bootloader. The bootloader on this device was forgetful, easily corrupted, rude (“You’re not running Chrome OS! Please press space twice so I can wipe everything!”) and most of all: not replaceable. Only Google had the keys, and I had to live with it.
Initially I had a second stage of u-boot installed that I had control over. This was brilliant. The Arch Linux ARM install instructions for this device used to have this as part of the recommended setup. That changed at a later point – a story that involves a post-install script for a newer kernel package dd’ing over my root partition.
I was stuck on a very old kernel version provided in the ALARM repos, I believe the one the original ChromeOS shipped with. This caused me many dramas, including making systemd flat out refuse to boot my device. “You don’t have cgroups support? Pah! I’m on strike. You didn’t need your computer to boot anyway, did you?”.
I have a couple of stories written up about my experiences:
A quick summary of the fun side of things used to live in my /etc/issue:
Welcome to Clusterlizard, chamber \\l. Please praise your selected deity(s)
Summary of interesting failure stories since December 2013:
- 'debug' on kernel command line -> systemd becomes very verbose -> random race conditions due to text output -> crash
- filesystem death on SSD by power-cycling too rapidly whilst debugging other issues
- btrfs failure from trying to compile firefox ("ran out of inodes" even though btrfs does not use inodes)
- systemd update -> required kernel support for xattrs on folders -> refused to boot
- empty Chinese take-away containers + bag + laptop -> cracked the screen
- systemd update -> prevented kernel from loading userspace wifi firmware
- occasional bootloader corruption, fails to work until battled into a soft-reset after many attempts
- upgraded many packages, write-cache filled memory, OOM -> system hung, filesystem left inconsistent
- updated kernel -> post-install script dd'd bootloader over my root filesystem
- updated kernel -> shorter wifi scan argument list support -> can't connect to wifi at uni, too many networks
Misc self-inflicted
- removed systemd-sysvcompat
- custom init system: spawned gettys with wrong arguments -> no logins
- repeat episodes of systemd firmware issues
Happy birthday Clusterlizard! (2014 and 2015 and 2016)
Having many things go wrong with systemd whilst on a remote Greek village mountainside led me to write my own init. But that’s another story.
What finally forced me to stop using the laptop was a failure of the inbuilt storage coupled with bugs in the evil bootloader. The onboard flash storage (MMC) became very slow over time (down to ~2MB/s write toward the end), but this wasn’t my major concern. What really hurt was when the laptop started hardlocking when I did heavy disk I/O, such as updating.
Updating became a dangerous game. I’d try to work around the problem by constantly interrupting the process and forcing disk sync, but this was tedious and slow. Worst of all: when the laptop locked up during updating I would find many of my system libraries left zero bytes long.
After about the third or fourth time that I had this happen I decided I needed some change.
Unfortunately this device had no internal connectors for replacement storage. The internal MMC was soldered directly onto the motherboard (BGA packages, not easily replaceable). But I thought I could use an SD card or USB stick instead.
USB and SD boot were how I originally installed (and repeatedly fixed) the ALARM installation. But now, out of the blue, the signed bootloader had decided that USB and SD card booting were verboten. “Hmm”, I thought, “perhaps those configuration bits had flipped?”. Not only that, but the configuration region had become write-locked. I have no clue why; the bootloader was still in “developer” mode, and I tried many things to resolve it.
I found it hard to stop using my ARM laptop. I still occasionally get it out for the smell; it brings back many memories of many places. It has been the lightest and longest-lasting laptop I have ever owned. It had many physical construction issues, was held together with hot glue, and the performance was non-existent in the graphics department. But it was different, it was something I could afford, and it was fun.
RIP clusterlizard, 2016
Epilogue: I now use an 11.6” refurbished Dell Latitude 3150. It suffers from phantom stuck keys and screen backlight stability issues, and I had to warranty the battery a month after buying it. It’s heavier, has a smaller keyboard (complete with a keyboard bezel!) and doesn’t smell anywhere near as iconic as my ARM laptop did.
But when it breaks, I have many more options to fix it.
btrfs failure from trying to compile firefox (“ran out of inodes” even though btrfs does not use inodes)
This is very likely a generic error code being converted by a function in the VFS to something which doesn’t make sense for btrfs. Despite being an abstraction layer, VFS still expects a pseudo-ext-style implementation. This is evident in error messages and especially API design. See, for example, how statfs(2) has fields for free inodes, or “unreserved” blocks, even though those are not meaningful statistics for many filesystems.
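You can see this shape from userspace too. A quick sketch using Python’s statvfs wrapper; the inode fields are always present in the struct, and whatever numbers land in them are the filesystem’s invention when inodes aren’t a native concept:

```python
# The VFS-level statistics interface reports inode counts for any mounted
# filesystem, whether or not inodes are a native concept for it.
import os

st = os.statvfs("/")
print("total inodes:", st.f_files)  # may be 0 or a synthetic value on some fs
print("free inodes: ", st.f_ffree)
```

On btrfs these are typically zero, which is arguably more honest than “ran out of inodes”.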
I think you’re more ranting about Chromebooks than ARM as a processor architecture. I have an Intel Chromebook and probably the same bootloader.
Somehow, this reminds me of shoutboxes from back in the day.
Yeah, I get those vibes. I also quite like the extremely restrained design, which adds to that perception. I wish a lot of webshits could be like this one in that sense.
I recently configured my RSS reader to email me n-gate on a regular basis. It’s not a good idea: help, I’m becoming too cynical…!
Thanks for the reminder to check it. The one on repealing net neutrality with “executive fiat” was great, haha.
Not the specifics, but the over-arching ideas pretty much hold up I’d say.
I was mainly referring to the title claim of “2^(Year-1984) Million Instructions per Second” because OP was asking for a graph.
Looks like it’s still not really cleared up. Here’s a news article from 2017.
rw does not support copying sparse files, a feature found on some operating systems.
Can’t you seek to the end of the input file and then truncate the output to that length?
rw is not yet able to determine the size of block devices on Illumos, Minix, NetBSD, and OpenBSD.
Same question here: does lseek(fd, 0, SEEK_END) not work?
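For reference, a sketch of what I mean (demonstrated on a regular file here, since whether SEEK_END works on block device nodes is exactly the part that varies by OS, and presumably the gap rw hits):

```python
# lseek(fd, 0, SEEK_END) returns the offset of the end of the file,
# i.e. its size in bytes. On Linux this also works for block devices;
# on some other systems it reportedly doesn't, hence the question.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
    path = f.name

fd = os.open(path, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)  # -> 4096
os.close(fd)
os.unlink(path)
print(size)
```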
Continuing tinkering on my filesystem driver. It’s looking more and more like I’m going to have to copy generic_file_read/write_iter and mpage_read/writepages wholesale and modify them to correctly deal with file data not being block-aligned. I’m not looking forward to this, as it means keeping track of changes to these functions upstream and porting them back over.
P2P in the browser is done via WebTorrent, which uses WebRTC connections as transport channels to other browsers watching the video. It then uses the BitTorrent protocol for the actual data transfer.
They mention the use of WebRTC on their FAQ: https://joinpeertube.org/en/faq/
“Why broadcast PeerTube videos through peer-to-peer?”
Thanks to the WebRTC protocol, peer-to-peer broadcasting lets Internet users who are watching the same video at the same time exchange pieces of the file, which reduces load on the server.
This is becoming an increasingly severe problem in HPC, to the point where software needs to be written in an explicitly fault-tolerant fashion, since errors like these or even hardware failures will happen on nearly every exaflop run. Even the petaflop machines that are typical today need special handling for hardware failures to avoid crashing constantly.
Would you mind elaborating on the techniques used when attempting to be fault tolerant to bit flips?
One place to start is actually Tandem Computers which were built for fault tolerance, basically by running two computers.
NASA’s guidance system, among other things, has 3 or 4 computers which all compute the same thing then check with each other if they agree.
For systems that require not running the same thing a whole bunch, one can let a checksum of the data flow end-to-end, checking it at various places.
I’m sure other solutions exist, but as a non-expert, those are the ones I’ve come across.
What happens when you get an error? I.e., say computer 4 gets hit by a cosmic ray which flips a bit; what’s the procedure for bringing all computers back into agreement?
If you have multiple computers you can do a quorum. Otherwise, information is lost and it’s up to the situation what you do. You can either fail and tell the user or, if there is a backup policy, execute that.
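The quorum idea is basically majority voting. A toy sketch (made-up names, plain majority vote; real systems like the Shuttle’s do this in hardware and also resynchronize the outvoted unit):

```python
# Majority voting across redundant computations: take the value the
# majority agrees on, and flag the dissenters as (presumed) faulty.
from collections import Counter

def vote(results):
    """Return (majority_value, indices_of_outvoted_units)."""
    winner, _ = Counter(results).most_common(1)[0]
    faulty = [i for i, r in enumerate(results) if r != winner]
    return winner, faulty

# Unit 2 suffers a bit flip in its result (bit 3 of 42 flipped -> 34):
results = [42, 42, 42 ^ (1 << 3)]
value, faulty = vote(results)
print(value, faulty)  # -> 42 [2]
```

With only two units you can detect disagreement but not decide who is right, which is why triple (or higher) redundancy is the classic arrangement.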
I’m not terribly familiar with this field, but this report should get you started: http://www.netlib.org/lapack/lawnspdf/lawn289.pdf
I second apy’s recommendation of Tandem Computers. I’ll go further with two specific works. The first is by Jim Gray showing how Tandem looked at things systematically to figure out how to eliminate as many error classes as possible. They ended up achieving a five 9’s system. The second is from a competitor, Stratus, covering both hardware and programming techniques for robust systems, including Tandem NonStop.
Why Do Computers Stop and What Can Be Done About It?
Paranoid Programming: Techniques for Constructing Robust Software
Note: First is an old PDF. Second one is a PostScript file from Archive.org since the PDF link is dead with no archive copy.
The points are good, but I certainly don’t want inotify features to be gating the VFS layer, IMO.
inotify is good at what it does. If you want to know about absolutely everything going on for a given filesystem, maybe you want to implement the filesystem itself (FUSE, e.g.).

IIRC (and I was involved in higher-level filesystem libraries when this stuff was going into the kernel, but that was a long time ago) dnotify and inotify were designed with the constraint that they couldn’t impose a significant performance penalty, the logic being that the fs operations were more important than the change notification. If watching changes is as important or more important than I/O performance, another mechanism like a FUSE proxy fs or strace/ptrace makes sense.
FUSE is how tup keeps track of dependencies, although I think it will also attempt to use library injection when that’s not available.
Thing is, FUSE is slower, buggy (I’ve had kernel panics), and less flexible. A native way to track filesystem operations in a lossless manner would be really nice to have on Linux.