Previously on lobsters: https://lobste.rs/s/ho3ypn/another_world_source_code_review_2011
Speaking of Delphine Software and art-over-gameplay games, you can chalk me up as probably one of the only people who liked the Flashback sequel, “Fade to Black”.
For a more visceral view of the situation, you can read about Tim Maughan’s visit to Baotou. This is why I support efforts like Right to Repair and EOMA68.
But I note that I’m also part of the problem, as I have a Nexus 5 on my shelf I’d still be using if not for a broken 5-cent switch requiring microsoldering to replace.
That’s the picture I always show when people who upgrade smartphones often tell me they care about the environment. I’m also upfront that I’m willing to do a little damage to it to upgrade my tech for a better life experience. I think we can manage that damage better, though. I was never in favor of letting companies ignore externalities or offshore everything to places with no regulations.
Not sure if the original article author was aware of it, but: Hydraulic macroeconomics and MONIAC.
Working on adapting the sample code in Data Laced with History to Rust as part of a prototype. It’s been a great Rust refresher, although translating the Swift paradigms has led me down some rabbit holes.
Along these lines, see David Reed’s memories of UDP “design”, where he notes that he and Steven Kent argued unsuccessfully for mandatory end-to-end encryption at the protocol layer circa 1977.
also the “bubba” and “skeeter” TCP options; a 1991-era proposal for opportunistic encryption of TCP connections (https://simson.net/thesis/pki2.pdf, https://www.ietf.org/mail-archive/web/tcpm/current/msg05424.html, http://mailman.postel.org/pipermail/internet-history/2001-November/000073.html)
Do you plan to add examples for install() (and cpack)? I’m writing a library using CMake and I’m not sure what the best way to go about that is.
Ah, good point. I haven’t had to use them much, but I will at some point, so I may see if I can work up a toy example.
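For what it’s worth, a minimal install() sketch might look something like this (the target and path names are made up for illustration):

```cmake
# Hypothetical library target; names are illustrative only.
add_library(mylib src/mylib.c)
target_include_directories(mylib PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)

# Install the library, its headers, and an export file other projects can import.
install(TARGETS mylib EXPORT mylib-targets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib)
install(DIRECTORY include/ DESTINATION include)
install(EXPORT mylib-targets DESTINATION lib/cmake/mylib)

# CPack reuses the install rules above to build packages (tarballs, DEB, etc.).
include(CPack)
```

The nice part is that `cpack` needs almost no extra configuration once the install rules exist.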
I’ve heard that CMake makes complicated builds easier to manage, but this particular tutorial doesn’t really show that. In fact, it’s actually much less “work” to write a simple Makefile in this case. As such, I left the tutorial with no reason to care about CMake, and instead of a “woah! you knocked my socks off!”, my reaction was, “so what?” But! I’ve been using make for years. For someone new to everything, I think this shows that CMake is easier to reason about. It just wasn’t enough to sway me from my more familiar make.
Criticism aside, I really liked the style of the tutorial, and how the repo is all-inclusive. I hope more tutorials in the future adopt this style (instead of the typical blog post that isn’t self-contained and has to link out somewhere else, etc.).
As someone who’s used CMake professionally, I agree! It’s a bit simplistic for the basic use case, with the exception of showing the link capabilities. It also doesn’t mention the other main advantage: once you’ve written the CMakeLists file, you can take it to other systems and (usually) get a clean build system: Linux, *BSD, OSX, even Windows (by generating Visual Studio solution files, or Cygwin).
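To make the portability point concrete, here’s a sketch of a minimal CMakeLists.txt (project and file names invented) that the same cmake invocation can turn into Makefiles, Ninja files, or a Visual Studio solution:

```cmake
cmake_minimum_required(VERSION 3.10)
project(hello C)

# One build description; the chosen generator decides the output format.
add_executable(hello main.c)
```

Running `cmake -G "Unix Makefiles" .` on Linux or `cmake -G "Visual Studio 15 2017" .` on Windows then produces the native build files from the same source.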
This is also timely: I’ve been keeping a tool/library evaluation repo for a personal project, using cmake to make sure they all work. Would people be interested in this repo if I added docs similar to this tutorial?
Yes, absolutely!
Ok, I’ll clean it out and see if I can get it up in the next few days.
(Also, I’d like to echo apg’s compliments on the style of the tutorial: it is a good introduction for people totally unfamiliar with CMake)
As such, I left the tutorial with no reason to care about CMake, and instead of a “woah! you knocked my socks off!”, my reaction was, “so what?” But! I’ve been using make for years. For someone new to everything, I think this shows that CMake is easier to reason about. It just wasn’t enough to sway me from my more familiar make.
Yup, you are right. My intention was not to convince anyone to use CMake instead of a plain Makefile (or any other tool), but to help them quickly get up and running with CMake if they want to use it.
Criticism aside, I really liked the style of the tutorial, and how the repo is all inclusive.
Thanks!
At my undergrad CS program (NYU, 2002-2006) they taught Java for intro programming courses, but then expected you to know C for the next level CS courses (especially computer architecture and operating systems). Originally, they taught C in the intro courses, but found that too many beginning programmers dropped out – and, to be honest, I don’t blame them. C isn’t the gentlest introduction to programming. But this created a terrible situation where professors just expected you to know C at the next level, while they were teaching other concepts from computing.
But, as others have stated, knowing C is an invaluable (and durable) skill – especially for understanding low-level code like operating systems, compilers, and so on. I do think a good programming education involves “peeling back the layers of the onion”, from highest level to lowest level. So, start programming with something like Python or JavaScript. Then, learn how e.g. the Python interpreter is implemented in C. And then learn how C relates to operating systems and hardware and assembler. And, finally, understand computer architecture. As Norvig says, it takes 10 years :-)
The way I learned C:
Then I didn’t really write C programs for a decade (writing Python, mostly, instead) until I had to crack C back open to write a production nginx module just last year, which was really fun. I still remembered how to do it!
One of the things I loved about my WSU CS undergrad program 20 years ago is that in addition to teaching C for the intro class, it was run out of the EE department so basic electronics courses were also required. Digital logic and simple circuit simulations went a long way towards understanding things like “this is how RAM works, this is why CPUs have so much gate count, this is why you can’t simply make up pointer addresses”
they taught Java for intro programming courses, but then expected you to know C for the next level CS courses (especially computer architecture and operating systems).
It’s exactly like this at my university today. I don’t think there’s any good replacement for C for this purpose. You can’t teach Unix system calls with Java where everything is abstracted into classes. Although most “C replacement” languages allow easier OS interfacing, they similarly abstract away the system calls for standard tasks. I also don’t think it’s unreasonable to expect students to learn about C as course preparation in their spare time. It’s a pretty simple language with few new concepts to learn about if you already know Java. Writing good C in a complex project obviously requires a lot more learning, but that’s not required for the programming exercises you usually see in OS and computer architecture courses.
I think starting from the bottom and going up the layers is better. Rather than being frustrated as things get harder, you will be grateful for and know the limitations of the abstractions as they are added.
Hurriedly preparing a talk on mruby for the local Ruby meetup this week, using How To Prepare A Talk from Deconstructconf.
I’m not giving it nearly as much time as the article recommends, though, because I spent the month getting my home office in order and then discovering that the mruby build system still has a lot of rough edges (which are ending up as part of the talk).
For a related/amusing/useless game history factoid, Bangai-O Spirits on the Nintendo DS from 2008 also used this technique to transfer custom levels without a link cable or Internet. (Technical details here; IIRC it’s ASK.)
You can still find a lot of them on YouTube under “bangai-o spirits sound load”
So does the latest model Teenage Engineering Pocket Operator.
I still use jwz’s venerable youtubedown.pl script for archiving YouTube, even though I have youtube-dl installed for mpv. Still works with nary an update.
If a site is small or eclectic enough I’ll spider it with wget, but there have been a few times where I’ve had to spend a few hours finding a single archive.org link to save. It’s on my to-do list to write automation for going through my Pinboard XML to find broken/moved links, and to find forum posts: ltehacks.com went down a few months ago, and now that I’ve got a Calyx hotspot it would be useful to have those posts for reference.
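The automation could start as small as this sketch – parse the Pinboard XML export and HEAD-request each href (function names are my own invention, and the export format is assumed to be the standard `<posts><post href=… description=…/></posts>` shape):

```python
import xml.etree.ElementTree as ET
import urllib.request

def load_bookmarks(xml_text):
    """Extract (href, description) pairs from a Pinboard XML export."""
    root = ET.fromstring(xml_text)
    return [(p.get("href"), p.get("description")) for p in root.iter("post")]

def check(url, timeout=10):
    """Return the HTTP status for url, or None if unreachable (hits the network)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except Exception:
        return None

if __name__ == "__main__":
    # Demo on an inline sample; for real use, feed the downloaded export file
    # and call check() on each href to flag the dead ones.
    sample = '<posts><post href="https://example.com/" description="demo"/></posts>'
    for href, desc in load_bookmarks(sample):
        print(href, desc)
```

Handling the “moved slightly” case (redirects, tilde-directory migrations) would need more than a status check, but this finds the outright 404s.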
I’ve never used youtubedown.pl, but it’s possibly not as configurable as youtube-dl, so I doubt that will change anytime soon. youtube-dl works fine, is quite well maintained, supports a lot of other sites besides YouTube, etc.
Ok, another Pinboard user. It seems like at least half the people commenting here use this service. Isn’t finding broken links already too late (unless you have an archival account)? From what I read in other comments, it seemed that Pinboard shows if a link is no longer reachable, so why write your own tool for that?
I have a grandfathered one-time account and no archiving service. Pinboard does not check the links nor add tags in that case, and I have over 20k bookmarks.
Sometimes the content has been moved slightly, (esp. if it’s an academic site, ie transitioning away from tilde-user directories) in which case I can usually manually find the content again. Some of them are also “read later” shortened Twitter/newspaper links from when I’m on my phone on the run, so a dead link in that case is “oh well, delete”.
Finishing up this security cert before holidays arrive in earnest next week. The live capture-the-flag exercise is turning out to be more entertaining than I thought, since they provide a VPN and suitable target machines.
The OpenBSD 6.2 upgrade broke inteldrm on my UX305 laptop, so now I’ve got to work out how to exfiltrate a panic dump when the screen is blank, the only built-in network hardware is Wifi, and there is no physical serial port.
“Work” has mostly been trying to finish the prepaid SANS cert from my last employer before it expires next month. In between I’ve been seduced by Firefox Developer and its CSS tools to redesign my personal website. Still wanting to fiddle more with Inferno once I put those to bed.
Also, with moving to a new apartment in an old building along with this discussion, I want to try simulating a 4G connection to see if it’d be acceptable to me for home internet, as I’m tempted to donate to Calyx to avoid having to deal with Comcast.
<delurk>
I’ve been experimenting (in the context of making a low-level Windows front-end for xi-editor) with a desktop UI that sends layers to the compositor, rather than rendering complete frames. Based on my experiments so far, I’m encouraged that the performance (latency and smoothness) will be qualitatively better than anything else out there. I think performant desktop UI is something of a lost art; all the attention seems to be in other areas.
I’m a little uncertain how much public noise to make, or whether I should just crank on this on my own until it’s a usable client (might be a while, since I’m interleaving this with lots of other things).
I don’t think it’s totally lost, but I really only hear noise about it from the video games space, which has mostly settled on the immediate-mode model for UI. There are a handful of projects like imgui but I think it’s mostly hand-rolled.
Make at least a little noise; marketing doesn’t have to be all-or-nothing. And if you start small, you’ll mostly only attract early adopters happy with incomplete experiments while planting those “oh yeah, I’ve heard of xi, I should check that out” seeds.
Agreeing with “make a little noise while attracting early adopters”. Check out how @crazyloglad has been writing up his work on Arcan.
@work After a surprise restructuring two weeks ago and helping a friend with a college football photoshoot this weekend, this week is trying to identify what work should be now. Looking through Matt Mitchell’s Strange Loop keynote for ideas, along with revisiting my professional network. Finishing a SANS course/cert in the meantime, since it was already paid for.
@home Sorting through projects to decide which one to spend time on: ongoing research into distributed 3d/content creation, experimenting with Inferno on embedded/phone platforms, or scratching an itch and building a PIM server in MirageOS for use as a cloud/AWS image.
Be sure to check out this Inferno project if you haven’t:
Why have computers at all?
There does not seem to be anything that can be purchased which is not 100% broken.
I guess this is as good of a time as any to bring up RISC-V:
The Talos II website still seems to be taking pre-orders (even though it was supposed to end Sept 15th). They expect to ship at the end of this year! https://www.raptorcs.com/
The CPUs are IBM’s POWER9 architecture. Not quite as open as RISC-V, but quite a lot of source code comes with this motherboard: even the CPU microcode!
I’m really hoping they pull it off, since POWER9 is probably the only thing competitive with x86 on the desktop. ISA diversity is something I miss from the old days. That said, their software page looks like what you’d expect from the people who shipped the Talos I. ;)
Ha yeah, no software is listed on the new website. I wonder why they’re not using their established website at raptorengineering.com. Perhaps the workstation business is being isolated as a separate entity. They do have software listed on the old site.
I’m pretty hopeful the Talos II will ship. Their Talos I URL advertises that Talos II is “Direct to market, no crowdfunding, same libre ideals,” so there is no crowdfunding campaign to collapse this time.
I remembered them having some kind of software. Forgot it was a different URL. Yeah, they’re probably doing it for branding purposes. The new one looks nice outside the software page. I just thought a software offering of whitespace from a company behind some vaporware was kind of funny.
$2,400 at the cheapest (motherboard+cpu), though. As much as I’d like to push for processor diversity, I have nowhere near that amount of money to spend on computers.
It’s still a bit early to get real physical hardware instead of VHDL bitstreams, but I’ve been keeping an eye on the J-Core Project, which is offering clean-room Hitachi SuperH compatible CPUs.
Of particular note in this thread is Landley’s comment that fabs are particularly interested in their SH1 design to offer as cost-equivalent to 8-bit MCUs.
Got some even better ones out there:
https://github.com/freechipsproject/rocket-chip
https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
http://www.gaisler.com/index.php/products/processors/leon3
http://www.oracle.com/technetwork/systems/opensparc/opensparc-t2-page-1446157.html
http://parallel.princeton.edu/openpiton/
Both Leon3 and Rocket are designed for easy customization, too. Of these, Leon3, OpenSPARC, Rocket, and PITON are all silicon-proven in ASICs. It’s not like we don’t have CPU cores. We just don’t have individuals, companies, or governments turning them into usable products. The folks behind them did their part by making them available. We just need the demand side or product suppliers to do theirs.
If you’re looking at the Gaisler cores, there’s LEON4 too: http://www.gaisler.com/index.php/products/processors/leon4
There was also this fork of the OpenSPARC-T1 http://www.srisc.com/
RISC-V Rocket is the one to be excited about, I think. Can’t wait for ASICs to come out.
I have hopes for RISC-V since big players are backing it. As far as Leon4 goes, I left it off since it didn’t mention the GPL. The Leon3 page explicitly does. I don’t know if Leon4 is freely licensed.
Well, if it’s about keeping secrets, I tell people pencil, paper, and trusted couriers are much better. As far as computers go, the reason we don’t have more secure computers widely available is that people don’t buy them. There are methods for both hardware and software to vastly improve reliability under faults and security (esp. isolation or argument checks). There were examples of both on the market, especially from the 60’s to 80’s. For reliability & security, the start was the Burroughs B5000, which is still available from Unisys – without hardware enforcement, because customers wanted that:
http://www.smecc.org/The Architecture of the Burroughs B-5000.htm
For system security, there were two approaches either doing MLS or capability-based security:
http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf
http://www.cs.washington.edu/homes/levy/capabook/index.html
Customers usually chose whatever had the most features (esp. complicated ones), supported legacy software, ran with the most raw speed, or was cheapest. The low volume meant providers of such systems had to spend several times more on development and verification while getting scraps in return. Most removed their security assurance, switched to defense-only at high unit prices, or went bankrupt. CompSci continues to develop things companies could implement, but customers won’t buy them, so why implement? The only thing that worked, past and present, was regulation that forced both building and buying things done like this. Politics usually eliminates that, though.
Btw, here’s one you, err defense contractors, can actually buy that has fault-tolerance and a separation kernel built in with mathematical methods used to prove general correctness and security:
You’re covered if you’re good with a 100MHz processor. Here’s an academic one that could probably be merged with the RISC-V Rocket core with some loss of its 1.4GHz. It already runs FreeBSD capability-secure. Nobody’s building it, as always. ;)
If you’re interested in a deep-dive into the history of these techniques, Tim Wu’s “The Attention Merchants” is well worth a read.
There’s so much excellent content on wiki.c2.com it’s ridiculous. Many of the things people rant about on HN, on Facebook, on Twitter, in their blogs, have already been debated to death over a decade ago on the original wiki. (And probably on usenet before that.)
At my first job in the early 00’s, c2.com escaped the workplace net filter since it was low bandwidth, so I’d be browsing it whenever I had a free minute.
I was exposed to more CS concepts in 6mo than in the last two years of college. It helped me finally understand functional programming (which I then used on the job with XSLT), and got me learning Ruby & Io (via “what exactly are coroutines?”), which led directly to my next job.
Any favorites for learning CS?
Continuations and Coroutines was pretty mind-expanding for a first page, as I was trying to understand a Tim Sweeney interview comment that monsters in UnrealScript “all moved at the same time”.
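A toy sketch of the idea that clicked for me, in Python generators rather than UnrealScript (names and numbers are made up): each “monster” is a coroutine that keeps its own loop state, and a scheduler resumes each one once per frame, so they all appear to move at the same time.

```python
def monster(name, step):
    """A coroutine: its loop is suspended at yield and resumed each frame."""
    x = 0
    while True:
        x += step
        yield f"{name} at {x}"

def tick(monsters, frames):
    """Round-robin scheduler: resume every monster once per frame."""
    log = []
    for _ in range(frames):
        for m in monsters:
            log.append(next(m))  # run this monster until its next yield
    return log

print(tick([monster("imp", 1), monster("ogre", 2)], 2))
# → ['imp at 1', 'ogre at 2', 'imp at 2', 'ogre at 4']
```

Each monster’s code reads like it owns the CPU, but control is interleaved – which is (as I understand it) roughly what UnrealScript’s latent functions were doing.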
This is great to hear someone else had a similar experience. I still (infrequently) point other programmers at it - though wikipedia has lots more of this information now.
I think I’ve read almost all of several sections of that site.
Totally amazing and worth the time, even if some pages are not very good.