1. 6

I can hardly stand reading the site. I have to keep scrolling every 5 seconds…

1. 2

Yeah it’s a bit… Big. Also, why do all these web “book” authors always refuse to have a “next” button for linear navigation through the reading material…

…You know, like a book.

1. 1

It’s a lot more readable if you set your browser’s zoom to 50%.

1. 1

and max-width: 50em (instead of 25) for main, for people who use user styles. With 50% font-size and increased width, it’s pretty nice :)

1. 3

At work we use YYYY.MM.NN for internal software (NN being a 0-indexed release number for that month).

I like this for knowing when something was last updated, but it’s not helpful for identifying major changes vs. bugfixes. Perhaps that’s not such a big deal for software that’s on a rapid release cycle.
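A scheme like that is easy to automate. Here's a minimal sketch (the function name is illustrative) that derives the next YYYY.MM.NN version from a list of existing tags, assuming NN is a 0-indexed counter that resets each month as described:

```python
from datetime import date

def next_calver(existing, today=None):
    """Next YYYY.MM.NN version, where NN is a 0-indexed
    release counter that resets each month."""
    today = today or date.today()
    prefix = "%04d.%02d." % (today.year, today.month)
    # Releases already cut this month, e.g. "2018.05.0", "2018.05.1"
    this_month = [int(v[len(prefix):]) for v in existing if v.startswith(prefix)]
    return prefix + str(max(this_month) + 1 if this_month else 0)
```

For example, with releases "2018.05.0" and "2018.05.1" already tagged, a release on 2018-05-20 would be "2018.05.2", and the first release of the next month resets to "2018.06.0".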

1. 2

It’s also not a big deal for software that’s too big or complex for “major change” to be meaningful. If a tiny, rarely used Debian package removes support for a command-line flag, that’s a major change (in the SemVer sense) and since Debian includes that package it’s therefore technically a major change in Debian. But if Debian followed SemVer that strictly, its version number would soon leave Chrome and Firefox in the dust, and version numbers would cease being a useful indicator of change.

1. 7

Isn’t Debian’s solution to this to not include major-version changes in updates to an existing release? So a breaking change usually does wait for the next major version of Debian to be included.

1. 1

Yep, and this is where the -backports or vendor repos are really useful - newer packages built against the stable release.

2. 2

It’s why we have to make “stable” releases. Otherwise everyone goes crazy. If someone is bumping their SemVer major version too often, they have a bad design or do not care about their developers.

3. 2

There’s a discussion of CalVer and breaking changes here: https://github.com/mahmoud/calver/issues/4

Short version, “public” API is a bit of a pipe dream and there’s no replacement for reading (and writing) the docs :)

1. 2

The concept of breaking changes in a public API isn’t really related to ‘read the docs’, except when it comes to compiled in/statically linked libraries.

If you have dependency management at the user end (i.e. via apt/dpkg dependencies that are resolved at install time), you can’t just say “well, install a version that will work, having read the docs and understood what changes when”.

You instead say “I require major version X of package Foo”, because no matter what the developer does, Foo version X.* will always be backwards compatible - new features might be added in an X.Y release, but that’s not a problem if they’re added in a backwards-compatible manner (either new functionality that has to be opted into, or functionality that doesn’t require extra options to work).

Yes, I know that things like Composer and NPM have a concept of a ‘lock’ file to fix the version to a specific one, but that’s not a solution for anything but internal projects. If you’re installing tools that you aren’t directly developing on yourself using NPM or Composer, you’re doing it wrong.

1. 1

I really don’t see what that has to do with the linked thread. In the very first line, you mention a “public” API. The point is that there’s much less consensus on what constitutes a public API than developers assume. So, you end up having to write/read the docs about what would constitute a “semantic” version change. (Not that docs are a silver bullet, they’re just a necessary part of healthy software development.)

1. 1

The point is that there’s much less consensus on what constitutes a public API than developers assume.

A comment by you making that same claim on GitHub isn’t really evidence of a lack of consensus. What possible definition is there for “public API” besides “something that will be consumed by someone outside the project”?

So, you end up having to write/read the docs about what would constitute a “semantic” version change.

The decision tree for SemVer is two questions, with 3 possible outcomes. And you’ve still ignored my point. Adherence to semver means you can automatically update dependencies independently of the developer.

So, for instance, if the developer depended on a shared library, that happens to have a security vulnerability, when the library author/project releases a new patch version, end-users can get the fix, regardless of what the tool/app developer is doing that week.
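The two-question decision tree, and the compatibility guarantee that makes automatic updates safe, are small enough to write down directly. A sketch (not any particular package manager's implementation):

```python
def bump(prev, breaking=False, new_features=False):
    """SemVer decision tree: breaking change -> major,
    backwards-compatible feature -> minor, otherwise patch."""
    major, minor, patch = prev
    if breaking:
        return (major + 1, 0, 0)
    if new_features:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)

def compatible(installed, required):
    """Caret-style check: same major version, and at least the
    required minor/patch - the 'any X.* release works' guarantee
    that lets end-users take patch updates automatically."""
    return installed[0] == required[0] and installed[1:] >= required[1:]
```

Under these rules a tool that declared a dependency on (1, 2, 3) can safely pick up a security fix released as (1, 2, 4) or (1, 4, 0) without the tool's developer doing anything, while (2, 0, 0) is correctly rejected.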

1. 1

The automatic updates work until they don’t. Here is a recent example where Python’s pip broke, along with many assumptions about public APIs. Your point has not been ignored; I’ve written about it extensively, in the linked thread and on the linked site (and links therein).

As for your closing comment, I’m noticing an important assumption I’m working to uproot: current date is not the only possible date in the release version. manylinux2010 came out in 2018, and is named as much because it’s backwards compatible to 2010.

The Teradata example on the CalVer site also highlights maintaining multiple releases, named after their initial release date. At the consumer level, Windows 98 got updates for years after 2000 came out.

1. 1

That isn’t a failing of semver, it’s a failing of the developers who didn’t properly identify they had a breaking change.

The same thing would have happened under calver, they would have marked it as a patch release with compatibility to the previous version, regardless of the date component.

Expecting people to just forget about the possibility of automatic dependency updates is like suggesting people forget that coffee exists after they’ve had it daily for 10 years.

1. 3

Very cool - I really like lower-power equipment like this. However, I think it’s a terrible idea for security, since it looks like it’s an unencrypted video stream, which would make eavesdropping trivial.

1. 6

I’d hesitate to call it eavesdropping if you have to get within 8 ft of the camera to do it

1. 3

It’s sort of the digital equivalent of corner mirrors in stores and on streets.

1. 0

Well that just about kills most of the uses for this product :/

1. 3

No? It’s perfect for home security cameras or on-body cameras.

Sign me up.

1. 7

Full details now available at https://efail.de/

1. 7

So as far as I see, this isn’t necessarily a bug with PGP “itself”, when used for signing git commits or used in combination with pass, but rather when sending encrypted emails. Or am I wrong?

1. 2

The first exploit is definitely not PGP’s fault.

Unfortunately because I don’t know S/MIME, I can’t comment. But it seems like there is some inherent problem with the second attack affecting both it and PGP.

1. 2

CBC and CFB encryption modes use the previous blocks when encrypting new blocks. There are some known weaknesses, and of course OpenPGP and S/MIME use these modes. That seems to be part of the problem. The other part is that stitching together multipart messages is something that email clients have no problem doing, so shit HTML can result in a query string that exfiltrates the content of the decrypted parts.
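The malleability half of this is easy to see in a toy model: in CFB-style modes, ciphertext bits XOR straight through to plaintext bits, so an attacker who knows (or guesses) part of the plaintext can rewrite it without knowing the key. A deliberately simplified sketch - a hash-based keystream stands in for a real block cipher, and this is not the actual EFAIL gadget construction, just the underlying XOR property:

```python
import hashlib

def keystream(key, n):
    # Toy keystream standing in for a block cipher's output;
    # NOT real crypto, just enough to demonstrate malleability.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"recipient's secret key"
plaintext = b"Attack at dawn!!"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# The attacker knows the plaintext of this region but not the key,
# and splices chosen bytes in by XORing the difference into the
# ciphertext: c' = c XOR p XOR wanted.
wanted = b"Attack at dusk!!"
tampered = xor(ciphertext, xor(plaintext, wanted))

decrypted = xor(tampered, keystream(key, len(tampered)))
```

Authenticated encryption (like OpenPGP's MDC, discussed below) exists precisely to detect this kind of tampering.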

1. 2

OpenPGP mitigates those weaknesses with authenticated encryption (MDC). So it’s still only a problem if a broken MUA ignores decryption errors from gpg (or if the email in question uses a very old cipher; so the attack may work if you auto-load remote content on encrypted emails from before 2000).


1. 7

I’ve finished the core of my hardware and software Chip-8 emulator, running on an Atmega1284 MCU with an SSD1306 OLED display, buzzer, keypad and SD Card reader. Finally got a launch menu integrated, that allows me to give users options to select emulation quirks using the 4x4 keypad.

Now I just need to go back through the emulator core, double-check and test the quirks, and I’m ready to implement Superchip support. I’m also going to start looking at laying out the PCB for an initial run. My head says Eagle CAD, as that’s what I’m used to, but my heart keeps telling me to use KiCad. I can fit the emulator in the 1284, but I’m umming and ahhing about adding support for a BASIC interpreter, and I really need some external RAM to make the most of that. I’ve been looking at 64k modules, but I might leave that for a later iteration if I can get this to a steady state without the extras.
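For anyone curious what the core of a Chip-8 interpreter looks like, here's a minimal fetch/decode sketch in Python (the project above is C on an ATmega; the class name and the three opcodes shown are illustrative only - a full core handles roughly 35 opcodes plus the quirk toggles mentioned):

```python
class Chip8:
    def __init__(self):
        self.mem = bytearray(4096)
        self.v = bytearray(16)   # V0..VF registers
        self.pc = 0x200          # programs conventionally load at 0x200

    def step(self):
        # Fetch: opcodes are 2 bytes, big-endian.
        op = (self.mem[self.pc] << 8) | self.mem[self.pc + 1]
        self.pc += 2
        x, nn = (op >> 8) & 0xF, op & 0xFF
        if op & 0xF000 == 0x6000:      # 6XNN: VX = NN
            self.v[x] = nn
        elif op & 0xF000 == 0x7000:    # 7XNN: VX += NN (wraps, no carry flag)
            self.v[x] = (self.v[x] + nn) & 0xFF
        elif op & 0xF000 == 0x1000:    # 1NNN: jump to NNN
            self.pc = op & 0xFFF
        # ... remaining opcodes (draw, keypad, timers) omitted
```

Loading the two instructions `6x2A` (V0 = 0x2A) and `7001` (V0 += 1) at 0x200 and stepping twice leaves V0 = 0x2B with the PC at 0x204.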

1. 1

What’s the battery life?

I’d like to make something very similar to this, but no emulator. Just write fast atmega assembly.

1. 1

I haven’t had it running on battery yet, but with some clever tickling of timers and the OLED I’m hoping to get about 48-96 hours on 3 x AA batteries.

1. 1

No way to get it on 1 AA? I think 3 is too many :D

1. 1

A single AA is 1.5v; you’re going to need at least 3.3v to run things properly (although I think I can get away with going down to 2.7v). 3 AAs should get me 4.5v, which lets me cheap out a little on power management ICs (everything is 3.3v/5v tolerant), although I might use an MCP1702 3.3v LDO regulator.

For this version of the board 3 AAs will be fine, I’ll look at options for using 2x AA or possibly coin batteries with a step-up converter or boost regulator down the line when I’m ready to build a production version. A lot of people forget the original gameboy used 4 AAs.
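The arithmetic behind the cell-count choice is easy to check. A quick sketch, assuming a hypothetical ~0.35 V LDO dropout (check the actual MCP1702 datasheet for real figures) and fresh-cell voltage:

```python
def battery_ok(cells, cell_v=1.5, rail=3.3, dropout=0.35):
    """True if `cells` series alkaline cells can keep an LDO-regulated
    rail up. The 0.35 V dropout is an assumed illustrative figure, and
    this ignores how far alkalines sag below 1.5 V as they discharge."""
    return cells * cell_v >= rail + dropout

# 3 AAs: 4.5 V in vs. 3.65 V needed -> fine.
# 2 AAs (3.0 V) or 1 AA (1.5 V): short of the rail + dropout,
# hence the need for a step-up/boost converter in those configs.
```

This is why dropping to one or two cells means adding a boost regulator rather than just a cheaper LDO.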

1. 5

I’ve heard that one reason microcomputers took off in the West while game consoles took off in Japan is that while you can get a lot of work done in English with a 256×240 display, that resolution isn’t high enough to properly display kanji. Likewise, you can fit two full English alphabets plus assorted punctuation in a few kilobytes of character ROM, but the storage required for a decent set of Japanese characters is huge in comparison. So it wasn’t until the early 90s that PCs became useful to the general Japanese public, by which time cultural trends were already set.

I imagine China had similar limitations, plus the whole “cultural revolution” deal on top.

1. 2

I don’t think this was the case, unless there were consoles that were released before the Famicom in 1986. By 1986, the character limitation problem had been solved six years earlier:

In 1980, the GB2312 Code of Chinese Graphic Character Set for Information Interchange - Primary Set was created, allowing for 99% of contemporary characters to be easily expressed.

So… I think this reason is just an incorrect theory. Didn’t Japan have microcomputers anyway?

1. 3

Character encodings like GB2312 are just an enumerated list of characters, I’m talking about storage space.

The Commodore 64, to pick a reasonably popular Western microcomputer, displayed 8×8 pixel characters. Wikipedia says it had 164 codes with “visible representations”, so it needed to store 8×8×164 = 10496 bits, or 1312 bytes of character data.

It’s not practical to display Chinese characters in 8×8 pixels, I believe 12×12 is the practical minimum. GB2312 has 6,763 characters, which means a Chinese microcomputer based on the same principles as the C64 would need to store 973,872 bits, or 121,734 bytes, or ~119KiB. Not only is that nearly two orders of magnitude larger (with the corresponding increase in production cost), it’s nearly twice the amount of memory that the C64’s 6510 CPU could address, meaning a hypothetical Chinese C64 would also require a much more expensive CPU. Either way, it wouldn’t have been economically possible for such devices to have been as popular as microcomputers were in the West.
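The storage figures work out as stated; a quick back-of-the-envelope check:

```python
def charset_bytes(w, h, count):
    """ROM size for a bitmap charset: w*h one-bit pixels per glyph."""
    return w * h * count // 8

latin = charset_bytes(8, 8, 164)      # C64-style charset
hanzi = charset_bytes(12, 12, 6763)   # full GB2312 at 12x12 pixels

# latin -> 1,312 bytes; hanzi -> 121,734 bytes (~119 KiB),
# roughly 93x larger, and nearly twice the 6510's 64 KiB address space.
```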

1. 1

I think you are right. Even though the characters may be able to be “expressed” in the encoding, they are probably talking theoretically, not about what could actually be displayed on a computer of the time.

1. 1

Trying to pull foreign-language information out of Google seems to be a lot more difficult now than it used to be, not sure why. I’ve been trying to find the keyboard pinout and protocol for the Japanese PC88 8-bit microcomputer for a few weeks now. Even with Japanese turned on in the (very poorly designed) “languages” screen, it’s not doing a great job at relevance.

edit: as soon as I complain, there it appears: https://electrelic.com/electrelic/node/597

1. 1

It makes me think we are in need of a less biased search engine. My main concern would be the storage space needed to archive all these sites. Speed would not really matter as long as we could do it within a year’s time. Actually, I guess archive.org is sort of like this?

1. 2

One of the things you can do is to plug foreign-language technical sources like this and your own findings into archive.org to make sure they are saved. Obviously their crawler can’t be everywhere, so stuff like this (especially in Japan where it seems like the best technical info is available on free ISP webspace) can get lost so easily.

1. 6

linux on the desktop is 45% stockholm syndrome, 15% wishful thinking, 15% undergrad code shambles, 15% cargo cult microsoft aping, and 10% cynical corporate complexity to sell support contracts.

1. 6

What’s your preferred alternative? The walled, proprietary gardens of Apple or Microsoft? OpenBSD?

It’s a serious question. Debian Stable as a desktop OS is working reasonably well for me. I want a unix-like system - no Windows - which offers broad choices of hardware - no OS X - and it should be free software - which rules out both of them. I’d switch to OpenBSD for most of my work, but I need stuff like docker for work and want reasonable gaming support at home. I could switch between different OSes for different tasks, but why bother? Debian truly is “the universal operating system” for me, even with all its faults.

1. 2

FWIW, I use FreeBSD on my Laptop. I know it is not ideal but I choose to do it and work through the pain because I can and because I think it’s good to support options.

1. 1

What is the preferred DE on *BSDs? GNOME?

1. 2

I just use i3. TrueOS is pushing for Lumina. Gnome is basically Linux only at this point with all of its systemd coupling, from what I understand.

1. 3

OpenBSD has good Gnome3 support, see here, although note that the instructions mentioned are out of date, it’s best to follow the readme that is installed when you pkg_add gnome.

1. 2

“systemd coupling” is mostly logind. It’s only really necessary for starting gnome-shell as a Wayland compositor. Someone should try either reimplementing logind for *BSD (there were such projects but I don’t think anyone got it completely working) or adding support for something like my little loginw thing to gnome-shell :) same for kwin_wayland.

I actually use Weston right now, and I’m going to write my own libweston-based compositor eventually… (loginw was created for that)

For X11, both gnome-3.26 and plasma5 should work.

1. 1

Do you write your own scripts for stuff like volume/backlight control, locking, etc.? Having used i3 for over a year, this was the least enjoyable part for me, because sometimes stuff would break/change/rename and I’d have to fiddle with my scripts.

1. 1

Yes I’ve been writing my own scripts. I haven’t had any issues with it. But like I said, I’m explicitly deciding to add some pain in my life to support something I think is bigger, so it’s not for everyone. Lumina, though, is a full DE AFAIK so that should handle the things you’ve brought up.

2. 1

Huh. Because my Linux desktop is peerlessly stable, bears no resemblance to anything Microsoft has released in the past thirty years, and is community developed and supported. I in fact find that the commercial desktop environments are unstable, unusable buggy garbage, and I’ve had the misfortune to have to use both of them fairly significantly.

Don’t confuse “Linux on the desktop” with “GNOME on the desktop” (or, for that matter, “intentionally using unstable software on the desktop”).

1. 14

The Cambridge Analytica scandal has prompted me to delete Facebook and be much more aware of my privacy. I know that deleting Facebook is a “cool” thing to do now, but it’s been a difficult decision. I still had many friends there that I have no other means of contacting. Ads have gotten much scarier recently, perfectly retargeted across services, so I was getting mentally ready for this. But stealing data for political purposes is where I draw the line.

I’ve also replaced google with DuckDuckGo, and am planning on changing my email provider too. But I don’t know if it’s going to be futile. I still shop on amazon and use many other irreplaceable services like google maps.

Again, I’m not a privacy freak. I try to find a middle ground between convenience and privacy, so these changes are hard for me.

Any recommendations for a balanced solution?

1. 6

Whereas I’m about to have to get back on Facebook after being off quite a long time. I’ve simply missed too many opportunities and too much info among local friends and family, since they just refuse to get off it. Once it has them in certain numbers, they find it most convenient to post on it. That’s on top of the psychological manipulations Facebook uses to keep them there. I’ll still use alternatives, stay signed out, block JS, etc. for everything I can. I will have to use it for some things for best effect.

The most interesting thing about leaving, though, was their schemes get more obvious. They tried to get me back in with fake notifications that had nothing to do with me. They’d look like those that pop up when someone responds to you but you’re not in the thread at all. They started with an attractive, Hispanic woman I’ve never seen from across the country that some friend knew. Gradually expanded to more attractive women on my Facebook but who I haven’t talked to in years or rarely like (not in my feed much). The next wave involved more friends and family I do talk to a lot. Eventually, the notifications were a mix of the exact people I’d be looking at and folks I’ve at least Liked a lot. I originally got nearly 100 notifications in (a week?) or something. Memory straining. Last time I signed in, there was something like 200-300 of them that took forever to skim with only a handful even real messages given folks knew I was avoiding Facebook.

So, that whole process was creepy as hell. Especially watching it go from strangers I guess it thought I’d like to talk to or date to people I’m cool with to close friends. A lure much like the Sirens’ song. Fortunately, it didn’t work. Instead, the services’ grip on my family and social opportunities locally are what might make me get back on. The older forms of leverage just in new medium. (sighs)

1. 3

It kind of depends on what you are trying to prevent. There are some easy wins through

1. As of March 2017, US ISPs automatically opt you in to sharing Customer Proprietary Network Information (CPNI), which they can sell to 3rd parties. You can still opt out of this: look for a CPNI opt-out with your ISP.
https://duckduckgo.com/?q=cpni+opt+out&t=ffab&ia=web

2. uBlock Origin / uMatrix are great for blocking tracking systems.
These do affect sites that make their money from ads, however.

3. Opt out of personalized adverting when possible

4. Make your browser difficult to fingerprint.
EFF has a tool called Panopticlick that can show you how common your browser’s fingerprint is. I locked down what I could (there should be instructions on Panopticlick’s site), and added an extension that cycles through various common user-agents. It might sound like overkill, but it’s not onerous to do.

5. Block cookies where you can.
I actually disabled this mostly. I still block 3rd-party cookies, but first-party cookies are allowed now. Using a hardware key or password vault makes signing in easy, but ironically the part that killed this for me was more sites supporting 2FA. I use Cookie AutoDelete for Firefox.

6. Use a DNS resolver you trust.
I don’t have a good suggestion for this one. I use quad-9, but I don’t really know enough to say whether or not I trust them.

1. 2

Unlike an email or web server, setting up a resolving-only DNS server is quite painless. I do this at home and rarely have issues. And if I do, I can reset it on a whim instead of trying to fight with tech support.

2. 1

I pay $40/year for Protonmail. It is fantastic. As for Facebook, why delete? It is actually a benefit to have an online presence for your identity, but you need to be careful with what about yourself you share. If you don’t take your online identity, someone else will. This is exactly why I’ve registered my name as a domain and kept it for years now. It is just another “string of evidence” that I am who I say I am on the internet. My FB is just a profile picture now and nothing else. I have set my privacy settings to basically super locked down. When it comes to socializing, there is little you can do to not be tracked. The only thing you can do is “poison the well” with fake information and keep important communication on secure channels (i.e. encrypted email, encrypted chat applications).

1. 1

I removed Facebook about 6 years ago and recently switched to Firefox beta and DDG. Gmail has had serious sticking power for me, though. I’ve had several fits and starts of switching email over the years, but my Gmail is so intertwined with my identity that nothing else has ever stuck. It is possible to switch, I’m sure, but in my case, I have never committed quite enough to pull it off.

1. 3

When I got off gmail, it took about two years before I wasn’t getting anything useful forwarded to my new identity. Setting up forwarding was quite painless and everything went smoothly otherwise. The sooner you start…

1. 2

When I looked into it, everyone was suggesting FastMail if the new service needs longevity and speed. It’s in a Five Eyes country, but it’s usually safest to assume they get your stuff anyway if you’re not using high-security software. The E2E services are nice but might not stick around. I’ve found availability and message integrity to be more important for me than confidentiality. People can always GPG-encrypt a file with a message if they’re worried about confidentiality. Alternatively, ask me to set up another secure medium. Some do.

1. 17

Trying to finish a long-running project: my e-ink computer.

1. 3

Amazing! Please keep us posted! Are you documenting the project anywhere else besides sporadic tweets?

1. 4

Yes, I document everything along the way. I do not like to publish about ongoing projects as I tend not to finish them when I do that :). Both the code and the CAD designs will be open sourced once the project is finished. I also plan to write a proper blog post about it. I still need to figure out the proper way to do partial refresh with this screen and it should be more or less done (the wooden case still needs some adjustments).

1. 1

Same, I would definitely be interested in following the project’s progress.

2. 2

Nice! What screen are you using, and how are you controlling it? Have you written any blog posts?

1. 2

It seems to be this one, same marks on the bottom corners and the shield looks the same: https://www.waveshare.com/wiki/7.5inch_e-Paper_HAT

2. 1

Is that a raspi it’s hooked up to? Where did you buy the screen? There is another guy doing e-ink stuff on the internet recently; you should go search for him. He is researching how to get decent refresh rates too. Instead of creating a laptop-like enclosure, you should make a monitor-like enclosure. It will look way better and be more reusable.

1. 1

So, one of the things that annoys me about this world is how we don’t have e-ink displays for lots of purposes that nowadays get done with a run-of-the-mill tablet. You don’t need a tablet for things like a board that shows a restaurant menu, or tracking buses in the area. So why can’t I find reasonably sized e-ink displays for such purposes?

1. 1

Entirely agree with you. I guess it can be explained by the fact that LCD screens have better brightness; they are better at catching the human eye’s attention. The e-ink technology is bistable on the other hand, making it highly energy efficient for such applications - when no frequent updates are needed.

Energy is cheap nowadays; we don’t really care about energy consumption anymore. But I guess this might change past peak oil. I guess these techs will start developing as soon as energy becomes scarce and expensive.

1. 35

I’ll bite.

General industry trends

• (5 years) Ready VC will dry up, advertising revenue will bottom out, and companies will have to tighten their belts, disgorging legions of middlingly-skilled developers onto the market - salaries will plummet.
• (10 years) There will be a loud and messy legal discrimination case ruling in favor of protecting political beliefs and out-of-work activities (probably defending some skinhead). This will accelerate an avalanche of HR drama. People not from the American coasts will continue business as usual.
• (10 years) There will be at least two major unions for software engineers with proper collective bargaining.
• (10 years) Increasingly, we’ll see more “coop” teams. The average size will be about half of what it is today, organized around smaller and more cohesive business ideas. These teams will have equal ownership in the profits of their projects.

Education

• (5 years) All schools will have some form of programming taught. Most will be garbage.
• (10 years) The workforce starts getting hit with students who grew up on touchscreens and walled gardens. They are worse at programming than the folks that came before them. They are also more pleasant to work with, when they’re not looking at their phones.
• (10 years) Some schools will ban social media and communications devices to promote classroom focus.
• (15 years) There will be a serious retrospective analysis in an academic journal pointing out that web development was almost deliberately constructed to make teaching it as a craft as hard as possible.

Networking

• (5 years) Mesh networks still don’t matter. :(
• (10 years) Mesh networks matter, but are a great way to get in trouble with the government.
• (10 years) IPv6 still isn’t rolled out properly.
• (15 years) It is impossible to host your own server on the “public” internet unless you’re a business.

Devops

• (5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.
• (10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.
• (15 years) There will still be work available for legacy Rails applications.

Hardware

• (5 years) Alternative battery and PCB techniques allow for more flexible electronics. This initially only shows up in toys, later spreads to fashion. Limited use otherwise.
• (5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are on permanent decline due to pathologies of selling to the cryptobutts folks at the expense of building a reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.
• (10 years) No significant changes in core count or clock speed will be practical; focus will shift instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

I’ve got other fun ones, but that’s a good start I think.

1. 7

(5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.

As of today, public cloud is actually solving several of these issues (and way more of them than people running their own hardware).

(10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

Containers are actually solving some real problems; several of them were already independently solved, but containers bring a more cohesive solution.

1. 1

Containers are actually solving some real problems; several of them were already independently solved, but containers bring a more cohesive solution.

I am interested, could you elaborate?

1. 1

The two main ones that I often mention in favor of containers (trying to stay concise):

• Isolation: We previously had VMs at the virtualization level, but they’re heavy, potentially slow to boot, and obscure (try to launch Xen and manage VMs on your pet server), while jail/chroot are way harder to set up, are specific to each of your applications, and do not allow you to restrict resources (to my knowledge).
• Standard interface: Very useful for orchestration, for example. Several tools existed to deploy applications with an orchestrator, but they mostly handled bare executables and suffered from the lack of isolation. Statically compiling solved some of these issues, but not every application can be statically compiled.

Containers are a solution to some problems but not the solution to everything. I just think that wishing they weren’t there probably means the interlocutor didn’t understand their benefits.

1. 2

I just think that wishing they weren’t there probably means the interlocutor didn’t understand their benefits.

I’ve been using FreeBSD jails since 2000, and Solaris zones since Solaris 10, circa 2005. I’ve been writing alternative front-ends for containers in Linux. I think I understand containers and their benefits pretty well. That doesn’t mean I don’t think docker, and kubernetes, and all the “modern” stuff are a steaming pile, both the idea and especially the implementation. There is nothing wrong with container technology; containers are great. But there is something fundamentally wrong with the way software is deployed today, using containers.

1. 1

But there is something fundamentally wrong with the way software is deployed today, using containers.

Can you elaborate? Do you have resources to share on that? I feel a comment on Lobsters might be a bit light to explain such a statement.

2. 1

You can actually set resource isolation on various levels: classic Unix quotas, priorities (“nice” in sh) and setrlimit() (“ulimit” in sh), Linux cgroups etc. (which is what Docker uses, IIUC), and/or more specific solutions such as java -Xmx […].

1. 2

So you have to use X different tools and syntaxes to set the CPU/RAM/IO/… limits - and why use cgroups directly when you can have cgroups plus other features using containers? I mean, your answer is correct, but in practice it’s deeply annoying to work with these at large scale.

1. 4

Eh, I’m a pretty decent old-school sysadmin, and Docker isn’t what I’d consider stable. (Or supported on OpenBSD.) I think this is more of a choose-your-own-pain case.

1. 3

I really feel this debate is exactly like debates about programming languages. It all depends on your use-cases and experience with each technology!

1. 2

I’ll second that. We use Docker for some internal stuff and it’s not very stable in my experience.

1. 1

If you have <10 applications to run for decades, don’t use Docker. If you have 100+ applications to launch and update regularly, or at scale, you often don’t care if 1 or 2 containers die sometimes. You just restart them and it’s almost expected that you won’t reach 100% stability.

1. 1

I’m not sure I buy that. Our testing infrastructure uses docker containers. I don’t think we’re doing anything unusual, but we still run into problems once or twice a week that require somebody to “sudo killall docker” because it’s completely hung up and unresponsive.

1. 1

We run thousands of containers every day at $job and it’s very uncommon to have containers crashing because of Docker.
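The classic per-process knobs mentioned above (ulimit/setrlimit-style limits) can at least be driven from one place. A sketch using Python's stdlib `resource` module - POSIX-only, and note cgroups (what Docker uses) are a separate mechanism covering per-group accounting and IO that these calls don't:

```python
import resource

# Named limits comparable to `ulimit` flags.
LIMITS = {
    "cpu_seconds": resource.RLIMIT_CPU,    # ulimit -t
    "memory_bytes": resource.RLIMIT_AS,    # ulimit -v
    "open_files": resource.RLIMIT_NOFILE,  # ulimit -n
    "core_bytes": resource.RLIMIT_CORE,    # ulimit -c
}

def apply_limits(**caps):
    """Apply soft limits to the current process, keeping each
    existing hard limit, e.g. apply_limits(cpu_seconds=60).
    Fails with ValueError if a cap exceeds the hard limit."""
    for name, cap in caps.items():
        res = LIMITS[name]
        _, hard = resource.getrlimit(res)
        resource.setrlimit(res, (cap, hard))
```

For example, `apply_limits(core_bytes=0)` disables core dumps for the current process, the same effect as `ulimit -c 0` in a shell.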

2. 1

Easier local development is a big one - developers being able to quickly bring up a full stack of services on their machines. In a world of many services this can be really valuable - you don’t want to be mocking out interfaces if you can avoid it, and better still is calling out to the same code that’s going to be running in production. Another is the fact that the container that’s built by your build system after your tests pass is exactly what runs in production.

3. 7

(5 years) VR fails to revitalize the wounded videocard market. Videocard manufacturers are in permanent decline due to the pathologies of selling to the cryptobutts folks at the expense of building a reliable customer base. Gamers have decided graphics are Good Enough, and don’t pay for new gear.

While I might accept that VR may fail, I don’t think video card companies are reliant on VR succeeding. They have autonomous cars and machine learning to look forward to.

1. 2

(10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.

This trend also supports a shift away from scripting languages towards Rust, Go, etc. A focus on hardware extensions (eg deep learning hardware) goes with it.

1. 1

(10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.

One can dream!

1. 2

Would you (or anyone) be able to help me understand this point please? My current job uses containers heavily, and previously I’ve used Solaris Zones and FreeBSD jails. What I see is that developers are able to very closely emulate the deployment environment in development, and don’t have to do “cross platform” tricks just to get a desktop that isn’t running their server OS. I see that particular “skill” as unnecessary unless the software being cross-platform is truly a business goal.

1. 1

I think Jessie Frazelle answers this concern perfectly here: https://blog.jessfraz.com/post/containers-zones-jails-vms/

P.S.: I have the same question to people that are against containers…

2. 1

(5 years) Mesh networks still don’t matter. :( (10 years) Mesh networks matter, but are a great way to get in trouble with the government.

Serious attempts at mesh networks have basically not existed since the 2000s, when everyone discovered it’s way easier to deploy an overlay net on top of Comcast than to make mid-distance hops with RONJA/etc.

It would be so cool to build a hybrid USPS/UPS/FedEx batch + local realtime link powered national-scale network capable of, say, 100 MB per user per day, with a ~3-day max latency. All attempts I’ve found are either very small scale, or just boil down to sending encrypted packets over Comcast.

1. 1

Everyone’s definition of mesh is different, but today there are many serious mesh networks, the main ones being Freifunk and Guifi.

2. 1

(10 years) There will be at least two major unions for software engineers with proper collective bargaining.

What leads you to this conclusion? From what I hear, it’s rather the opposite trend, not only in the software industry…

(5 years) All schools will have some form of programming taught. Most will be garbage.

…especially if this is taken into account, I’d argue.

(10 years) Some schools will ban social media and communications devices to promote classroom focus.

Aren’t these already banned from schools? Or are you talking about general bans?

1. 1

I like the container one; I also don’t see the point.

1. 1

It’s really easy to see what state a container is in because you can read a 200 line text file and see that it’s just alpine linux with X Y Z installed and this config changed. On a VM it’s next to impossible to see what has been changed since it was installed.

1. 3

what state a container is in because you can read a 200 line text file and see that it’s just alpine linux with X Y Z installed

I just check the puppet manifest

1. 2

It’s still possible to change other things outside of that config. But since a container has almost no persistent state, if you change something outside of the Dockerfile it will be blown away soon.

2. 1

Containers won’t be needed, because unikernels.

3. 1

All schools will have some form of programming taught. Most will be garbage.

and will therefore be highly desirable hires to full stack shops.

1. 1

I would add the bottom falling out of the PC market, making PCs more expensive, as gamers and enterprise, the entire reason it still maintains economies of scale, just don’t buy new hardware anymore.

1. 1

I used to always buy PCs, but indeed the last 5 years I haven’t used a desktop PC.

1. 1

If it does happen, it’ll probably affect laptops as well, but desktops especially.

2. 1

(5 years) All schools will have some form of programming taught. Most will be garbage.

My prediction: Whether the programming language is garbage or not, provided some reasonable amount of time is spent on these courses we will see a general improvement in the logical thinking and deductive reasoning skills of those students.

(at least, I hope so)

1. 5

I like s-q-l but I hear sequel a lot more

1. 2

In my university, and at school, (Germany) I’ve never heard anyone say sequel, only SQL. Probably because it is interpreted as an acronym.

1. 1

In real world office talk, always s-q-l but in academics, “sequel”. I think there is a pattern here.

Some acronyms are meant to be pronounceable, but I don’t think that’s the case for SQL.

1. 1

The guy’s confusion in the comments is exactly what I’m encountering.

Clearly, $\bar{f}$ is computable, so it should belong in set A (in fact, it should be included by the “infinite function generator”).

I’m not sure how this post proves anything.

I want to understand ;_;

1. 2

So I think there is a sneaky self-referential bit in there because $\bar{f}$’s form depends on the input. The form of the input informs the computation.

The form of the proof is a bit like the “Say P is the largest prime” and then you go on to show, “but wait, if it was, how can I pull this other, bigger, prime out of my back pocket?!”

Here we start by saying here’s a table of all the possible functions in A and their input values. My table is infinite and I say I have all possible functions in it.

But then you come in and pull out $\bar{f}$ from your back pocket and say “Is that SO?! Then explain THIS!” and it turns out for some given input $\bar{f}$ is guaranteed to give a different output than ANY function in my INFINITE table, so POOF, it must mean my infinite table is not enough.
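The diagonal move described above can be made concrete; here is a toy Python illustration with a finite table standing in for the infinite one, where the “back pocket” function is built to disagree with row i exactly at input i:

```python
# Any enumeration of total functions on the naturals (finite here,
# but the argument works the same for an infinite table).
table = [
    lambda n: 0,        # f_0: constant zero
    lambda n: n,        # f_1: identity
    lambda n: n * n,    # f_2: squaring
]

# The diagonal function: on input n, look up row n of the table
# and return something different from f_n(n).
def f_bar(n):
    return table[n](n) + 1

# f_bar disagrees with every f_i at input i, so it can't be any row:
# no matter how the table was filled, f_bar is missing from it.
for i, f in enumerate(table):
    assert f_bar(i) != f(i)

print([f_bar(i) for i in range(3)])  # [1, 2, 5]
```

Appending f_bar to the table doesn’t help: the same construction over the enlarged table produces yet another missing function, which is why “just double the table” never closes the gap.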

1. 1

I guess that is the problem I am having. I knew it was self-referential, but…to me, infinity is also somewhat self-referential.

It is really hard for me to describe. But let’s say I define an infinity that is the sum of all natural numbers.

Let’s say our “infinity” stopped (I know this is not right to say…) at 3. 1 + 2 = 3. Now let’s go one step forward: 1 + 2 + 3 = 6. 3, the previous definition of the infinite sum, is now part of the sequence.

This is kind of how I see this example.

Basically this guy’s table T would be double the size, but really, that’s how it should have always been.

1. 0

I’m always on the lookout for how I can win my kids’ cooperation in novel and fun ways. Bonus points if it involves getting to tinker on an engineering project myself.

So “engineering” basically means nothing now.

1. 2

Gotta make yourself feel good somehow.

I like to go by “Software creationist” these days. Separates me from everyone else.

/s

1. 3

Haha, very cool.

I just skipped to the end to see what their conclusion was:

The major conclusion of the study is that of the three honing methods studied, the best method for removing the burr and setting the edge angle is clearly a final polish on leather loaded with a polishing compound such as the chromium oxide or diamond compounds used here.

1. 1

I’m in this person’s shoes as well. Everything they say is pretty much spot on.

Engage in group conversations in slack, make sure there is a #random channel or similar for “watercooler talk”.

Take a trip once or twice a year to the head office, and organize a group outing.

1. 2

Is IMAP considered the “cleanest” implementation of email possible?

1. 3

No, but JMAP probably is.

IMAP is like a filesystem with poor performance, and it doesn’t allow batching of commands.
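For contrast, JMAP (RFC 8620) batches several method calls in one HTTP request, and later calls can consume earlier results via back-references. A sketch of such a request body built in Python (the accountId and limit are made-up example values):

```python
import json

# One JMAP request carrying two batched method calls: a query for
# the newest message ids, and a fetch that consumes those ids via
# a back-reference ("#ids" pointing at the call tagged "q").
request = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        ["Email/query", {"accountId": "a1", "limit": 10}, "q"],
        ["Email/get",
         {"accountId": "a1",
          "#ids": {"resultOf": "q", "name": "Email/query", "path": "/ids"}},
         "g"],
    ],
}

body = json.dumps(request)  # POSTed as a single HTTP request
print(len(request["methodCalls"]))  # 2
```

Doing the same over IMAP takes a round trip per step; here the server resolves the dependency itself within one request.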

1. 1

FWIW IMAP also seems to have its own host of layers of cruft

1. 3

Allows me to easily create binary-based formats and write tail/head-style Unix programs. I use this for a couple of personal services; it saves me tons of space and is very efficient. I write data every second. Using tools like gzip, fswatch, parallel, and others, I can compress my data and manipulate it in parallel with ease.
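A sketch of the idea, assuming fixed-size records (the 12-byte timestamp+value layout is a made-up example): because every record has the same size, a `tail` over the binary log is just a seek from the end:

```python
import io
import struct

# Hypothetical record layout: uint32 timestamp + float64 value,
# little-endian, 12 bytes per record.
REC = struct.Struct("<Id")

def write_records(buf, records):
    for ts, val in records:
        buf.write(REC.pack(ts, val))

def tail_records(buf, n):
    """Return the last n records by seeking, like `tail` for a binary log."""
    buf.seek(0, io.SEEK_END)
    count = buf.tell() // REC.size
    take = min(n, count)
    buf.seek((count - take) * REC.size)
    return [REC.unpack(buf.read(REC.size)) for _ in range(take)]

buf = io.BytesIO()  # stands in for a file opened with open(path, "rb")
write_records(buf, [(i, i * 0.5) for i in range(1000)])
print(tail_records(buf, 2))  # [(998, 499.0), (999, 499.5)]
```

The same seek arithmetic works on a real file, which is why fixed-size binary records pair so well with head/tail-style tooling.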

1. 1

As software people, maybe we need to stop relying on ever-more-complex OoO hardware to make our code faster over time, and design software that can run optimally on simpler in-order CPUs instead.

I’ve had this idea for a while now: revert to simple CPUs. The Raspberry Pi is more than enough performance for all everyday needs, and even most work. I’ve written Z80 software, and it amazes me what you can do with less than 1 megabyte of RAM and a few dozen megabytes of storage. When I see processors with GHz speeds, it blows my mind: if we could write such functional software on a 4 MHz Z80, how do we still not accomplish magnitudes more? It is just amazing. I was thinking of using RISC OS as my casual driver, but the one thing stopping me is support. You are essentially all alone out there.

If you need the big guns, then you outsource your computations to an in-house compute server or external one.

The only issue I see, is gaming. Resource intensive, and requires external rendering hardware. I can’t imagine right now how you’d send out rendering tasks to a render or compute server and return them fast enough.

1. 5

Why discourage others from reading the paper?

Everyone: Read this paper! It’s well written and very accessible if you know the basics of how CPUs work.

1. 2

Yeah the paper isn’t that difficult to understand.

1. 1