1. 1

    This isn’t what ubershader means in practice, right? I thought an ubershader was a shader compiler that simply enumerated the possible combinations.

    1. 9

      eh, “ubershader” tends to be used to describe any huge shader program that has a lot of dynamic behaviour, like, e.g., the ubershader used on modern cards to implement the old OpenGL fixed-function API calls. You compile the shader once and then change its behaviour with uniforms, which is more or less exactly what Dolphin is doing.

      1. 8

        a ubershader isn’t a compiler, it’s a type of (usually very large) shader that’s runtime-configured based on uniforms rather than having many optimized shader variants.

        compare an if/else structure to a templated function.
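
        To put that comparison in code: a rough sketch in plain C++ rather than actual shader code, with made-up names just to illustrate the analogy. The “ubershader” approach branches at runtime on a value that plays the role of a uniform; the “variant” approach bakes the choice in when the code is built.

            #include <cstdio>

            // "Ubershader" style: one big function whose behaviour is picked at
            // runtime by a flag (standing in for a shader uniform).
            void shade_uber(bool use_fog) {
                if (use_fog)
                    std::puts("shade fragment with fog");
                else
                    std::puts("shade fragment without fog");
            }

            // "Variant" style: the decision is baked in when the code is built,
            // like compiling a separate optimized shader per configuration.
            template <bool UseFog>
            void shade_variant() {
                if constexpr (UseFog)
                    std::puts("shade fragment with fog");
                else
                    std::puts("shade fragment without fog");
            }

            int main() {
                shade_uber(true);        // behaviour flipped by a runtime value
                shade_variant<false>();  // behaviour fixed at compile time
            }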

      1. 8

        I feel like an “inaccurate” tag would be in order.

        1. 5

          It feels slightly odd that there’s an “incorrect” downvote for posts, but no “incorrect” flag for articles.

          1. 2

            I’d add it if it existed :D

          1. 7

            I would wait for confirmation from other sources, since semiaccurate is notoriously… semi-accurate.

            1. 13
              1. 7

                If I understand the Intel advisory correctly, still pretty bad, but won’t apply to most end users. The remote vulnerability only applies to systems that have provisioned Intel AMT (not merely a chip with AMT support, which would be much worse). Provisioning doesn’t happen by default, and is pretty much only done by business users. A secondary vulnerability, disclosed simultaneously, is that local unprivileged users can provision AMT if it’s not already provisioned (presumably for the purpose of subsequently exploiting it remotely), which provides a local privilege escalation route.

                1. 3

                  “The remote vulnerability only applies to systems that have provisioned Intel AMT”

                  Maybe. My prediction had two parts. One is that they hid a backdoor in AMT. The other is that they saved costs by keeping that wiring in a lot of chips, with it just not visible to the user of some processors depending on what they paid for. The latter is very common in the hardware industry to save costs. The most prominent example being hard disks that all had the same-sized platters, but firmware or something made them pretend to be different sizes. More profitable because it cost the same to make but you charge people more or less depending on use. The main guy that taught me hardware risks also gave the example of mobile SoCs being reused in other products since the supplier already got a discount on them. They just didn’t advertise that wireless or other phone features were in it.

                  We don’t know for sure that the hardware or firmware can’t be accessed remotely if the users haven’t explicitly turned it on. The initial reading I did even said it listens while the system is off. It draws little power when in use. The fact that it’s there is reason to continue not trusting Intel CPUs if you have any worry about backdoors or high-end attacks via the network.

                  1. 2

                    Yeah, the whole Intel 80486 SX was just a regular 80486 with a defective (or perhaps marginal) FPU.

                    1. 2

                      Once yields improved, they became 486es with working but disabled floating point units. I think later on they went to 486SXes that didn’t have an FPU at all.

                      1. 1

                        Likewise for the AMD Triple Cores being defective Quad Cores, but at least they were honest about it. Wonder if they’ll be honest about their management engines being defective “secure” management engines. ;)

                  2. 6

                    Intel would like to thank Maksim Malyutin from Embedi for reporting this issue and working with us on coordinated disclosure.

                    That’s not how you spell SemiAccurate. Intel cutting Charlie out of the loop!

                    1. 1

                      much, much better!

                    2. 5

                      Intel’s marketing material saying they added remote access plus their track record on errata and firmware quality told me it was true years ago. Patterns like that just keep repeating. Best to assume it’s insecure until they prove it’s not.

                      1. 7

                        I don’t doubt there are potential vulnerabilities in the system, with varying severities and varying attack difficulties, but this particular article on this particular vulnerability should be taken with a fairly large grain of salt, particularly regarding any of the details, given the source.

                        Semiaccurate is well-known for posting speculation as fact, but worse, they often have major misunderstandings of the material they report on, leading to errors, incorrect deductions, wild speculation, etc. My favorite example (a real quote, not satire):

                        You probably don’t remember but the Midgard architecture you know and love is a four wide architecture four stages deep. Each cycle one thread, aka a triangle or quad, is issued to the execution units. Since they are four wide they can take a full quad a cycle which is a really good thing. Unfortunately most game developers seem stuck on triangles which tend to use only three of the SIMD vector lanes. This is bad but modern power gating means it won’t consume hideous amounts of power, it just doesn’t utilize the hardware to its maximum potential often. The technical term for this is inefficiency.

                        1. 3

                          Now that’s a good point on source reliability. The triangle thing is hilarious, too. Thanks for that one.

                    1. 47

                      It’s way worse than Flash. Flash never used nearly as much RAM or CPU as any of these apps, and it ran on a Pentium 2 with Windows 98!

                      1. 15

                        That’s funny cuz I was about to reply to @mattgreenrocks’ quip about frame rate by pointing out Half-Life could run on my Pentium 2 (200+MHz) with 64MB of total RAM and Windows 98. Haha. You must have had one, too, back in the day.

                        Heck, I could do so much on that box. Programming, servers, music, movies, MS Office, photo editing, Web, etc. Then these modern apps need a gig for chat or have slow, 2D interfaces? WHAT!?

                        Maybe they were raised on WebTV, had to use it for developing software, and eventually it “upgraded” to Electron. Some crazy shit like that where the apps would seem impressive.

                        1. 25

                          I still game these days, and it is incredible how far graphics have come since HL1.

                          I think that feeds part of my pessimism: by evening, I’ll indulge in these incredibly rich, complex graphical worlds (which aren’t perfect by any stretch), and then by day, I keep hearing about devs that are proud they hit 60fps animating a couple of rectangles on an obscenely powerful CPU and GPU.

                          I also hate that it feels like I’m out of place in tech for my engineering leanings, despite putting in way too many hours in HS, college, and after college. The time honestly feels wasted if expertise is not valued. I know that it is, a bit, but collectively, it’s all about mouthy Medium thinkpieces and cramming JS into everything and being hailed a hero for making things “accessible,” externalities be damned.

                          1. 8

                            The difference between performance-conscious code (not even super-tweaked code) and common code is just ridiculous.

                            Slack in Firefox freezes entirely for 5 seconds when you scroll up while it streams and renders a few KB of text; games stream in and decompress hundreds of megabytes per second without a hitch.

                            People write brag posts about serving millions of requests per day on their big distributed AWS architecture. Go’s compiler is considered very fast and a quick google puts it at tens of thousands of lines per second. Drawing high hundreds of millions of triangles per second is easy even on shit hardware.

                            One of the really harmful effects of this is that it makes people think computers are slow. If I have a few thousand things I need to update, I often catch myself thinking “this is going to be slow, I need to make it async/bust out fancy algorithms/etc”. I have to remind myself that if I just write the stupid code it will most likely run instantly. You can even do O(n^2) on hundreds of objects and have it complete in 0ms.
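
                            For a sense of scale, a minimal sketch (not a serious benchmark): a naive O(n^2) pass over 500 objects is only 250,000 operations, which finishes in a fraction of a millisecond on any modern machine.

                                #include <chrono>
                                #include <cstdio>
                                #include <vector>

                                int main() {
                                    // A few hundred "objects", updated with a deliberately naive O(n^2) pass.
                                    std::vector<double> objects(500, 1.0);

                                    auto start = std::chrono::steady_clock::now();
                                    double total = 0.0;
                                    for (std::size_t i = 0; i < objects.size(); ++i)
                                        for (std::size_t j = 0; j < objects.size(); ++j)
                                            total += objects[i] * objects[j];  // 250,000 multiply-adds
                                    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                                                  std::chrono::steady_clock::now() - start).count();

                                    std::printf("total=%.0f in %lld microseconds\n", total, (long long)us);
                                }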

                            1. 2

                              I have to remind myself that if I just write the stupid code it will most likely run instantly.

                              Indeed. And in the few times it is slow, I usually fix it with two more lines:

                              window.SuspendRedraw();
                              // Lots of changes
                              window.ResumeRedraw();
                              
                          2. 7

                            I’ll eat the humiliation to make a point backing you up! :)

                            Here is me recording my desktop in 2007 with an old crappy Sony Ericsson phone camera.

                            https://www.youtube.com/watch?v=4CBWGdxosiY

                            This is desktop 3D effects on a f-ing PIII 600 (256MB RAM) with a Radeon 9250. I had that machine for >10 years.

                            Did programming on it, gaming, learning. Everything. Finally upgraded to a newer box as the web browser became unwieldy.

                            I miss that box and how software was made back in the day.

                            1. 1

                              I never tried doing a 3D desktop. Neat that it handled it so well at 600MHz. Although, I did some VRML back in the day. Was even on VWWW sites & 3D chat thinking 3D might be the next cool thing. Nope. Worse, VRML chats had environments that loaded up piece by painful piece on my 28Kbps connection. Back to IRC haha.

                              So, some things definitely got better over time like networked, real-time apps. The local ones got way worse, though, in terms of what one could do with the machine.

                              1. 1

                                I remember using a 3D desktop program (yes, it was a real program) on my machine [1] and it ran a bit sluggishly. On a 33MHz CPU with 16M of RAM. That same program, today, would probably be acceptably fast, even on an emulator (the CPU in question was a MIPS R3000).

                                My, how times have changed.

                                [1] Not really mine, but the University’s. I was, however, about the only user on the machine.

                                1. 1

                                  What computer did you have from SGI that ran that program on a 33MHz CPU, etc.? I thought the workstations ranged from 100+MHz on up even in the early ’90s. Or was it an emulator or a port?

                                  1. 2

                                    I’m guessing @spc476 was using an Indigo, the earlier variants of which had a 33MHz R3000 (later versions had 100MHz R4000s). I love SGIs :)

                                    1. 1

                                      Ahh. Didn’t know about them. Thanks. Yeah, I miss them too. Way ahead of their time with HW architecture.

                                      1. 1

                                        IRIS actually, but yes, close enough 8-)

                          1. 3

                            I knew that the x86 instruction set was a bit irregular, but I find this really disgusting. Not the hack itself, it’s a neat hack given the circumstances … but the circumstances.

                            So there are some instructions that take up five bytes including operands, some that take up seven bytes … shudder

                            1. 9

                              Intel capped the instruction length of the x86 at 15 bytes. That’s not the worst offender though. The 68K family has one instruction that can be 22 bytes in length (MOVE from memory to memory, each argument using one of the more esoteric addressing modes). The VAX might win this though, as there is one instruction that can take 31 bytes to encode (ADDP6, add packed 6-operand) and another one that can take up to all available memory (CASEL [1]).

                              And 5-byte instructions are not limited to 32-bit machines. My favorite 8-bit CPU, the 6809, also has a few 5-byte instructions (although most are two to three bytes).

                              [1] The instruction takes four parameters, one of which is a table of displacements immediately following the instruction. The fact that you can’t locate this table elsewhere is what makes me say this is the longest encoded instruction on the VAX.

                              1. 2

                                I’ve worked on a machine that maxes at 28 bytes per instruction. It’s still shipping in consumer products

                              2. 4

                                Then I guess you’ve got to be disgusted by a lot of ISAs, looking at https://en.wikipedia.org/wiki/Comparison_of_instruction_set_architectures and how many of them have variable length instruction encoding.

                                1. 1

                                  I expect you’re right about that. Some are worse than others, though.

                              1. 4

                                A proprietary platform running on top of a permissively licensed program? Well, color me surprised.

                                1. 20

                                  And the problem is? That’s precisely why people license it that way.

                                  1. 2

                                    Not exclusively. There are a reasonable number of people (myself sometimes included) who use permissive licenses not because we want big companies to refuse to contribute back changes, or approve of them not doing so, but just because the cure of a GPL or AGPL style license can sometimes be worse than the disease in terms of imposing license-compatibility complexity on downstream users. It’s completely consistent imo to say, if someone makes significant improvements to my software and doesn’t likewise share them, I’m not going to reserve the right to sue them (hence a permissive license), but I do reserve the right to think they’re kind of a jerk.

                                    It’s even fairly common sentiment in parts of the BSD community. I remember especially a few years back when OpenBSD was in financial trouble, people were drawing up lists of companies that had products built on OpenBSD and didn’t give anything back (code or money). De Raadt, for example, has complained that “it sure is sad that none of these companies return even a fraction of value in kind”, though nobody disputes that they had no legal obligation to do so.

                                    1. -1

                                      Yeah, that’s my point. Wasted effort.

                                      1. 17

                                        Ah yes, it’s definitely wasted effort to create great software, then watch people use it and benefit from it.

                                        1. 18

                                          If you’re on the BSD side of open-source then there is a reasonable train of thought to it.

                                          Many people create open-source not just to spread the idea of open-source, but to let others have good software. BSD people are usually totally happy with a closed-source system running BSD because they know that system is more stable and secure than if the company tried to create something themselves.

                                          This is outright appalling to the GPL people, but oh well.

                                          1. 6

                                            This is outright appalling to the GPL people, but oh well.

                                            No it’s not. I don’t know a single “GPL person” (including myself) who is anything remotely close to “appalled” by permissive licensing. The most anyone might care is being slightly disappointed that the author is allowing the proprietarization of their software. Personally, I’m just bemused that they would want to donate their labor to people making money off it without compensation (it’s like the opposite of socialism!), but that’s entirely their prerogative.

                                            1. 13

                                              A lot of BSD and Apache licensed code I have written was paid for.

                                              1. 8

                                              If they had based the device on Linux, the only kernel bits modified were some proprietary hardware drivers that aren’t of much use to anyone else, the device was still locked down so you can’t run your own software on it, and all the fun stuff was in closed-source userland, how would that be any different from Linux kernel developers who still “donated their labor to people making money off it without compensation”?

                                                1. 4

                                                  Or more likely these days, some company building a SaaS product on top of Linux with some custom changes (like say… some custom hardware support) that nobody else ever gets to see. As long as said company never distributes it, they are in compliance with the GPL license.

                                                2. 5

                                                  The most anyone might care is being slightly disappointed that the author is allowing the proprietarization of their software.

                                                But that’s pure fiction. No permissive software is ever “proprietarized”. The permissively licensed work is never taken away. FreeBSD is still right there, as permissive as ever.

                                                  The only proprietary thing here is the proprietary work Nintendo did beyond or on top of it.

                                                  Personally, I’m just bemused that they would want to donate their labor to people making money off it without compensation

                                                  I’ve been paid for all the permissively-licensed software I’ve ever written. My employer was happy to donate the code to the community for a variety of reasons.

                                        1. 2

                                          Did Google do the CPU design? Is Rockchip just doing the fabrication?

                                          Odd world to have Google poised to join Apple as the best mobile CPU vendors. Maybe they got sick of Qualcomm’s relatively lackluster performance.

                                          1. 2

                                            I’m not sure there’s any evidence for a Google-designed CPU; if it was happening, it’d be pretty hard to hide hiring a team of that size.

                                            1. 4

                                              Right. Looks like an ARM-designed CPU core for sure.

                                              Last October, a product page for the Plus, then branded the Chromebook Pro, was leaked, ID'ing the chip as the Rockchip RK3399. Some folks benchmarked a dev board with it. Some early announcements about it exist too, also tagging it as based on Cortex-A72/A53 cores and a Mali GPU.

                                              There are also benchmarks out there of another A72-based SoC, the Kirin 950.

                                              1. 2

                                                There’s reasonable evidence of Google ramping up at least more competence in chip design over the past 3-5 years than they traditionally had, which seems to spawn rumors of a Google CPU every time they hire someone. Anecdotally from the perspective of academia, they do seem much more interested in CE majors than they once were, plus a few moderately high-profile hardware folks have ended up there, which would’ve been surprising in the past. But I agree it’s nowhere near the scale to be designing their own CPU. I don’t know what they’re actually doing, but assumed it was sub-CPU-level custom parts for their data centers.

                                                1. 8

                                                  CPU design is also a really small world; it’s almost all the same people bouncing between teams. You can trace back chip designs to the lineage of the people who made them; there are even entire categories of “pet features” that basically indicate who worked on the chip.

                                                  1. 3

                                                    Pet features, that’s neat. Like ISA features or SoC/peripheral stuff? Can you give an interesting example?

                                                    1. 10

                                                      One example is the write-through L1 cache, which iirc has a rather IBM-specific heritage. It also showed up in Bulldozer (look at who was on the team for why). A lot of people consider it to be a fairly bad idea for a variety of reasons.

                                                      Most of these features tend to be microarchitectural decisions (e.g. RoB/RS design choices, pipeline structures, branch predictor designs, FPU structures….), the kind of things that are worked on by quite a small group, so show a lot of heritage.

                                                      This is probably a slightly inaccurate and quite incomplete listing of current “big core” teams out there:

                                                      Intel: Core team A, Core team B, and the C team (Silvermont, I think)? They might have a D team too.

                                                      AMD: Jaguar (“cat”) team (members ~half laid off, ~half merged into Bulldozer), not sure what happened after Bulldozer, presumably old team rolled into Zen?

                                                      ARM: A53 team, A72 team, A73 team (Texas I think)

                                                      Apple

                                                      Samsung (M1)

                                                      Qualcomm (not sure what the status of this is after the death of mobile Snapdragon, but I think it’s still a thing)

                                                      nvidia (not sure what the status of this one is after Denver… but I think it’s still a thing)

                                                      Notably when a team is laid off, they all go work for other companies, so that’s how the heritage of one chip often folds into others.

                                              1. 9

                                                Ahem.

                                                Though on a related note, Penny Arcade should’ve been put out to pasture years ago.

                                                Some gaming grumping from me:

                                                 • Half-Life 2 and Doom 3 were both worse games, graphically and in gameplay, than Farcry.
                                                • F.E.A.R. still looks better than most things put out in the last decade.
                                                • Text adventure games are mostly crap–I’m looking at you Infocom.
                                                • Total Annihilation was and always will be better than any of the garbage Blizzard puts out.
                                                • Focusing on the “meta” of games and adopting “games-as-a-service”-style development has helped ruin game design.
                                                • Halo: Combat Evolved was the best in the series, combat-wise and story-wise.
                                                • STALKER is more atmospheric than any of the new Fallout or Elder Scrolls games. Cheeki breeki.
                                                 • Golden oldie isometric games like Fallout and Arcanum and Jagged Alliance are almost unplayable due to their graphics.
                                                • WoW killed the MMO market and has been a net loss on humanity (unlike Eve). RIP Asheron’s Call, DAoC, Ultima Online, and SWG.
                                                • Blizzard single-handedly set back the RTS genre from the interesting places Cavedog and Ensemble and Relic were taking it.
                                                • Gamers are terrible customers.

                                                EDIT: Removed overly-serious grumping.

                                                1. 3

                                                  Halo: Combat Evolved was the best in the series, combat-wise and story-wise.

                                                  Yep. I’ll add one to it:

                                                  Halo has the most bland and boring universe of current popular series.

                                                  1. 3

                                                    Total Annihilation was and always will be better than any of the garbage Blizzard puts out.

                                                    Fight me

                                                    Total Annihilation was an incredibly innovative and visionary game, sure. It also had tons of cruft (floating metal makers), game-breaking bugs (pelicans), and flat-out unusable units (necros, buzzsaws). Not to mention that CORE, one out of the two factions in the game, was a strictly worse pick.

                                                    Starcraft was a much less interesting game than TA was. But at least all of the units were useful and all of the factions were competitive. Worse is better because worse is balanced.

                                                    XTA running on Spring was fantastic, though.

                                                    1. 2

                                                      Halo: Combat Evolved was the best in the series, combat-wise and story-wise.

                                                      And yet its story was still a huge let-down compared to Bungie’s previous work.

                                                      Text adventure games are mostly crap–I’m looking at you Infocom.

                                                      Sierra *Quest adventures were only fun when you were a kid because what else were you going to do; of course you have time to click on every single thing on the screen with every single thing in your inventory to solve that puzzle that doesn’t make any logical sense.

                                                      1. 1

                                                        Total Annihilation was and always will be better than any of the garbage Blizzard puts out. Blizzard single-handedly set back the RTS genre from the interesting places Cavedog and Ensemble and Relic were taking it.

                                                        Did you ever play Supreme Commander? (the original one, with Forged Alliance). That keeps most of what made TA great and adds a much better interface and general workingness. I still play with my friends even today.

                                                        (Too bad the sequel was bad, or so my group’s consensus was anyway)

                                                        (If the Relic reference is about Homeworld, I think it proves that 3D combat will always be uninteresting because there are no chokepoints and alpha strikes will always be dominant)

                                                        WoW killed the MMO market and has been a net loss on humanity (unlike Eve). RIP Asheron’s Call, DAoC, Ultima Online, and SWG.

                                                        Shrug. It killed it by being better. Though in the case of SWG I read a really interesting piece about how marketing insistence that they had to have a Jedi ruined it.

                                                        1. 1

                                                          Total Annihilation was and always will be better than any of the garbage Blizzard puts out.

                                                          Absolutely agree. Someone in the thread referenced 3D combat and the lack of chokepoints. I agree that “normal” RTS combat doesn’t work in 3D, but that’s why it would be amazing to see more attempts at innovating on that model. Combat more like what you see in the Battlestar Galactica reboot would lend itself to gaming in my opinion: capital ships that rely on scouts and then jump directly into a fight. The flak guns in BSG are also interesting, no magic “shields”, so tactics would play a larger role. Babylon 5 is another example of that model in television. Full disclosure: I haven’t played EVE or Homeworld.

                                                          WoW killed the MMO market and has been a net loss on humanity (unlike Eve). RIP Asheron’s Call, DAoC, Ultima Online, and SWG.

                                                          Also agree, the static world means WoW is essentially a single-player game with a multi-player meta game inside of it (raids). They don’t even make you move through the world to the raid location any more, you can spend 100% of your time in one spot once you hit max level.

                                                          I played WoW like it was my job, mostly alone or with a couple casual people I met in-game until I hit max level, got yelled at by angry nerds in a raid, and quit immediately.

                                                          I remember reading about UO when it first came out and being incredibly excited about being a merchant and “owning” property. I never got to play UO (mom and dad wouldn’t pay for it), and I’m told it didn’t quite live up to its potential, but WoW seems like a regression. Maybe UO was overly ambitious and virtual worlds are just inherently unstable, I don’t know.

                                                          1. 1

                                                            RE the WoW thing, I remember reading back in the day that WoW didn’t increase the MMO market at all, it just took all existing MMO players, killing everything else.

                                                            I felt compelled to finish HL2 (and have replayed the first 2/3rds several times). I got “done” with Farcry pretty quickly in, so that point is a bit shocking to me.

                                                          2. 7

                                                            Wind Waker was better than Ocarina of Time

                                                            Half-Life 1 was better than 2

                                                    Final Fantasy 7 is not as good as you remember it being, and you pretty much forgot everything that happened after leaving Midgar except Aeris dying

                                                            Also, your favorite band isn’t as good as my favorite band.

                                                            1. 2

                                                      Final Fantasy 7 is not as good as you remember it being, and you pretty much forgot everything that happened after leaving Midgar except Aeris dying

                                                      Not true. And who could forget the big-ass monsters emerging and being shot by the big-ass cannon. Music was cool. Characters fairly deep vs other PlayStation games. Chocobo races. No mouths on characters, though, lol.

                                                      Edit: Watching the same summon 100+ times wasn’t cool either.

                                                              1. 2

                                                                Wind Waker was better than Ocarina of Time

                                                                You kiss your mother with that mouth!? :D

                                                                1. 1

                                                                  These are very correct good takes.

                                                                2. 2

                                                                  SimCity 2000 is better than SimCity 3000.

                                                                  Deus Ex aged better than System Shock 2.

                                                                  1. 1

                                                            It has aged quite well. I had no problem getting it to work, and it was still a pleasant experience (with no nostalgia involved, since it was my first time playing a Deus Ex game).

                                                                  2. 2

                                                                    You devil.

                                                                  1. 6

                                                                    this is basically kibozing in 2017, isn’t it

                                                                    1. 26

                                                                      If these trade-offs seem familiar, they’re straight from the worse is better essay. It turned out that correctness, simplicity of the interface, and consistency are the wrong metrics of goodness for most users.

                                                                      I would say it is more likely that people who care about these things prefer PostgreSQL.

                                                                      1. 14

                                                                        I evaluated RethinkDB twice at two different jobs and wound up not choosing it because it wasn’t fast enough.

                                                                        I’m a Postgres guy who wouldn’t touch MongoDB with a ten foot pole. I was only evaluating RethinkDB because Aphyr’s evaluation showed it was correct and wouldn’t lose my data. But in the end I need both speed and correctness. I put RethinkDB in the “look at this later” category, and I think a lot of other people did too.

                                                                        1. 8

                                                                           Seconded. I heard an interview a while back with one of the RethinkDB folks, talking about the realtime push capabilities. I thought the same thing I think every time I hear about a new database: “That sounds cool, but… is it going to lose my data?”

                                                                          It’s cool to be able to push out updated data as soon as it’s available. But not if the data isn’t reliable. And if I have to use something else as my primary data store to guarantee reliability, then unless it supports push notifications, I’m back to polling the primary store.

                                                                          I’m not even close to being a database admin (even though I wrote “Protect Your Data with PostgreSQL Constraints”). I don’t know much about things like sharding and clustering. I can’t explain all the details of ACID. But I know I want my database to never lose data or store incorrect data, and my default (perhaps not perfectly informed) choice is always going to be the thing I trust to do that. Nothing will pry my fingers off PostgreSQL unless it convinces me it’s at least as reliable - that needs to be Point 1 before I start caring.

                                                                          (If for some reason I don’t need reliable storage - eg for a cache - I might pick something else.)

                                                                          1. 8

                                                                            Speaking as a Postgres bigot, I would very much be interested in a different data store that addressed different pain points than Postgres, if it were as well put together as Postgres.

                                                                            1. 8

                                                                              This is a great example of the maxim that who you compete against is really decided by your users, not you.

                                                                              1. 3

                                                                                 yeah, and he’s saying palpable speed is a pain point, which is a BIG problem if you’re not even a relational db.

                                                                                1. 5

                                                                                   For single-document insert and retrieval, RethinkDB always seemed extremely fast to me. It was searches that were painful, and my impression is that this isn’t a use case all other document datastores excel at. RethinkDB had a very complete querying API based on JavaScript. But if you indexed a field, you had to tell RethinkDB which index to use; it wouldn’t just “figure it out” like other systems. That seemed like a bridge too far to me at the time.

                                                                                2. 3

                                                                                  RethinkDB was also AGPL, as I recall. That is a bit of a hard sell to businesses when PostgreSQL is a viable alternative, regardless of the fact that the drivers were all apache licensed and 99.9% of people aren’t going to be modifying the database code themselves anyway.

                                                                                  Honestly I have always been a bit surprised mongodb has done as well as it has, given that it also uses the AGPL license. I chalk it up to mongo lucking out and being associated early on as part of the js-everywhere/node hype train stack.

                                                                                  1. 7

                                                                                    Both are dual-licensed under AGPL and a commercial license. I don’t think the expectation was that many large companies would use the AGPL version, but that they’d license the commercial version for enterprise use. Basically a variant of the traditional GPL-as-poison-pill / commercial-license-as-alternative model, but with a stronger brand of poison. Certainly that’s what most large users of MongoDB are doing.

                                                                                    1. 1

                                                                                      I understand that, but I’m not sure many businesses would be comfortable with it regardless, even presuming they had people that likewise understood that.
                                                                                      Apparently I am not alone in thinking it can occasionally be a problem either.

                                                                                       EDIT: Further, it seems like it was super hard (if not impossible?) to actually buy a non-AGPL license, which I am sure didn’t help.

                                                                                1. 7

                                                                                  I’ve only had AMPs come up in search results a few times, but it has literally never worked for me. I just get failure pages on Firefox Android + ublock o, so it’s become an indicator of something to avoid if I want to get to content.

                                                                                  It also doesn’t seem like something I’d work too hard to target as a content publisher, either… I’d rather just take responsibility for good mobile experiences.

                                                                                  1. 4

                                                                                     Same. AMP literally just doesn’t work for me at all. I have to reload Google in desktop mode for it to even be usable. I’m legitimately tempted to switch to Bing at this point to avoid it.

                                                                                  1. 10

                                                                                    Perhaps one should wait until a version number higher than “0.0.1” (and perhaps a few audits and independent implementations) to declare something “Standard”, secure, and long-lasting. It does little good to confuse aspirations with actual features.

                                                                                    (Also, this runs into the problem where if your file format is simple enough, everything sufficiently complex implemented on top of it just becomes a meta-file-format with all the same compatibility problems as before, and you’ve implemented a filesystem, not a file format. Compare “XML”, which is technically a “simple file format”, but actual document formats built on it are things like Office XML, which can still be gargantuan monstrosities.)

                                                                                    1. 1

                                                                                      Yeah, I think that this has great potential, but probably hold off on real world usage with sensitive data until it matures - including audits and independent implementations, as you say.

                                                                                    1. 8

                                                                                      I’m honestly really curious as to what the problem is here. It clearly escaped internal testing, but even more interestingly it’s nondeterministic! Laptops with terrible battery life are a dime a dozen, but how often do you get a device with bad battery life sometimes, but great the rest of the time? Not even “some units have bad battery life” or “the device has bad battery life when doing X”.

                                                                                      The symptoms are bizarre enough that there’s got to be some interesting bug behind it all.

                                                                                      1. 5

                                                                                        I’m honestly really curious as to what the problem is here. It clearly escaped internal testing, but even more interestingly it’s nondeterministic!

                                                                                        No, it didn’t escape internal testing, their testing methodology is simply crap.

                                                                                        1. 1

                                                                                          I think they said it only happens when using Safari, as they got better results with Chrome. That might be the culprit.

                                                                                          1. 3

                                                                                            John Gruber

                                                                                            Once our official testing was done, we experimented by conducting the same battery tests using a Chrome browser, rather than Safari. For this exercise, we ran two trials on each of the laptops, and found battery life to be consistently high on all six runs. That’s not enough data for us to draw a conclusion, and in any case a test using Chrome wouldn’t affect our ratings, since we only use the default browser to calculate our scores for all laptops. But it’s something that a MacBook Pro owner might choose to try.

                                                                                            This is crazy too. Whatever the benefits of Chrome are, everyone knows it’s an energy hog. There is no way that using Chrome should result in better (and more consistent) battery life than Safari.

                                                                                            I would want to see more complete tests.

                                                                                        1. 8

                                                                                          You say “liberal arts–type curriculum that’s less impressive to employers”, but the statistically top schools (in terms of job placement and salary) for engineers and CS majors are often exactly the programs you’re referring to. For example, my alma mater, Harvey Mudd, is a “liberal arts school” with hefty emphasis on humanities and theoretical classes, but ranks ~3rd in the nation in that category (http://www.payscale.com/college-salary-report/bachelors).

                                                                                             Beyond that, pretty much seconding everything michaelochurch said. There’s a pretty significant chance you won’t be into the same things in 1 year, let alone 4. There are even nontrivial odds you won’t graduate with a CS degree at all, but rather with something else! Be flexible. Don’t insist too much on defining your entire future based on what you’re into this week.

                                                                                          1. 10

                                                                                               Can we call it what it is: a compiler? The term “transpiler” has come into vogue lately, though it has a history dating back to the ’80s. The problem is that it offers no benefit in use over the word “compiler,” which of course means: “transforms source code written in a programming language into another computer language.”

                                                                                               The amount of jargon in computing is already so large that the “at about the same abstraction level” distinction that “transpiler” draws isn’t helpful to new programmers, and is actually a hindrance, in my opinion.

                                                                                            Regarding this project, it seems really nice! I’ve seen many attempts to build systems programming languages (using close to the metal for the definition of sysprog) using s-expressions as the syntax, but don’t recall ever seeing a skin for C like this!

                                                                                            1. 8

                                                                                              if a compiler that goes from language A to language B is a transpiler, is a compiler that goes from a language back to itself (performing some transformation) a cispiler? enquiring minds wish to know

                                                                                              1. 1

                                                                                                If you really want to introduce the term “cispiler”, come up with something that takes language A as input and outputs language A.

                                                                                                1. 3

                                                                                                  come up with something that takes language A as input and outputs language A.

                                                                                                  That’s exactly what she said:

                                                                                                  a compiler that goes from a language back to itself

                                                                                                  1. 2

                                                                                                    Like a pretty-printer for an IDE! These things exist.

                                                                                                    1. 1

                                                                                                      Formatting is not really a significant transformation. “Gofix” does some actual rewriting, but it can be argued that it takes an older version of the language and outputs the newer one.

                                                                                                2. 2

                                                                                                  Can we call it what it is: a compiler. The term “transpiler” has come into vogue lately, and has a history dating back to the 80s. The problem is that it offers no benefit in use over the word “compiler,” which of course means: “transforms source code written in a programming language into another computer language.”

                                                                                                  We call “compiler” the software that outputs assembly and “transpiler” the one that outputs some other programming language (that may be compiled to assembly in another stage). Sometimes the distinction is important.

                                                                                                  1. 9

                                                                                                    I’m well aware of the distinction. I just don’t think it’s necessary.

                                                                                                   I’ll assume you know of Yacc. Did you know it’s actually an acronym for “Yet Another Compiler Compiler”? The history of “compiler compilers” is interesting, and tools that aren’t just parser generators exist and create full-blown interpreters and other compilers from a descriptive language. These don’t, typically, generate assembly code, but they are called compilers. Therefore, I reject your definition of compiler.

                                                                                                    1. 0

                                                                                                      My definition is relevant in other contexts. Take programming language implementations, for example. When we talk about “interpreted” vs. “compiled” we mean “compiled to Assembly”, not “compiled to Javascript” or “compiled to bytecode”.

                                                                                                      So if we want to use fewer words, we can say that TypeScript is transpiled and Rust is compiled and be sure that everybody understands the difference.

                                                                                                      1. 2

                                                                                                       I don’t actually know of a compiler that produces an executable. In all cases I know of, the compiler generates some sort of “object code” which needs to be “linked” to run. Python to bytecode and TypeScript to JavaScript, then, are really no different, since you “interpret” JavaScript just the same as Python interprets bytecode.

                                                                                                        A typical compiler, by itself, does nothing but rerepresent some source code as something else, which some other tool can do something else with.

                                                                                                    2. 4

                                                                                                     Then a transpiler can be a subset of compiler, where either term is correct but “transpiler” is used when the distinction is really important. Most people just want to input X and get Y out the other end.

                                                                                                      1. 1

                                                                                                        Then a transpiler can be a subset of compiler

                                                                                                        It might make more sense the other way around: a compiler is a particular type of transpiler.

                                                                                                        1. 3

                                                                                                          It could go either way. Compiler is the oldest term. Might factor in.

                                                                                                          1. 1

                                                                                                            Transpilation is often considered a compilation strategy, like ahead of time or just-in-time compilation.

                                                                                                      2. 1

                                                                                                       I agree with you. Called mine a source-to-source compiler on Hacker News, and a compiler dev corrected me that what I described was a “transpiler” while admitting in the same comment that they do “essentially the same thing.” Begs the obvious question of why he’d bother to correct me…

                                                                                                        1. 9

                                                                                                          He did it because like a lot of people (myself included) in this industry, he’s a blowhard.

                                                                                                      1. 3

                                                                                                        As a user, I’m much happier to have a long-lasting phone that might forget a couple of seconds of data than a device that has to be trashed after a year of use.

                                                                                                         A few seconds of data loss is the difference between “I can access my mobile app’s data after a sudden power loss” and “all my work in a particular app is now corrupt, wtf!?”. Sure, if the developer is using a power-loss resistant (not immune) data storage solution like SQLite, then the problem is alleviated somewhat, but I specifically buy SSDs with capacitors for that last-few-seconds-of-write protection to prevent data loss during a power outage; I’d prefer my phone did the same.
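
                                                                                                         For what it’s worth, a minimal sketch of the SQLite knobs involved (illustrative settings only, not a claim about how any particular app configures them):

                                                                                                             #include <sqlite3.h>   // build with -lsqlite3
                                                                                                             #include <cstdio>

                                                                                                             int main() {
                                                                                                                 sqlite3* db = nullptr;
                                                                                                                 if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;

                                                                                                                 // WAL journaling plus synchronous=FULL asks SQLite to fsync at each
                                                                                                                 // commit, trading write throughput for durability across a sudden power cut.
                                                                                                                 sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);
                                                                                                                 sqlite3_exec(db, "PRAGMA synchronous=FULL;", nullptr, nullptr, nullptr);

                                                                                                                 sqlite3_exec(db,
                                                                                                                     "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
                                                                                                                     "INSERT INTO notes(body) VALUES('written before the power went out');",
                                                                                                                     nullptr, nullptr, nullptr);

                                                                                                                 sqlite3_close(db);
                                                                                                                 return 0;
                                                                                                             }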

                                                                                                         NAND write endurance failures are also horribly overblown (both in this article and elsewhere). I used my Crucial M4 SSD as the primary drive & swap file for years, and after dozens of terabytes of writes it was still going strong (and relatively fast, though I don’t know how much performance degradation it had suffered). Then that laptop was stolen, but alas, no amount of fsync would have prevented that data loss :P

                                                                                                         Also, as the author noted in a small footnote update, a slightly slower FS will not manifest itself as user-facing UI latency, as Android (like iOS) does not run animations on the main OS thread, so the title of “lags 10x more” is really a misnomer.

                                                                                                        1. 1

                                                                                                          The author points out that phones really are a special case… when do they ever experience sudden power loss? Particularly when the battery is not removable, it seems less likely than for desktops.

                                                                                                          1. 5

                                                                                                            When they crash and you have to force reboot them.

                                                                                                            1. 1

                                                                                                              When the battery dies because I’ve been using it all day and I’m on a bus or something rather than near a wall outlet.

                                                                                                              1. 1

                                                                                                                Right, but that isn’t sudden power loss. The OS can tell that the battery is low and has the opportunity for a graceful shutdown.

                                                                                                                1. 3

                                                                                                                  Right, but that isn’t sudden power loss. The OS can tell that the battery is low and has the opportunity for a graceful shutdown.

                                                                                                                  Lithium-ion batteries have a “cutoff voltage” for “out of power” (~3.1 V); however, getting to that voltage is NOT a linear curve with a predictable time range. The microcontroller in a lithium-ion battery pack uses many heuristics to guess at the remaining battery percentage based on many criteria (including, but not limited to, current power draw and past behaviour), and it is susceptible to significant drift over time.
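
                                                                                                                  A toy sketch of why those estimates drift (invented for illustration, not real fuel-gauge firmware; the capacities and error figures are made up):

                                                                                                                      /* Toy illustration (not real fuel-gauge firmware) of why the reported
                                                                                                                       * battery percentage drifts: the gauge integrates measured current
                                                                                                                       * against a *rated* capacity, but an aged cell holds less than its
                                                                                                                       * rating and the current sensor is never perfect. Numbers invented. */
                                                                                                                      #include <stdio.h>

                                                                                                                      #define RATED_CAPACITY_MAH  2000.0  /* what the gauge believes       */
                                                                                                                      #define ACTUAL_CAPACITY_MAH 1700.0  /* what a worn cell really holds */
                                                                                                                      #define SENSE_BIAS          0.02    /* 2% error on current readings  */

                                                                                                                      int main(void)
                                                                                                                      {
                                                                                                                          double reported = 1.0;  /* gauge's state of charge (100%) */
                                                                                                                          double actual   = 1.0;  /* true state of charge           */
                                                                                                                          double draw_ma  = 200.0;

                                                                                                                          /* Drain the cell hour by hour until it is really empty. */
                                                                                                                          for (int hour = 1; actual > 0.0; hour++) {
                                                                                                                              double measured = draw_ma * (1.0 - SENSE_BIAS);    /* under-reads */
                                                                                                                              reported -= measured / RATED_CAPACITY_MAH;         /* 1 hour step */
                                                                                                                              actual   -= draw_ma  / ACTUAL_CAPACITY_MAH;
                                                                                                                              if (actual < 0.0) actual = 0.0;
                                                                                                                              printf("hour %2d: reported %5.1f%%  actual %5.1f%%\n",
                                                                                                                                     hour, reported * 100.0, actual * 100.0);
                                                                                                                          }
                                                                                                                          /* The cell is dead, yet the gauge still reports a double-digit
                                                                                                                           * percentage: the "stuck at 17%, then sudden shutdown" experience. */
                                                                                                                          return 0;
                                                                                                                      }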

                                                                                                                  I’m surprised that you have never seen a case where your phone thinks it still has X% of battery left (where X is clearly bigger than 1%) and the phone shuts off suddenly. It’s happened to me on a number of occasions, and I’m glad none of those cases resulted in any data loss.

                                                                                                                  I have a three-year-old iPhone 5S, and the most recent egregious occurrence I can think of is when it sat at 17% for hours and then suddenly shut down.

                                                                                                                  And the iPhone is well regarded when it comes to battery management; I can’t imagine how much less reliable the average Android phone’s battery controller is at reporting an accurate remaining charge. Sure, we all know about planned obsolescence, and perhaps I should get a new phone, but that would run counter to one of the objectives the author of the article laid out: a long-lasting phone (by lifespan, not battery life).

                                                                                                                  1. 2

                                                                                                                    Funnily enough, the only friends of mine who complain about their battery life are iPhone users. ¯\_(ツ)_/¯

                                                                                                                    I’ve only ever owned Samsung smartphones (Galaxy S3 and S5) and have never experienced sudden power loss, but thanks for setting me straight. I didn’t realize it was common.

                                                                                                          1. 5

                                                                                                            I’m not quite so sure about competition generating “good” code – certainly good at maximizing metrics, but I’d be rather afraid of any of my competitive Shenzhen I/O solutions being actually implemented!

                                                                                                            For example, one of them literally relies on the subtle behavior of logic gates to produce an ad hoc flip-flop to store data between time units. It beats Laser Tag in 8 yuan / 178 power / 6 lines of code – I have yet to see anyone I know even get close – but it’s complete hyperoptimized flaming garbage. And I’m slightly doubtful it’d actually be provably correct in all cases, too.

                                                                                                            This is at least slightly less of a problem than in TIS-100, where you could implement actually-wrong solutions that were probabilistically correct and keep running the tests until they passed.

                                                                                                            A good caution on using metrics to try to create “best” sorting algorithms (the example given in the article) is TimSort, which was used for years in Java, Python, and countless other implementations… only to be proven formally incorrect a few years ago: http://www.envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/ . Remember, test cases cannot prove a program correct: they can only show that it works in a finite set of cases.
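
                                                                                                            To make that last point concrete, here’s a deliberately broken C “sort” (my own toy, nothing to do with TimSort’s actual invariant bug) that still passes a plausible-looking test suite:

                                                                                                                /* Deliberately broken "sort" (invented for illustration): it only does a
                                                                                                                 * single bubble pass, which happens to be enough for every array in the
                                                                                                                 * test suite below. */
                                                                                                                #include <stdio.h>

                                                                                                                static void bogus_sort(int *a, int n)
                                                                                                                {
                                                                                                                    for (int i = 0; i + 1 < n; i++) {      /* one pass only: not a sort */
                                                                                                                        if (a[i] > a[i + 1]) {
                                                                                                                            int t = a[i];
                                                                                                                            a[i] = a[i + 1];
                                                                                                                            a[i + 1] = t;
                                                                                                                        }
                                                                                                                    }
                                                                                                                }

                                                                                                                static int is_sorted(const int *a, int n)
                                                                                                                {
                                                                                                                    for (int i = 0; i + 1 < n; i++)
                                                                                                                        if (a[i] > a[i + 1])
                                                                                                                            return 0;
                                                                                                                    return 1;
                                                                                                                }

                                                                                                                int main(void)
                                                                                                                {
                                                                                                                    int t1[] = {1, 2, 3};          /* already sorted           */
                                                                                                                    int t2[] = {2, 1, 3};          /* one adjacent swap needed */
                                                                                                                    int t3[] = {1, 3, 2};          /* one adjacent swap needed */

                                                                                                                    bogus_sort(t1, 3); bogus_sort(t2, 3); bogus_sort(t3, 3);
                                                                                                                    printf("tests pass: %d\n",
                                                                                                                           is_sorted(t1, 3) && is_sorted(t2, 3) && is_sorted(t3, 3));

                                                                                                                    int t4[] = {3, 2, 1};          /* the case the suite never tried */
                                                                                                                    bogus_sort(t4, 3);
                                                                                                                    printf("but {3,2,1} -> {%d,%d,%d}\n", t4[0], t4[1], t4[2]);
                                                                                                                    return 0;
                                                                                                                }

                                                                                                            Three green tests, one “sort” that leaves {3,2,1} unsorted.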

                                                                                                            1. 10

                                                                                                              We’ve heard the reasons for the 16 GB cap before (power consumption).

                                                                                                              The thing Apple isn’t realizing is that developers use their laptops at their desks, plugged in, for the vast majority of the day. Even when flying, most flights have power outlets.

                                                                                                              The compromise of less RAM for an hour or two of extra battery life isn’t worth it.

                                                                                                              I use a laptop as my main dev machine because I can close it at the end of the day and put it in my bag and open it at home if something comes up.

                                                                                                              Maybe I’m in the minority though.

                                                                                                              1. 16

                                                                                                                I hate to say so, but anything referring to “developers” as if they were a homogeneous group has to be rejected outright.

                                                                                                                Many developers don’t need 32 GB of RAM. Others use their device unplugged all the time. Apple doesn’t cater to a specific kind of developer, and I think the size of that group is widely overestimated and, from a business standpoint, negligible as a target for hardware development.

                                                                                                                1. 6

                                                                                                                  indeed. my work MBP has 8GB of RAM – and for most things it’s enough! I unplug it and use it in the car, or around the house, or on a couch, or in bed, or in coffee shops. I often run down the battery in normal use; if I had to pick between an upgrade to 16GB or 2 hours of extra battery life, I’d be hard-pressed to choose.

                                                                                                                  but I know that my experience is just mine; I don’t need tons more RAM because my tools don’t demand them. but some tools do. and I move around a lot – but not everyone does. I like the macbook pro primarily because it strikes a fairly good point for me in the pareto-optimal curve of performance-when-plugged-in vs efficiency-when-unplugged – but not everyone needs that point on the curve, of course.

                                                                                                                  a similar point of contention is people who complain the MBP doesn’t have a powerful enough GPU option. for some applications, it sure doesn’t, and sometimes when I play games on mine, I really wish its GPU was better. But stuffing a 150 watt GPU in a laptop completely changes its entire design (I have a gaming laptop, and the experience is night and day!), requiring way more mass, a different structure, different tradeoffs – and would definitely make the laptop worse for a whole lot of use-cases. and there’s only so much area of the curve one laptop can cover, sadly.

                                                                                                                  1. 2

                                                                                                                    This. I’ve never felt a need for more than even 8GB so far. I’d much rather have 8 or 16GB RAM and know that the battery life is long enough that I can do a full day’s worth of work without a recharge, so I can set up anywhere and not have to worry about where the nearest outlet is or whether I even brought a power adapter.

                                                                                                                    Honestly, professional developers are already a small minority of Macbook users, and developers who really benefit from 32GB RAM are a small minority within that group. I don’t blame Apple at all for writing them off for now. If you really need 32GB, maybe what you need is a full desktop-replacement laptop, designed for max computing power while sacrificing battery life and portability. They’re out there, but Apple isn’t making them.

                                                                                                                    1. 3

                                                                                                                      Personally, 16GB gives me a bit more headroom but I could probably get by with 8GB in this day of PCIe NVMe SSDs (oh, the horror of having to survive with 8GB RAM!). I don’t typically run more than 1-2 VMs concurrently though.

                                                                                                                      Seriously though, I’m guessing that the next generation of MBPs in this new form factor will almost certainly support more RAM. By which time any issues with the new chassis, keyboard, Thunderbolt 3, Touch Bar, etc will be worked out. That’s the laptop I’ll be looking at, not this first generation model. I can understand the complaints though, particularly amongst those who had held out for the much anticipated Skylake-based refresh.

                                                                                                                      1. 1

                                                                                                                        Yeah I expect the next update of these will be the better deal. I don’t mind the RAM personally, but they seem kinda overpriced, and there isn’t much support for USB-C/Thunderbolt 3 yet. Quite a few people have been complaining about the keyboards too.

                                                                                                                    2. 1

                                                                                                                      I don’t want to lump developers together categorically; maybe I should have said s/developers/power users/g.

                                                                                                                      The larger point is choice: by putting a hard stop at 16 GB, they’re eliminating an entire segment of the market that does want 16+ GB of RAM in their portable desktop that runs OS X.

                                                                                                                    3. 10

                                                                                                                      Minority maybe, but you’re basically using your laptop as a portable desktop in my book. You’re just using your computer in different places; that it is a laptop is incidental to your use patterns. I’m in the opposite camp: hell no to more RAM for less battery.

                                                                                                                      As for me, I do use my laptop unplugged, right now for example. Losing about 30% of my battery life to DDR4 memory versus low-power DDR3 makes me wonder why I should bother buying a laptop at all; it’d just be a portable desktop that can barely survive a trip on a plane.

                                                                                                                      Thing is, I want 32 GiB of RAM too, but I don’t want to lose that much portability. And given this is more on Intel, I can’t blame Apple for this one. That said, it’s odd that even Apple can’t push/prod/shove/provoke/inspire/bribe/whatever Intel into shipping a chipset that does support 32 GiB. It either speaks poorly of Intel for being effectively useless at gauging their own market, or of Apple for not pursuing an alternative like ARM to provide it to us.

                                                                                                                      1. 4

                                                                                                                        I’m a developer as well, but I can’t picture myself requiring more than 16GB of RAM for any dev-related tasks…

                                                                                                                        1. 3

                                                                                                                          Even if the primary use case is unexpected, it seems clear that the MBP, especially in the last few iterations, is built to be a “portable” device. The form and function of the product is to be as powerful as it can be without sacrificing portability (lightness, size, and battery life). The fact that there is so much emphasis on the display itself tells me that they expect you to be looking at the main display.

                                                                                                                          Someone can correct me on this, but I believe Apple employees tend to use their laptops in a portable way rather than all being connected to large displays. Even if the use case outside of Apple is usually different, the influence from inside is likely stronger.

                                                                                                                          1. 1

                                                                                                                            RAM != space or weight.

                                                                                                                            The point is that RAM == power draw, which would necessitate either a larger battery or less battery life.

                                                                                                                            I would prefer less battery life.

                                                                                                                        1. 5

                                                                                                                          The questions assume a particular set of implementation characteristics, so the post ought to have the title “So you think you know how a particular C implementation works?”

                                                                                                                          More than you want to know about C.

                                                                                                                          1. 14

                                                                                                                            More specifically, it does not offer “Implementation-defined behavior” or “undefined behavior”—the correct answers—as options.

                                                                                                                              The answers to these questions are not unknowable; the semantics are just not defined by ISO C. But knowing ISO C in a vacuum isn’t helpful: in order to actually write C, you need to be familiar with an implementation (your compiler).

                                                                                                                            I get that it’s popular to rag on C these days (a lot of the criticisms are well-founded after all), but quizzes like this offer no benefit to the discussion.
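
                                                                                                                              For anyone who hasn’t bumped into the distinction, a couple of stock examples (mine, not taken from the quiz):

                                                                                                                                  /* Stock examples of the two categories (my own, not from the quiz). */
                                                                                                                                  #include <stdio.h>
                                                                                                                                  #include <limits.h>

                                                                                                                                  int main(void)
                                                                                                                                  {
                                                                                                                                      /* Implementation-defined: the result exists and your compiler
                                                                                                                                       * must document it, but ISO C does not pin it down. */
                                                                                                                                      printf("sizeof(int) = %zu\n", sizeof(int));   /* 2? 4? 8?                */
                                                                                                                                      printf("-7 >> 1     = %d\n", -7 >> 1);        /* right shift of negative */
                                                                                                                                      printf("char signed? %d\n", (char)-1 < 0);    /* plain char signedness   */

                                                                                                                                      /* Undefined: ISO C says nothing at all; the compiler may assume
                                                                                                                                       * it never happens and optimize accordingly. */
                                                                                                                                      int x = INT_MAX;
                                                                                                                                      /* x = x + 1;    signed overflow: undefined behavior */
                                                                                                                                      (void)x;
                                                                                                                                      return 0;
                                                                                                                                  }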

                                                                                                                            1. 9

                                                                                                                              Agreed, this is a terrible “quiz”. C has a helluva lot of real pitfalls… but most of these aren’t real pitfalls. “What would the Deathstation 9000 do” isn’t really a useful question. Undefined behavior is one thing, but “ah, but you see, what if you were running this code on a PDP-11? what then?” is truly silly nitpickery.

                                                                                                                              Much of programming C -is- knowing what the typical implementation behavior is, for better or worse.

                                                                                                                              1. 4

                                                                                                                                A big problem is that typical implementation behavior keeps changing, as the developers of GCC and Clang seem hell-bent on taking any undefined behavior and using it to shave off those last 3 or 4 instructions, regardless of how many nasal demons are released. New optimizations that break existing code make it all the more important to avoid undefined behavior.
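
                                                                                                                                The canonical example (paraphrased from many write-ups, not from the linked post) is an overflow check that the optimizer is allowed to delete:

                                                                                                                                    /* Classic UB-driven optimization, paraphrased from many write-ups.
                                                                                                                                     * Because signed overflow is undefined, a compiler may assume
                                                                                                                                     * x + 1 > x is always true for signed x and remove the branch. */
                                                                                                                                    #include <limits.h>
                                                                                                                                    #include <stdio.h>

                                                                                                                                    int will_overflow(int x)
                                                                                                                                    {
                                                                                                                                        return x + 1 < x;   /* looks like a check; UB when x == INT_MAX */
                                                                                                                                        /* the well-defined check is simply: x == INT_MAX */
                                                                                                                                    }

                                                                                                                                    int main(void)
                                                                                                                                    {
                                                                                                                                        /* Depending on compiler and optimization level this may print 1
                                                                                                                                         * (the "expected" wraparound) or 0 (the check was optimized away). */
                                                                                                                                        printf("%d\n", will_overflow(INT_MAX));
                                                                                                                                        return 0;
                                                                                                                                    }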

                                                                                                                                I had really high hopes for John Regehr’s ‘Friendly C’ dialect, but it seems dead in the water: http://blog.regehr.org/archives/1180

                                                                                                                            2. 1

                                                                                                                              I think that that’s the point. :)

                                                                                                                            (Spoiler alert.) The problem is that fleshing out “I don’t know” (the correct answer) into “implementation-defined” as @halosghost suggests would give away the “twist”. Many of these constructions look like well-defined (i.e., not undefined or implementation-defined) code to intermediate-level C programmers like me, but they are not.