1.  

    I haven’t given hours for probably a decade. Since then I’ve been at companies where, as a team, we only need to provide story estimation points in a Fibonacci range. I think this is pretty normal practice for development teams following the agile/scrum methods.

    I suppose internally this is all converted to hours by managers looking at the team’s velocity, although I’m not sure, and thankfully I never have to worry about it.

    1. 7

      Man, everyone uses Fibonacci numbers for estimation, but it seems really weird to me – in practice it just means anything but 4.

      1.  

        My team’s been using Fibonacci estimates, but I don’t really think it makes much sense, though in practice it somehow works. For example, four 2-point stories rarely take as long as one 8-point story. So what’s the point of measuring the total points that a team accomplishes at the end of a sprint? Maybe our estimation errors add up in a way that cancels out the noise.

    1.  

      Yeah, it’s pretty frustrating that software has seemingly gotten as wasteful as it has. I think this is a consequence of the power of abstraction – we’re building on the shoulders of giants on the shoulders of giants. That silky smooth text editor on a 286 was essentially 100% bespoke, with every line of code contributing. Now our text editors are built with GUI toolkits and powerful yet general-purpose edit controls… so we drag in all sorts of stuff that we don’t need. This is the sacrifice we make at the altar of code reuse.

      Whether or not this is a good thing is up for debate. But I think it’s unavoidable given the particular combination of software demand, economies of scale in hardware, and labor pool in the world we’re stuck with.

      1.  

        It sounds like the challenge going forward is collapsing these towers of dependencies. Like the Zen of Python says:

        Flat is better than nested.

        1.  

          Bouncing off of your statement about text editors: I suggest you check out Kakoune, a modal code editor that packs a punch.

        1. 6

          Holy misleading axes. The post-warmup improvements look impressive and significant until you realize that the y-axis range was chosen to make them look that way. The LuaJIT improvement is about 7 milliseconds, or around 1.2%. The other improvements are on a similar scale.

          I don’t think these orders-of-magnitudes even come close to supporting the thesis of the article.

          1. 4

            I don’t think this is entirely fair - yes, the differences are not dramatic, but as the article says “We fairly frequently see performance get 5% or more worse over time in a single process execution. 5% might not sound like much, but it’s a huge figure when you consider that many VM optimisations aim to speed things up by 1% at most. It means that many optimisations that VM developers have slaved away on may have been incorrectly judged to speed things up or slow things down, because the optimisation is well within the variance that VMs exhibit.”

            1. 5

              I might be a bit more generous if the article would call out the actual differences, rather than just point at misleading graphs – seriously, with an honest y-axis it’d be hard to even notice them.

              Regardless, while a 5% improvement is not trivial, it is trivial in the context of the article, which is people complaining about slow VMs. That’s in the noise as far as general programming language performance goes.

          1. 1

            Looks great! But I don’t understand the rationale behind not allowing genericity on struct methods. Why is that case any more complicated than on normal functions, which they say this proposal will support?

            1. 3

              I think it is due to reflection and their “dual-implementation” constraint. In Go you can query any type for its number of methods (and iterate over them) using reflect.Type. It would be impossible to implement the reflect.Type.Method* family of methods for a public type under a static generics implementation strategy: when compiling a package, you have no way of knowing how many distinct methods the type will end up with in the final executable.

              1. 1

                Ah, I see. Seems like an odd concession to make – I would think the utility of generic methods would outweigh the use of reflection. Also, what would have happened if generic functions were also at odds with the reflection API? Would that have torpedoed the whole thing?

                1. 2

                  Go has a philosophy of only adding orthogonal features.

                  They aren’t willing to have ‘reflection works except on generic methods’, and they aren’t going to break backwards compatibility on reflection.

                  1. 2

                    Well sure, but now they’ll have ‘generics work except on methods.’ As annoying as it seems I respect the unwillingness to break backwards compatibility, though.

                  2. 1

                    They want to maintain compatibility with Go1 which means that all the old APIs should still produce correct results.

              1. 6

                I really, really want to like Nim but there are just too many oddities for me to seriously dig in. The biggest one is probably the story for writing tests: given its import and visibility rules, it’s really awkward to test functions that aren’t explicitly exported. The official position is, “don’t test those functions,” which I find somewhat naive.

                The standard library is a bit unwieldy, but maybe it’s just a maturity issue. For instance, the “futures”-like systems for threads, processes, and coroutine-style processing are all incompatible with one another. What?

                Finally (and maybe this is just taste), there’s heavy use of macros everywhere. The Nim web server (jester), for instance, is heavily macro-ized, which means error messages are odd and composability suffers pretty severely.

                1. 4

                  Don’t forget the partial case sensitivity. “Let’s solve camelCase vs snake_case by allowing you to write any identifier in either, at any time! Yay!”

                  1. 2

                    it’s really awkward to test functions that aren’t explicitly exported. The official position is, “don’t test those functions,” which I find somewhat naive.

                    I don’t know if that’s the official position, but you can test internal functions in the module itself in the when isMainModule: block. Here’s an example.

                    1. 3

                      You can also include the module and test it that way. So there are definitely ways to do it. I don’t think there are any official positions on this.

                      1. 1

                        Hmm, I seem to remember include getting kind of messy on larger code bases – I wish I could be more specific, but it was a while ago. By “official position” I meant the responses from the developers on the Nim forum, so maybe that was a bit heavy-handed.

                  1. 13

                    There’s a quote I like that I can’t remember from where:

                    Thirty years ago “it reduces to 3SAT” meant the problem was impossible. Now it means the problem is trivial.

                    1. 2

                      I wrote something vaguely like that a few years ago, though I’m sure I wasn’t the first to observe it:

                      SAT, the very first problem to be proven NP-complete, is now frequently used in AI as an almost canonical example of a fast problem

                      1. 1

                        Why is that? Because computers are much faster, or better algorithms?

                        1. 3

                          We have faster hardware and better algorithms now, yes. But the real reason is that early theoretical results, which emphasized worst-case performance, had scared the entire field off even trying. Those complexity-class results are true, but misleading: as it turns out, the “average” SAT instance for many real-world problems probably is solvable. Only when this was recognized could we make progress on efficient SAT algorithms. Beware sound theories misapplied!

                      1. 2

                        Cool, I’m surprised a SAT solver can do a sudoku puzzle in just a millisecond or so. It’s got to be what, a couple thousand clauses? I guess it all collapses pretty quickly after a couple of values slot in.

                        1. 4

                          Yeah, one of the nice things about SAT solving is the efficient propagation of information: once a clause forces a variable’s value, that assignment quickly updates all the clauses that were blocked by not knowing it.
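
                          To make that concrete, here’s a toy sketch of that propagation (unit propagation) in C – the clause set and representation are invented for illustration, not taken from any real solver:

                          ```c
                          /* Rough sketch of unit propagation: literals are +v (var v true) or -v
                           * (var v false); 0 terminates a clause. Toy clause set, not a real solver. */
                          #include <stdio.h>
                          #include <stdlib.h>

                          #define NVARS 4
                          static int assignment[NVARS + 1];       /* 0 = unknown, +1 = true, -1 = false */

                          static int lit_value(int lit) {         /* +1 satisfied, -1 falsified, 0 unknown */
                              int v = assignment[abs(lit)];
                              return lit > 0 ? v : -v;
                          }

                          int main(void) {
                              /* (x1 or x2) and (-x1 or x3) and (-x3 or x4) and (-x2) */
                              int clauses[4][3] = { {1, 2, 0}, {-1, 3, 0}, {-3, 4, 0}, {-2, 0, 0} };

                              for (int changed = 1; changed; ) {
                                  changed = 0;
                                  for (int c = 0; c < 4; c++) {
                                      int unknown = 0, last = 0, satisfied = 0;
                                      for (int i = 0; clauses[c][i]; i++) {
                                          int val = lit_value(clauses[c][i]);
                                          if (val > 0) satisfied = 1;
                                          else if (val == 0) { unknown++; last = clauses[c][i]; }
                                      }
                                      if (!satisfied && unknown == 1) {
                                          /* Unit clause: its one free literal is forced; that new fact
                                           * immediately unblocks other clauses on the next pass. */
                                          assignment[abs(last)] = last > 0 ? 1 : -1;
                                          changed = 1;
                                      }
                                  }
                              }
                              for (int v = 1; v <= NVARS; v++)
                                  printf("x%d = %d\n", v, assignment[v]);
                              return 0;
                          }
                          ```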

                        1. 2

                          I wrote a fast cat once out of actual necessity.

                          I was working with an embedded system where, for installation/testing purposes (details forgotten), we had a service running in a VM that would help bootstrap a system. Part of this setup was a script that would run on the target, read a binary blob from a file descriptor and write it to the device. That reading was done with the target system’s implementation of cat.

                          As it happened, the sending of the binary blob to the target turned out to be a significant bottleneck. After I profiled it, it turned out that cat was taking a long time to read and write the bytes. I looked at the source and found it was using fread and fwrite. I changed it to use read/write, and the transfer time went down significantly. It was great because I’m fairly certain this was part of our automated build system for creating dev images, and the result was that build times went down a lot.

                          So sometimes you really do need a faster cat.
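
                          For anyone curious, the unbuffered approach looks roughly like this – a minimal sketch, not the original code from that system:

                          ```c
                          /* Minimal sketch of an unbuffered copy loop using read()/write() directly,
                           * avoiding stdio's extra buffering layer. Error handling kept simple. */
                          #include <unistd.h>

                          static int raw_copy(int in_fd, int out_fd) {
                              char buf[64 * 1024];
                              for (;;) {
                                  ssize_t n = read(in_fd, buf, sizeof buf);
                                  if (n == 0) return 0;              /* EOF */
                                  if (n < 0) return -1;              /* read error (EINTR handling omitted) */
                                  for (ssize_t off = 0; off < n; ) {
                                      ssize_t w = write(out_fd, buf + off, (size_t)(n - off));
                                      if (w < 0) return -1;          /* write error */
                                      off += w;
                                  }
                              }
                          }
                          ```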

                          1. 2

                            Why were fread/fwrite slow? Aren’t they thin veneers over read/write that do a bit of buffering?

                            1. 2

                              For “buffering”, read “copying”.

                              With read/write, it’s just copying into a buffer (read), and then copying out of that buffer (write).

                              With fread/fwrite it’s copying into a buffer (read), then copying from that buffer to another buffer (fread), then potentially copying that back (fwrite), before finally copying it back into another buffer (write). That can really add up - even the read/write loop is more than we’d want ideally, hence stuff like splice.

                          1. 2

                            After reading the story here on solving Snakebird levels, I’ve convinced myself that it isn’t as hard as the author seems to think. So I’m going to give that a go and be proven wrong, I’m sure.

                            1. 12

                              Say what you will about the Ruby community, but I’ve never seen them fly into a moral panic over a tiny syntax improvement to an inelegant and semi-common construct.

                              The person-years and energy spent bikeshedding PEP 572 are just astoundingly silly.

                              1. 7

                                Say what you will about the Ruby community, but I’ve never seen them fly into a moral panic over a tiny syntax improvement to an inelegant and semi-common construct.

                                Try suggesting someone use “folks” instead of “guys” sometime…

                                1. 13

                                  I switched to folks and do you know how satisfying of a word it is to say? “Hey folks? How’s it going folks? Listen up folks!” I love it.

                                  On the other hand the interns weren’t too keen on “kiddos.”

                                  1. 6

                                    I’ve gotten used to saying “’sup, nerds” or “what are you nerds up to?”

                                    1. 4

                                      A man/woman (or nerd, I guess) after my own heart! This has been my go-to for a while, until one time I walked into my wife’s work (a local CPA / tax service) and said, “What up, nerds?” It didn’t go over so well and apparently I offended some people – I guess “nerd” isn’t so endearing outside of tech?

                                      Thankfully, I don’t think I learned anything from the encounter.

                                      1. 3

                                        It’s not endearing within tech to anyone over 40.

                                        1. 2

                                          I generally only use it in a technical setting – so within my CS friend group from college, other programmers at work, etc… whenever it’s clear that yes, I am definitely not trying to insult people because I too am a nerd.

                                  2. 1

                                    as @lmm notes above, a minimalist, consistent syntax is an important part of python’s value proposition. in ruby’s case, adding syntactic improvements is aligned with their value proposition of expressiveness and “programmer joy”, so it’s far less controversial to do something like this.

                                  1. 2

                                      Looks like it’s being developed in Rust now? So from JavaScript to C++ to Rust?

                                    1. 1

                                        From what I understood, it was started in JS mostly to test out various prototypes, and C++ was pretty quickly abandoned in favor of Rust when he moved on from JavaScript. The devblog has very detailed explanations of his process: http://cityboundsim.com/devblog

                                    1. 2

                                      The insertion-order preservation nature of dict objects is now an official part of the Python language spec.

                                        My Ruby friends will finally stop laughing at me.

                                      Does this mean that OrderedDict will be phased out?

                                      1. 1

                                          That’s a great question; I think there’s still a place for OrderedDict. dict isn’t getting anything beyond the insertion-order guarantee, whereas OrderedDict has a bunch of other things going on, like reversed(). There’s some more on this Stack Overflow question. Also, there is a pretty interesting mailing list thread that gets into this a little bit, though from the perspective of 3.6 (which introduced some of this).

                                        1. 1

                                            I recall Perl 5 having a feature that shuffles hash keys for security reasons. Python 2 also had a similar problem. So, is this safe?

                                          1. 2

                                              That’s a different issue, involving forcing hash collisions to trigger pathological run-time behavior. This change has nothing to do with the hash function; it just adds a level of indirection in the storage to save memory. A side effect of that is it’s easy to preserve insertion order.
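
                                              Roughly the idea, as a simplified sketch in C (not CPython’s actual structures – fixed size, no resizing or deletion):

                                              ```c
                                              /* Simplified sketch of the "compact dict" idea: a sparse table of small
                                               * indices points into a dense, insertion-ordered entry array. */
                                              #include <stdint.h>
                                              #include <string.h>

                                              struct entry { uint64_t hash; const char *key; int value; };

                                              struct compact_dict {
                                                  int8_t       index[8];     /* sparse side: mostly -1 (empty), 1 byte per slot */
                                                  struct entry entries[8];   /* dense side: insertion order preserved for free  */
                                                  int          used;         /* number of filled entries                        */
                                              };

                                              /* Lookup goes through the indirection: hash -> slot in index[] -> entries[]. */
                                              static int dict_get(const struct compact_dict *d, uint64_t hash,
                                                                  const char *key, int *out) {
                                                  for (size_t slot = hash % 8; d->index[slot] != -1; slot = (slot + 1) % 8) {
                                                      const struct entry *e = &d->entries[d->index[slot]];
                                                      if (e->hash == hash && strcmp(e->key, key) == 0) {
                                                          *out = e->value;
                                                          return 1;
                                                      }
                                                  }
                                                  return 0;                  /* hit an empty slot: not present */
                                              }
                                              ```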

                                        1. 10

                                          No, you don’t need C aliasing to obtain vector optimization for this sort of code. You can do it with standards-conforming code via memcpy(): https://godbolt.org/g/55pxUS
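
                                            The shape of the memcpy() version is roughly this – a sketch with an invented struct and checksum, not the exact code behind the godbolt link:

                                            ```c
                                            /* Sketch of the memcpy() approach: copy the struct's bytes into a local
                                             * array of uint32_t and sum that, instead of type-punning through pointers. */
                                            #include <stdint.h>
                                            #include <string.h>

                                            struct header {
                                                uint16_t src, dst;
                                                uint32_t seq, flags;
                                                uint8_t  payload[24];
                                            };

                                            uint32_t checksum(const struct header *h) {
                                                uint32_t words[sizeof *h / sizeof(uint32_t)];
                                                memcpy(words, h, sizeof words);   /* well-defined: no aliasing, no alignment tricks */

                                                uint32_t sum = 0;
                                                for (size_t i = 0; i < sizeof words / sizeof words[0]; i++)
                                                    sum += words[i];
                                                return sum;                       /* gcc/clang typically elide the copy and vectorize */
                                            }
                                            ```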

                                          1. 2

                                            Wow, it’s actually completely optimizing out the memcpy()? While awesome, that’s the kind of optimization I hate to depend on. One little seemingly inconsequential nudge and the optimizer might not be able to prove that’s safe, and suddenly there’s an additional O(n) copy silently going on.

                                            1. 2

                                              memset/memcpy get optimized out a lot, hence libraries making things like this: https://monocypher.org/manual/wipe

                                              1. 1

                                                Actually it’s not optimizing it out, it’s simply allocating the auto array into SIMD registers. You always must copy data into SIMD registers first before performing SIMD operations. The memcpy() code resembles a SIMD implementation more than the aliasing version.

                                              2. 1

                                                  You can - and thanks for the illustration - but the memcpy is antithetical to the C design paradigm, in my always humble opinion. And my point was not that you needed aliasing to get the vector optimization, but that aliasing does not interfere with the vector optimization.

                                                1. 8

                                                  I’m sorry but the justifications for your opinion no longer hold. memcpy() is the only unambiguous and well-defined way to do this. It also works across all architectures and input pointer values without having to worry about crashes due to misaligned accesses, while your code doesn’t. Both gcc and clang are now able to optimize away memcpy() and auto vars. An opinion here is simply not relevant, invoking undefined behavior when it increases risk for no benefit is irrational.

                                                  1. -1

                                                      Au contraire. As I showed, the C standard does not need to graft on a clumsy and painful anti-alias mechanism, and programmers don’t need to go through stupid contortions with the allocation of buffers that disappear under optimization, because the compiler does not need it. My code doesn’t have alignment problems. The justification for the pointer alias rules is false. The end.

                                                    1. 10

                                                      There are plenty of structs that only contain shorts and char, and in those cases employing aliasing as a rule would have alignment problems while the well-defined version wouldn’t. It’s not the end, you’re just in denial.

                                                      1. -2

                                                          In those cases, you need to use an alignment modifier or sizeof. No magic needed. There is a reason that both gcc and clang have been forced to support -fno-strict-aliasing, and now both support may_alias. The memcpy trick is a stupid hack that can easily go wrong - e.g. one is not guaranteed that the compiler will optimize away the buffer, and a large buffer could overflow the stack. You’re solving a non-problem by introducing complexity and opacity.

                                                        1. 10

                                                          In what world is memcpy() magic and alignment modifiers aren’t? memcpy() is an old standard library function, alignment modifiers are compiler-specific syntax extensions.

                                                            memcpy() isn’t a hack, it’s always well-defined while aliasing can never be well-defined in all cases. Promoting aliasing as a rule is like promoting using the equality operator between floats – it can never work in all cases, though it may be possible to define meaningful behavior in specific cases. Promoting aliasing as a rule is promoting the false idea that C is a thin layer above contemporary architectures, it isn’t. Struct memory is not necessarily the same as array memory, not every machine that C supports can dereference an int32 inside of an int64, not every machine can dereference an int32 at any offset. Do you want C to die with x86_64 or do you want C to live?

                                                          Optimizations don’t need to be guaranteed when the code isn’t even correct in the first place. First make sure your code is correct, then worry about optimizing. You talk about alignment modifiers but they are rarely used, and usually they are used after a bug has already occurred. Code should be correct first, and memcpy() is the rule we should be promoting since it is always correct. Optimizers can meticulously add aliasing for specific cases once a bottleneck has been demonstrated. You’re solving a non-problem by indulging in premature optimization.

                                                          1. 3

                                                            Do you want C to die with x86_64 or do you want C to live?

                                                            Heh I bet you’d get quite varied answers to this one here

                                                            1. 0

                                                                The memcpy hack is a hack because the programmer is supposed to write a copy of A to B and then back to A and rely on the optimizer to skip the copy and delete the buffer. So, unoptimized, the code may fault on stack overflows for data structures that exist only to make the compiler writers happier. And with a novel architecture, if the programmer wants to take advantage of a new capability - say, 512-bit SIMD instructions - she can wait until the compiler has added it to its toolset and be happy with how it is used.

                                                              As for this not working in all cases: Big deal. C is not supposed to hide those things. In fact, the compiler has no idea if the memory is device memory with restrictions on how it can be addressed or memory with a copy on write semantics or …. You want C to be Pascal or Java and then announce that making C look like Pascal or Java can only be solved at the expense of making C unusable for low level programming. Which programming communities are asking for such insulation? None. C works fine on many architectures. C programmers know the difference between portable and non-portable constructs. C compilers can take advantage of SIMD instructions without requiring C programmers to give up low level memory access - one of the key advantages of programming in C. Basically, people who don’t like C are trying to turn C into something else and are offended that few are grateful.

                                                              1. 4

                                                                  You aren’t writing a copy of a buffer back and forth. In your example, you are reducing an encoding of a buffer into a checksum. You are only copying one way, and that is for the sake of normalization. All SIMD code works that way, you always must copy into SIMD registers first before doing SIMD operations. In your example, the aliasing code doesn’t resemble SIMD code, either syntactically or semantically, as much as the memcpy() code does, and in fact requires a smarter compiler to transform.

                                                                The chance of overflowing the stack is remote, since stacks now automatically grow and structs tend to be < 512 bytes, but if that is a legitimate concern you can do what you already do to avoid that situation, either use a static buffer (jeopardizing reentrancy) or use malloc().

                                                                By liberally using aliasing, you are assuming a specific implementation or underlying architecture. My point is that in general you cannot assume arbitrary internal addresses of a struct can always be dereferenced as int32s, so in general that should not be practiced. In specific cases you can alias, but those are the exceptions not the rule.

                                                                1. 1

                                                                  All copies on some architectures reduce to: load into register, store from register. So what? That is why we have a high level language which can translate *x = *y efficiently. The pointer alias code directly shows programmer intent. The memcpy code does not. The “sake of normalization” is just another way of saying “in order to cooperate with the fiction that the inconsistency in the standard produces”.

                                                                    In many contexts, stacks do NOT automatically grow. Again, C is not Java. OS code, drivers, embedded code, even many applications for large systems - all need control over stack size. Triggering stack growth may even turn out to be a security failure for encryption, which is almost universally written in C because in C you can assure time invariance (or you could, until the language lawyers decided to improve it). Your proposal that programmers not only use a buffer, but use a malloced buffer, in order to allow the optimizer (they hope) not to use it, is ridiculous and is a direct violation of the C model.

                                                                  “3. C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler;” the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program.” ( http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2021.htm)

                                                                    Give me an example of an architecture where a properly aligned structure with sizeof(struct x) % sizeof(int32) == 0 cannot be accessed by int32s? Maybe the Itanium, but I doubt it. Again: every major OS turns off strict aliasing in its compilers and they seem to work. Furthermore, the standard itself permits aliasing via char* (as another hack). In practice, more architectures have trouble addressing individual bytes than addressing int32s.

                                                                  I’d really like to see more alias analysis optimization in C code (and more optimization from static analysis) but this poorly designed, badly thought through approach we have currently is not going to get us there. To solve any software engineering problem, you have to first understand the use cases instead of imposing some synthetic design.

                                                                    Anyways, off to the airport. Later. vy

                                                                  1. 2

                                                                    I’m willing to agree with you that the aliasing version more clearly shows intent in this specific case but then I ask, what do you do when the code aliases a struct that isn’t properly aligned? There are a lot of solutions but in the spirit of C, I think the right answer is that it is undefined.

                                                                      So I think what you want is the standard to define one specific instance of previously undefined behavior. I think in this specific case, it’s fair to ask for locally aliasing an int32-aligned struct pointer to an int32 pointer to be explicitly defined by the standards committee. What I think you’re ignoring, however, is all the work the standards committee has already done to weigh the implications of defining behavior like that. At the very least, it’s not unlikely that there will be machines in the future where implementing the behavior you want will be non-trivial. Couple that with the burden of a more complex standard. So maybe the right answer to maximize global utility is to leave it undefined and to let optimization-focused coders use implementation-defined behavior when it matters but, as I’m arguing, use memcpy() by default. I tend to defer to the standards committees because I have read many of their feature proposals and accompanying rationales, and they are usually pretty thorough and rarely miss things that I would catch.

                                                                    Everybody arguing here loves C. You shouldn’t assume the standards committee is dumb or that anyone here wants C to be something it’s not. As much as you may think otherwise, I think C is good as it is and I don’t want it to be like other languages. I want C to be a maximally portable implementation language. We are all arguing in good faith and want the best for C, we just have different ideas about how that should happen.

                                                                    1. 1

                                                                      what do you do when the code aliases a struct that isn’t properly aligned? There are a lot of solutions but in the spirit of C, I think the right answer is that it is undefined.

                                                                      Implementation dependent.

                                                                      Couple that with the burden of a more complex standard.

                                                                        The current standard on when an lvalue works is complex and murky. WG14 discussion on how it applies shows that it’s not even clear to them. The exception for char pointers was hurriedly added when they realized they had made memcpy impossible to implement. It seems as if malloc can’t be implemented in conforming C (there is no method of changing the storage type to reallocate it).

                                                                        C would benefit from more clarity on many issues. I am very sympathetic to making pointer validity more transparent and well defined. I just think the current approach has failed, and the C89 error has not been fixed but made worse. Also, restrict has been fumbled away.

                                                                  2. 1

                                                                    The chance of overflowing the stack is remote, since stacks now automatically grow and structs tend to be < 512 bytes, but if that is a legitimate concern you can

                                                                    … just copy the ints out one at a time :) https://godbolt.org/g/g8s1vQ

                                                                    The compiler largely sees this as a (legal) version of the OP’s code, so there’s basically zero chance it won’t be optimised in exactly the same way.
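
                                                                      i.e. something along these lines (again a sketch, not the code behind the link):

                                                                      ```c
                                                                      /* Sketch: sum a buffer as uint32 words, one small memcpy per word,
                                                                       * so no big temporary is needed. */
                                                                      #include <stdint.h>
                                                                      #include <string.h>

                                                                      uint32_t sum_words(const void *p, size_t len) {
                                                                          const unsigned char *bytes = p;
                                                                          uint32_t sum = 0;
                                                                          for (size_t i = 0; i + sizeof(uint32_t) <= len; i += sizeof(uint32_t)) {
                                                                              uint32_t w;
                                                                              memcpy(&w, bytes + i, sizeof w);   /* one well-defined 4-byte copy */
                                                                              sum += w;
                                                                          }
                                                                          return sum;
                                                                      }
                                                                      ```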

                                                              2. 2

                                                                You don’t need a large buffer. You can memcpy the integers used for the calculation out one at a time, rather than memcpy’ing the entire struct at once.

                                                                Your designation of using memcpy as a “stupid hack” is pretty biased. The code you posted can go wrong, legitimately, because of course it invokes undefined behaviour, and is more of a hack than using memcpy is. You’ve made it clear that you think the aliasing rules should be changed (or shouldn’t exist) but this “evidence” you’ve given has clearly been debunked.

                                                                1. 0

                                                                    Funny use of “debunked”. You are using circular logic. My point was that this aliasing method is clearly amenable to optimization and vectorization - as seen. Therefore the argument for strict aliasing in the standard seems even weaker than it might otherwise. Your point seems to be that the standard makes aliasing undefined, so aliasing is bad. OK. I like your hack around the hack. The question is: why should C programmers have to jump through hoops to avoid triggering dangerous “optimizations”? The answer “because it’s in the standard” is not an answer.

                                                                  1. 3

                                                                    Funny use of “debunked”. You are using circular logic. My point was that this aliasing method is clearly amenable to optimization and vectorization - as seen

                                                                      You have shown a case where, if the strict aliasing rule did not exist, some code could [edit] still [/edit] be optimised and vectorised. That I agree with, though nobody claimed that the existence of the strict aliasing rule was necessary for all optimisation and vectorisation, so it’s not clear what you think this proves. Your title says that the optimisation is BECAUSE of aliasing, which is demonstrably false. Hence, debunked. Why is that “funny”? And how is your logic any less circular than mine?

                                                                    The question is: why should C programmers have to jump through hoops to avoid triggering dangerous “optimizations”?

                                                                    Characterising optimisations as “dangerous” already implies that the code was correct before the optimisation was applied and that the optimisation can somehow make it incorrect. The logic you are using relies on the code (such as what you’ve posted) being correct - which it isn’t, according to the rules of the language (which, yes, are written in a standard). But why is using memcpy “jumping through hoops” whereas casting a pointer to a different type of pointer and then de-referencing it not? The answer is, as far as I can see, because you like doing the latter but you don’t like doing the former.

                                                            2. 1

                                                              The end.

                                                              The internet has no end.

                                                      1. -1

                                                        Yes.

                                                        1. 1

                                                          Do you think people who don’t enjoy coding have a place in the industry?

                                                          1. 5

                                                            Yes, assuming you don’t outright loathe it. How long have you been a professional software engineer?

                                                            1. 1

                                                              Going on five years. My ability to tolerate it varies with the work environment, tbh.

                                                        1. 2

                                                          Maybe until a year or so ago I would have answered yes without hesitation, but recently I’ve realized that the actual act of coding has lost its luster for me. I like making stuff, and coding is now just a means to that end. One might think that’s a distinction without a difference, but the upshot is that dealing with uninteresting grunt work is that much harder.

                                                          I’ve found myself drawn to other creative endeavors in my personal time (emphasis on “create”) such as woodworking and gardening. Both have filled that make-something niche in my life quite nicely.

                                                          1. 1

                                                            I feel similarly. I think it was novel at first but now I want to make different things.

                                                          1. 4

                                                            I was interested in what he was saying just up until he said

                                                            Some may even be lucky enough to find themselves doing Extreme Programming, also known as ‘The Scrum That Actually Works’.

                                                            My experience with XP was that it was extremely heavyweight and did not work well at all. It created the greatest developer dissatisfaction of any of the versions of Agile I’ve encountered.

                                                            1. 5

                                                              Couldn’t disagree more – the most successful team I was on was heavily into XP. When people say it’s heavyweight, they’re usually talking about pair programming. I’m not sure what people have against it; I’ve found it’s a great way to train junior developers, awesome for tricky problems, and generally a great way to avoid the problem of, “Oh this PR looks fine but redo it because you misunderstood the requirements.”

                                                                1. 2

                                                                  I don’t want to discount your experience, but it sounds like the issues you’ve had with pair programming are more with the odd choices your employer imposed.

                                                                  Both people have specialized editor configs? Sure, switch off computers or whatever too; no need to work in an unfamiliar environment.

                                                                  And if one person is significantly less experienced than the other, that person should be at the keyboard more often than not – watching the master at work will largely be useless.

                                                              1. 3

                                                                What I like about XP over anything else is its focus on development practices rather than business practices. Pairing, TDD, CI, <10-minute builds, WIP, whole-team estimation, etc. are all used to produce a better product, faster.

                                                                The weekly retrospective offers a way to adjust practices that aren’t working and bolster those that are.

                                                                1. 2

                                                                  Agreed 100%. It turned my head a bit when he thought Agile was too prescriptive, but then was considering an even more prescriptive methodology.

                                                                  1. 1

                                                                    What was your experience with XP? Also, scrum is heavyweight as well in my experience, and doesn’t work especially well in an actually agile environment like a startup. Feels like it could work in a corporate setting, though.

                                                                  1. 3

                                                                    Huh, I didn’t realize the 8086 only had a 20-bit address bus. Makes everything seem a bit more sane, although why make the segments overlap? I can’t see the benefit, and it makes expanding the address space that much harder.

                                                                    1. 3

                                                                      I guess one (tiny) benefit of having segments overlap every 16 bytes is that a malloc() implementation could return pointers of XXXX:0000 format, i.e. only concern itself with segments? And then, if you want to index into such an array, you can put the array element’s index/offset in a register without having to add a base address offset, since the array always starts at 0000 (within the given segment).
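
                                                                      For reference, real-mode address formation is just physical = segment * 16 + offset, which is also why two different seg:off pairs can name the same byte – a tiny sketch:

                                                                      ```c
                                                                      /* 8086 real-mode address formation: physical = segment * 16 + offset,
                                                                       * so segments start every 16 bytes and overlap heavily. */
                                                                      #include <stdint.h>
                                                                      #include <stdio.h>

                                                                      static uint32_t phys(uint16_t seg, uint16_t off) {
                                                                          return ((uint32_t)seg << 4) + off;   /* 20-bit result on a real 8086 */
                                                                      }

                                                                      int main(void) {
                                                                          printf("0x%05X\n", (unsigned)phys(0x1234, 0x0010));  /* 0x12350 */
                                                                          printf("0x%05X\n", (unsigned)phys(0x1235, 0x0000));  /* 0x12350: same byte, different segment */
                                                                          return 0;
                                                                      }
                                                                      ```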

                                                                      1. 3

                                                                        Overlapping makes a lot of sense if you take into account that a non-trivial number of programs only ever needed one segment, so you could use “near” pointers and shorter jump instructions that only deal with offsets.

                                                                      2. 2

                                                                        More silly trivia: all Wintel PCs boot with the A20 line disabled, in order to default to 8086 mode. And to turn it on, you talk to the keyboard controller. Some quick googling led me to an example here: https://github.com/Clann24/jos/blob/master/lab2/code/obj/boot/boot.asm#L29

                                                                        Of course these days all these devices exist on-die, but back in the day they would have been discrete ASICs.

                                                                      1. 2

                                                                        We’re considering introducing this at work, because apparently maintaining docs and client libs is too difficult for today’s engineers. Am curious what experiences other lobsters have had.

                                                                        1. 10

                                                                          My experiences have been pretty positive. At the very least, it’s way better than what most organizations use instead (i.e. nothing).

                                                                          My first time using it, I was grafting it onto an existing API and was pleasantly surprised to see that it was expressive enough to bend to the existing functionality. All the stuff it enables is great – automated conformance testing being the big one, in my book.

                                                                          1. 1

                                                                            Ah, neat. I had already been running boring documentation and updates (wiki ops, wheee), and I’m resistant to change if something works.

                                                                            But, if it’s been working well for other folks, it’s worth investigating.

                                                                            1. 3

                                                                              maintaining docs and client libs

                                                                              Have you thought about gRPC, or twirp at all? If the use-case is internal-facing systems then I think what a lot of people really want is RPC. Swagger seems great for external-facing systems, though.

                                                                              (Disclaimer: I haven’t used any of the tools I just mentioned :D)

                                                                          2. 1

                                                                            We have also used it quite a bit at work, getting the most benefit from automatic documentation of endpoints. The automatic client lib generation didn’t work very well for us, and hand written clients were our approach.

                                                                            Depending on what your API is written in, you may have quite good support for generating pretty detailed docs.

                                                                            Also, this opens the door for some kind of automated testing against the API, based on the Swagger definition, but I never got around to doing that.

                                                                            1. 4

                                                                              What’s the meaning of the last line, “I showed up for him”?

                                                                              1. 13

                                                                                As used here, “him” implicitly means more than just “Steve Jobs.” It alludes to the essence of Jobs, the things that make him who he is. Carmack is saying he dropped everything he was working on when Jobs asked for him because of his admiration and respect for Jobs as a person, not just for Jobs’ fame or influence. Sentences like these are frequently written with “him” italicized, and spoken with strong emphasis on “him.”

                                                                                That’s how I read it anyway.

                                                                                Sorry if my explanation seems patronizing, I just went ahead and assumed you’re a non-native English speaker.

                                                                                1. 6

                                                                                  Thanks – not patronizing at all, even though I am in fact a native speaker :) Just not familiar with that phrase and unsure if I should take it literally.

                                                                                  1. 8

                                                                                    As a non-native speaker, that’s quite reassuring to know that English subtleties are deep, even for a native speaker :-)

                                                                                  2. 6

                                                                                    I read it as John showing up for Jobs’ funeral :-)

                                                                                1. 2

                                                                                  There’s something about Scala that my brain just can’t handle; I think the designers just have fundamentally different tastes from mine. The “implicit” keyword, for example, strikes me as crazy – the name alone says to me it’s a bad idea!