1. 1

    Technical debt is something that always creates an internal debate for me. In some cases, I believe it is our responsibility as engineers to “fight” against it, using well-known refactoring techniques and managing time efficiently so we improve code even while adding new features. But sometimes (or most of the time) there is constant pressure to add new features to the system, and how do you sell that a feature is going slowly because you can’t “just add the feature” but instead have to move things around to accommodate it?

      1. 6

        I have learnt never to say “Technical debt”.

        If you say “Technical debt” to the business end, you’ve lost already.

        Because then you have to explain what the problem is exactly. And you run into the Manager’s Syllogism.

        You know the one: “Managers are Important People. They Know and Understand Important Things. If XXX were Important, They would Know All About It.”

        OK, I’m exaggerating for humorous effect, but the point is that compared to everything else on the list of things they want done, Technical Debt is really low on the list… and most times falls off the list of Things To Do This Cadence. Again.

        Or worse, “So you’re saying this mess exists because your Managers are Bad and your colleagues are Worse? And now you expect me to trust you to go off on a long rewrite?”

        I distrust rewrites.

        They are never as Good as Hoped and always take way, way longer than expected to get up to full functional equivalency. In fact, they usually never do. I’d argue most of the benefit of a rewrite is in deleting cruft nobody really needs.

        So start there.

        Delete cruft as you go along every time you see it.

        If someone yells and really wants it back… hey that’s what version control is for.

        So merely say, “It will take X amount of time.” Why so long? “Because that is the way the existing code is.” (You can start explaining it to them until their eyes glaze over…)

        And then fix the existing code.

        When you need to fix a bug, write a unit test that reproduces it. Then tidy up the code until it is so simple the bug is obvious. Fix the bug.

        When you need to add a feature, brace the existing required functionality in unit tests, write a unit test for the new feature, watch it fail, clean up the code until it’s trivial to add the new feature, add the new feature, watch it pass.
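        A minimal sketch of that flow in C (the function and both behaviours here are made up purely for illustration):

        ```c
        #include <assert.h>

        /* Existing code being modified (a toy example). */
        static int price(int quantity) {
            return quantity * 10;   /* current behaviour: flat pricing */
        }

        /* Step 1: brace the existing required behaviour in tests. */
        static void test_existing_flat_pricing(void) {
            assert(price(1) == 10);
            assert(price(5) == 50);
        }

        /* Step 2: a test for the hypothetical new feature (a bulk
         * discount). Run it, watch it fail, tidy price() until the
         * change is trivial, make the change, watch it pass. */
        static void test_new_bulk_discount(void) {
            assert(price(100) == 900);   /* fails until the feature lands */
        }

        int main(void) {
            test_existing_flat_pricing();
            test_new_bulk_discount();
            return 0;
        }
        ```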

        Technical Debt? What’s that? Heard of it once, not a useful concept in practice.

        1. 2

          Thank you so much for this. Really really useful!

          1. 1

            Thank you! Will read it :)

        1. 16

          The click-bait title is not really backed up in any way by the content. The conclusion doesn’t even bring up company death. All in all, a rehash of the existing knowledge and statements around technical debt.

          1. 3

            Moreover, the fact that almost all of the most successful, rich companies have a pile of technical debt… some maybe inescapable… refutes the title so thoroughly that it almost seems more natural to ask if technical debt is a positive sign of success.

            I’m not saying it is, so much as that looking only at correlations between companies that last and their amount of technical debt would make it look positive by default.

            1. 4

              I tend to view that as “Stagger onwards despite the mountain of technical debt, because the prime drivers in our current economy are not efficiency or competence of engineering”.

            2. 1

              I’m sorry if the content of my post wasn’t explicit enough. The four areas of technical debt I analyse lay out how, in my view, they lead to the corrosion of the engineering organization and the company:

              • Lack of shared understanding of the functionality of the product (leading to wasted effort and disruption of service)
              • Inability to scale, and loss of agility with respect to competing organizations
              • Inability to react to failures and to learn from how the product is used by your clients
              • Inability to scale your engineering team

              My bad if the above points haven’t been clear enough from my post. Thanks for your feedback, really appreciated!

              1. 2

                No, I got those points out of it, but you didn’t link them to company death in any way. I’ve not done a study, but at the successful companies I’ve worked at, tech debt is pervasive, depending on the area.

                Also (and this point has more to do with me than with you, so I don’t hold it against you), I’m sick of articles of the form:

                Observation -> Logical Assumptions Based On Observation -> Conclusion

                There is no empiricism in there at all. So does tech debt make it harder to scale your engineering team? Maybe! But you’ve just presented some nice-sounding arguments rather than any concrete evidence. It’s easy to say “here’s a bad thing and here are the bad things I would assume happen because of it”, but that’s a long way from “here’s a bad thing and here is the concrete evidence of what happens because of it”.

            1. 3

              This paper is somewhat jumbled. The short version is that a bunch of tools were run through the benchmark used for a 2017 software verification competition, where lots of systems were compared. The benchmarks were mostly short, i.e., under 20 lines, but there are lots of them.

              The programs used to represent testing were: AFL-fuzz, CPATiger, Crest-ppc, FShell, Klee, and PRtest. A somewhat odd selection to label ‘testing’, but if you are looking for automated tools, what else is there?

              The model checkers were: Cbmc, CPA-Seq, Esbmc-incr, and Esbmc-kInd.

              Figure 6 shows the ‘testing’ tools to be a lot slower than the model-based tools. Is this because the testing tools are designed to work on programs much larger than 20 lines and so have a higher startup cost? Model-based tools would certainly have performance issues with larger programs.

              Perhaps when the authors have had time to think about their results, they will write a more interesting paper (assuming the data contains something interesting).

              1. 3

                I also think they have focused entirely on the wrong benchmark.

                I really don’t care about two orders of magnitude difference in CPU time for this class of task.

                I really really do care about how much of my time and brain power is required to set up and run these tools.

                They can then run all week for all I care.

                I suspect under that benchmark, and given their results, AFL looks very very good. (And from what I have read, and from a couple of personal experiments I have run, AFL is indeed very very Good.)

                Also very interesting: I didn’t spot (admittedly on a fast read) any deeper analysis of the factoid that the fuzzers found some bugs the model checkers didn’t, and vice versa.

                1. 1

                  They needed a benchmark that the model-based tools had a chance of being able to handle; the one they used is it.

                  Methinks the authors saw an opportunity to get a paper involving formal methods published, with their names on it, that did not involve too much effort.

                2. 1

                  Appreciate you taking the time to enumerate weaknesses. Being able to check small modules is fine for my purposes, since I advocate using such methods for safe interfacing of components. However, for use with legacy software, or software not designed with verification in mind, that would be a big problem for the model checkers.

                  1. 1

                    I have probably debugged more gnarly Linux problems with this tool than with any other.

                    https://fosdem.org/2018/schedule/event/debugging_tools_strace_features/

                    Cool! You can now inject faults and delays into syscalls! You can dump a stacktrace from an arbitrary syscall.

                    Oh dear, a heads-up: super-fancy compiler optimizations are now giving Valgrind trouble… https://fosdem.org/2018/schedule/event/debugging_tools_memcheck/

                    1. 2

                      Next Year in Christchurch New Zealand! https://twitter.com/linuxconfau2019

                      I look forward to welcoming you in person!

                      1. 1

                        New Zealand is a beautiful country. Hope I can get the opportunity to take part in linuxconfau2019.

                        1. 1

                          While I’m sure there will be some activities associated with the conference, there is a lot more to see and do.

                          So if you do come, take a bit of vacation leave after the conf so you can do a few trips into the mountains or to the beaches.

                      1. 1

                        One thing that bends my mind is signal safety.

                        There is very, very little written about signal safety.

                        Why? Because it’s like the multi-threaded / multi-core memory access problem… but worse.

                        Partly, very little is written about it because very little is guaranteed. The only thing guaranteed is this one small paragraph, and strictly, as far as I can see, it only applies to C++…

                        When the processing of the abstract machine is interrupted by receipt of a signal, the value of any objects with type other than volatile sig_atomic_t are unspecified, and the value of any object not of volatile sig_atomic_t that is modified by the handler becomes undefined.

                        The other thing that always bugs me… is that the fine, fine print on what actually happens is CPU-implementation dependent.

                        ie. Ultimately it doesn’t matter what the compiler and library give you… it comes down to the very fine print of the particular (version of the) CPU implementation… and often the exact behaviour, even at the assembler level, is very fuzzily specified in the CPU instruction reference manuals.

                        ie. Your libC may give you a sig_atomic_t… but just try reading your CPU instruction reference manual to find a guarantee that your libC’s choice of sig_atomic_t is valid… and usually you will come away deeply troubled.
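                        For what it’s worth, the one portable pattern that paragraph leaves you is tiny. A minimal C sketch:

                        ```c
                        #include <signal.h>
                        #include <stdio.h>

                        /* The only object type the standard blesses for handler writes:
                         * volatile sig_atomic_t. Touching anything else from the handler
                         * is unspecified/undefined per the paragraph quoted above. */
                        static volatile sig_atomic_t got_signal = 0;

                        static void handler(int sig) {
                            (void)sig;
                            got_signal = 1;   /* set the flag and return; nothing else */
                        }

                        int main(void) {
                            signal(SIGINT, handler);
                            while (!got_signal)
                                ;   /* real code would do its normal work here */
                            puts("caught signal, shutting down cleanly");
                            return 0;
                        }
                        ```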

                        1. 3

                          I wish these guys https://www.schleich-s.com/en/US/wild-life.html would come up with an Ubuntu line.

                          I have a family of Mongeese on my desk, but of late it has been pretty hard to find a matching animal.

                          1. 5

                            A provocative title, but pretty good content, especially on how engineering tends to underestimate its time costs, and how we’re often bad at justifying the cost vs. return of tools to management. A lot of the manager-vs-engineer headbutting I’ve seen results from talking past each other about whether engineering’s time is better spent here or there.

                            1. 4

                              Not nearly as much as management underestimates their time cost.

                              Want to subscribe to one of these services?

                              That takes money.

                              Any idea just how much engineering you can do for the time cost of engaging management to spend money (over the lifetime of their product)?

                              Any idea of how much fun it is for an engineer to do that?

                              And when you need to run up another service for something with a tight deadline?

                              And you lay out the schedule and you realise that by far the largest chunk of real time will be getting management to OK the spend?

                              And if you jump through all these hoops… you look at the version control logs and realise that CEOs come and go like mayflies but the code lives on and on… and you know for certain the service you bought will shut down somewhere in the life of the code… and everything you engineered to rely on it will die.

                              The first question I ask when I engineer a dependency is “Can I pull the source into my repo? Can I build it and debug it and patch it? Is there a viable/active upstream I can send patches to, that will respond and mainline them? Is there an active community around it? If upstream dies or goes a different way… will my code die or rumble on?”

                              1. 2

                                On the flip side, there is “Not Invented Here!” and “But my use case is 1% different than the standard use case, so I need to write my own version!” It’s all very organization dependent, of course, as to which ends up being the reason. And sometimes you do have to roll your own because that is the fastest route to completion.

                            1. 5

                              I really love SQLite, and reading accounts like this is great. BUT, note this is all reads with no inserts/updates/deletes. SQLite’s Achilles heel for being really useful?

                              Additionally, though this test was focused on read performance, I should mention that SQLite has fantastic write performance as well. By default SQLite uses database-level locking (minimal concurrency), and there is an “out of the box” option to enable WAL mode to get fantastic read concurrency — as shown by this test.

                              1. 9

                                You’d be surprised. Serializing writes on hardware with 10ms latency is pretty disastrous, giving parallel-write databases a huge advantage over SQLite on hard drives. But even consumer solid state drives are more like 30us write latency, over 300 times faster than a conventional hard drive.

                                Combine that with batching writes in transactions and WAL logging, and you’ve got a pretty fast little database. Remember, loads of people loved MongoDB’s performance, even though it had a global exclusive write lock until 2013 or something.

                                People really overestimate the cost of write locking. You need a surprising amount of concurrency to warrant a sophisticated parallel-write data structure. And if you don’t actually need it, the overhead of using a complex structure will probably slow your code down.
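                                A minimal C sketch of that combination, i.e. WAL mode plus batching writes in one transaction (error handling elided, table made up):

                                ```c
                                #include <sqlite3.h>

                                int main(void) {
                                    sqlite3 *db;
                                    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;

                                    /* One-time switch to WAL: readers stop blocking the writer. */
                                    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
                                    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x INTEGER);", 0, 0, 0);

                                    /* Batch the inserts in one transaction: one sync for the
                                     * whole batch instead of one per statement. */
                                    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
                                    for (int i = 0; i < 10000; i++)
                                        sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
                                    sqlite3_exec(db, "COMMIT;", 0, 0, 0);

                                    sqlite3_close(db);
                                    return 0;
                                }
                                ```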

                                1. 3

                                  Sounds like you might like the “COST” metric…. https://lobste.rs/s/dyo11t/scalability_at_what_cost

                                2. 6

                                  Given that they run all of expensify.com on a single (replicated) Bedrock database, that would pass my “really useful” test, at least. :)

                                  1. 2

                                    The project page itself warns about that. When toying with ideas, I thought about a front end that acted as a sort of load balancer and cache: it could feed writes to SQLite at the pace SQLite could take, with the excess held in a cache of sorts, and it would also serve those pending writes from its cache directly. Reads it could just pass on to SQLite.

                                    This may be what they do in the one or two DBs I’ve seen submitted that use SQLite as a backend. I didn’t dig deep into them, though. I just know anything aiming to be a rugged database should consider it, because the level of QA work that’s gone into SQLite is so high most projects will never match it. That’s the kind of building block I like.

                                    Now I’ll read the article to see how they do the reads.
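                                    For what it’s worth, a very rough C sketch of the front-end idea above: clients hand writes to a queue and return immediately, and a single writer thread feeds them to SQLite at whatever pace it can take. Queue bounds, durability, and the read cache are all waved away.

                                    ```c
                                    #include <pthread.h>
                                    #include <sqlite3.h>
                                    #include <stdlib.h>
                                    #include <string.h>

                                    #define QSIZE 1024

                                    static char *queue[QSIZE];
                                    static int head, tail;
                                    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
                                    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

                                    /* Clients hand a write to the front end and return at once. */
                                    void submit_write(const char *sql) {
                                        pthread_mutex_lock(&lock);
                                        queue[tail++ % QSIZE] = strdup(sql);  /* overflow handling elided */
                                        pthread_cond_signal(&nonempty);
                                        pthread_mutex_unlock(&lock);
                                    }

                                    /* One writer thread, started with pthread_create(&tid, 0, writer, db),
                                     * drains the queue into SQLite serially. */
                                    void *writer(void *arg) {
                                        sqlite3 *db = arg;
                                        for (;;) {
                                            pthread_mutex_lock(&lock);
                                            while (head == tail)
                                                pthread_cond_wait(&nonempty, &lock);
                                            char *sql = queue[head++ % QSIZE];
                                            pthread_mutex_unlock(&lock);
                                            sqlite3_exec(db, sql, 0, 0, 0);   /* the only writer */
                                            free(sql);
                                        }
                                    }
                                    ```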

                                    1. 1

                                      Devil’s advocate. If you are going to give up on easy durability guarantees, you could also try just disabling fsync and letting the kernel do the job you are describing.

                                      1. 1

                                        I’ve been trying to make posts shorter where possible. That’s twice in as many days someone’s mentioned something I deleted: the original version mentioned being strongly consistent with a cluster. I deleted it thinking people would realize I wanted to keep the properties that made me select SQLite in the first place. Perhaps it’s worth being explicit there. I’ll note I’m brainstorming way out of my area of expertise: databases are black boxes I never developed myself.

                                        After this unforgettable paper, I’d be more likely to do extra operations or whatever for durability, given I don’t like unpredictability or data corruption. It’s why I like SQLite to begin with. It does help that a clean-slate front end would let me work around such problems, even more so for a memory-based one… depending on implementation. Again, I’m speculating out of my element a bit since I didn’t build databases. Does your line of thinking still have something that might apply, or was it just for a non-durable front end?

                                  1. 7

                                      The bit on pages 4-5, showing how Spectre-style attacks can happen even on an in-order processor that lacks speculative execution, due to compiler optimizations, was eye-opening. It’s obvious in retrospect that certain kinds of optimizations that convert explicit conditionals into branchless code are the moral equivalent of speculative execution (because they execute both sides of a conditional), but it hadn’t occurred to me, and I don’t think I’d seen it mentioned elsewhere.
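                                      A conceptual C illustration of the kind of if-conversion meant here (hand-written for clarity; a real compiler would do this at the instruction level with conditional moves):

                                      ```c
                                      #include <stddef.h>

                                      /* What the programmer wrote: the load is guarded by the check. */
                                      int lookup(const int *a, size_t i, size_t n) {
                                          if (i < n)
                                              return a[i];
                                          return 0;
                                      }

                                      /* The branchless shape an optimizer may lower it to: the load
                                       * now happens unconditionally and the result is merely selected
                                       * afterwards, so the out-of-bounds access (and its cache
                                       * footprint) occurs even when i >= n; both "sides" execute. */
                                      int lookup_branchless(const int *a, size_t i, size_t n) {
                                          int v = a[i];             /* load on every call */
                                          return (i < n) ? v : 0;   /* conditional move, no branch */
                                      }
                                      ```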

                                    1. 4

                                      The fun thing is, this has nothing to do with hardware.

                                      Imagine doing a Freedom of Information request on some large bureaucratic organisation.

                                      It will pull the files it needs from storage, prepare the answer and, at the final step, redact anything sensitive.

                                      Now suppose you create a second FOIA request: there is a tiny leak of information if the information needed to answer the second is in the files needed to answer the first.

                                      So even if the second is blacked out wall-to-wall, if the answer came back faster than usual, you might guess there was an overlap in source material between your first and second requests.

                                    1. 2

                                       The RaiBlocks cryptocurrency approaches the DCS challenge with a couple of mechanisms. (PS: This is my reading of it, which may be wrong; see the RaiBlocks white paper for the details and the correct view.)

                                      https://raiblocks.net/media/RaiBlocks_Whitepaper__English.pdf

                                       1. Instead of a single blockchain, each coin holder maintains their own chain, and only they may write to it (as verified by public key).

                                      2. Each account assigns a representative to vote for them when a conflict arises between accounts.

                                       3. The representatives are weighted by Proof of Stake rather than Proof of Work, on the basis that those with the highest stake in RaiBlocks are least likely to undermine its value.

                                      4. Proof of Work is only used to stop spam.

                                      Warning. I have no skin in this game so my reading of the paper is purely casual.

                                      1. 1

                                        Benchmarking perhaps?

                                         If you are trying to determine which algorithm is truly faster, X or Y… you had better be sure you are measuring the algorithm, not merely whether the caches are hot or not, since the cache effects will dominate.

                                         Besides, you can emulate the effect by filling the caches with other stuff. It just takes longer, but it can still be done.
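                                         A minimal C sketch of that filling approach, assuming 64 MiB comfortably exceeds the last-level cache (size it for your CPU):

                                         ```c
                                         #include <stdlib.h>

                                         #define EVICT_BYTES (64u * 1024 * 1024)   /* assumption: bigger than the LLC */

                                         /* Touch one byte per cache line of a large buffer so whatever was
                                          * cached before gets evicted; the buffer's contents are irrelevant. */
                                         void evict_caches(void) {
                                             static unsigned char *buf;
                                             if (!buf) buf = malloc(EVICT_BYTES);
                                             volatile unsigned char sink = 0;
                                             for (size_t i = 0; i < EVICT_BYTES; i += 64)   /* 64 = line size */
                                                 sink += buf[i];
                                             (void)sink;
                                         }
                                         ```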

                                        1. 1

                                           Can you get that kind of timing, though? All those exploits seem to measure how long clflush takes. I don’t see how you get the same info without it.

                                          1. 4

                                             Hmm. I thought it was done by checking the timing of access to a permitted addressable location, after using indirect addressing to load that permitted location into cache based on a value you are not permitted to access.

                                            If…

                                             • you are allowed to access locations BaseAddress to BaseAddress + 256 * CacheLineSize,
                                             • you evict all of the allowed range from cache (or flush it from cache, either will do),
                                             • but you want to know the value of the byte at the protected address pointerToByte,
                                             • then attempt to load BaseAddress[ *pointerToByte * CacheLineSize ],
                                             • which will segfault, since you’re not allowed to dereference pointerToByte,
                                             • but you had masked and ignored the fault,
                                             • and the damage to the cache has already been done,
                                             • and then walk down for I = 0 to 255, checking the time to access BaseAddress[ I * CacheLineSize ],
                                             • all of which are permitted…

                                            If the time to access BaseAddress[ I0 * CacheLineSize] is significantly faster than the other 255 timings… you know *pointerToByte had value I0
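                                             As a C sketch, the timing-probe half of that walk might look like this (x86-only; the faulting access and fault masking are left out, and a page-sized stride is the usual trick to keep the prefetcher out of the measurement):

                                             ```c
                                             #include <stdint.h>
                                             #include <x86intrin.h>

                                             #define STRIDE 4096   /* one probe line per page, defeats prefetch */

                                             /* Time a load from each of the 256 candidate lines; the one that
                                              * comes back fastest was cached by the transient access, and its
                                              * index is the leaked byte value. */
                                             int probe(volatile const uint8_t *base) {
                                                 uint64_t best_time = UINT64_MAX;
                                                 int best = -1;
                                                 for (int i = 0; i < 256; i++) {
                                                     unsigned aux;
                                                     uint64_t t0 = __rdtscp(&aux);
                                                     (void)base[i * STRIDE];          /* the timed access */
                                                     uint64_t t1 = __rdtscp(&aux);
                                                     if (t1 - t0 < best_time) { best_time = t1 - t0; best = i; }
                                                 }
                                                 return best;
                                             }
                                             ```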

                                          1. 1

                                            Suppose you have a pet work or hobby project X.

                                            It depends on someone else’s project Y which you don’t have commit access to.

                                             Now you fix a bug in Y, or add a feature to Y, ie. create a DeltaY.

                                            It’s largish so it’s a matter of several commits.

                                             Now you would like to upstream it. You chat on the mailing list for Y, and upstream is not convinced, or has other priorities, so they are not going to just slap your stuff in before the next release.

                                            People using X need DeltaY. Where are they going to get it? From Y. Nope. From your fork of Y. Yup.

                                             People developing Y can have a look at your DeltaY at leisure, and (at least in GitLab, and I assume it is the same for GitHub) you can click on “Create Merge Request” and ask the Y devs to merge DeltaY into Y.

                                             They can review it, and you can have a bit of to and fro, updating the merge request if need be…

                                            And they can trivially click merge. Done.

                                             So, sort of, the first step in creating a dependency from any project Z on a project A is for those with commit access to Z to fork A.

                                             So why are there so many forks without any deltas? Well, partly insurance.

                                             If Y goes hog wild and dodgy, changes licence, deletes itself… your project trudges on without a hiccup.

                                            1. 3

                                               It wasn’t until very recently that people could even fathom the idea of a Lisp that isn’t based on sexprs. It seems like once people learn Lisp, they accept sexprs as necessary, to the point of not being able to imagine anything else. So I’m really grateful for all the time Moon has spent dreaming about/studying this idea.

                                              1. 2

                                                 Not so sure about this. There have been a gazillion attempts at a Lisp with infix syntax. It’s a common early idea of a newcomer to the language that never sticks as you progress.

                                                There’s even been a real lisp (Dylan) with mature implementations and corporate support which tried to do away with sexprs. Didn’t fare well.

                                                1. 2

                                                  It’s a common early idea of a newcomer to the language that never sticks as you progress.

                                                   Maybe I don’t want Lisp so much as as much of Lisp’s values as I can get without the sexprs. I concede I may lose something in the process. I’d also argue Ruby was a successful take on this idea in the OO realm, even if it lacks a lot of the power of Lisp.

                                                  There’s even been a real lisp (Dylan) with mature implementations and corporate support which tried to do away with sexprs. Didn’t fare well.

                                                   I’m not comfortable disregarding ideas just because they didn’t find success with the mainstream. I find Dylan beautiful and forward-thinking. Based on what I’ve read of Dylan, I don’t believe a deficiency of the language caused it to be passed over, but rather the circumstances of the institutions and time period it was created in.

                                                  1. 1

                                                     The same could be said of LISP itself versus mainstream languages, or JVM LISPs versus native LISPs. The other factors of language adoption probably sank those other attempts, like they did most of the LISPs with regular syntax. Hardly any made it. Those that did would also have had strong cultural pressure to keep the syntax style that was popular in that niche.

                                                     There’s still a chance that a Python- or Clojure-like project with most things going for it could succeed with non-traditional syntax mixed with a capable LISP implementation.

                                                  2. 1

                                                    The Ancient Lisper in me (yes, I have a treasured copy of McCarthy’s Lisp manual) gets the giggles when I note TFA is hosted on cddddr.org

                                                  1. 5

                                                    Looks like Leprechauns…

                                                    https://leanpub.com/leprechauns

                                                    1. 1

                                                      This should be read by every working software eng. So good.

                                                    1. 1

                                                      I have mostly been ignoring the whole WASM thing as I feel it is merely going to make things worse…

                                                      It’s the old, “Syntax doesn’t really matter, Syntax is just sugar. It’s the semantics of a language that make a language a language”.

                                                      So it seems to me WASM is just JavaScript uglified by removing all syntactic sugar to lay bare the JavaScript semantics.

                                                      ie. No matter which language you choose to sugar it with…. the semantics won’t be and can’t be changed.

                                                      So you’re going to end up with pages like https://clojurescript.org/about/differences (and worse) for every language that compiles to WASM.

                                                      Am I missing something?

                                                      1. 2

                                                        WASM isn’t JS, and JS can’t compile to WASM. It’s lower level than that and doesn’t even feature a GC. Think bytecode rather than uglified JS.

                                                        A quick scroll through the instruction set gives you a pretty good idea of what it provides. Compiling to WASM allows for some substantial performance benefits over its JS counterpart in initial start time and in a number of runtime scenarios. Realistically, however, most web developers at this point in time will not have any use for it.

                                                        There is still a bit of work needed to give WASM the ability to interact with the DOM directly rather than bridging through JS. How they are going to achieve this, I don’t know. It’s the main feature I am looking forward to, as it will allow VDOM implementations to be optimized further and incorporated into languages like Rust. See asm-dom for an example.
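                                                        To make the “bytecode, not JS” point above concrete: a plain C function like the one below compiles (with e.g. Emscripten or clang’s wasm32 target) to a handful of stack-machine instructions, roughly local.get / local.get / i32.add, with no JS semantics in sight:

                                                        ```c
                                                        /* Plain C: no GC, no JS objects; this is the level WASM works at. */
                                                        int add(int a, int b) {
                                                            return a + b;
                                                        }
                                                        ```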

                                                        1. 1

                                                          So it seems to me WASM is just JavaScript uglified by removing all syntactic sugar to lay bare the JavaScript semantics.

                                                          That’s not the case at all. There is JS interop but it’s unrelated to JS. It doesn’t even have GC, for starters (as mentioned in the post.)

                                                        1. 2

                                                          The name is already taken by a much more interesting concept…

                                                          http://www.kevinalbrecht.com/code/joy-mirror/joy.html

                                                          Especially look at the algebra for Joy, the rewriting system for Joy, and the Joy in Joy.

                                                          1. 13

                                                            Finally! This article is about desktop, but OpenSSH is coming to all of Windows, including IoT Core where I work. I’ve been championing the upgrade for years now. Compared to our old SSH implementation, OpenSSH is more stable, supports newer encryption standards, and can send files over sftp.

                                                            Very excited to see this land. Kudos to the PowerShell team for putting in most of the porting work, and of course to OpenBSD for developing OpenSSH in the first place.

                                                            1. 5

                                                               Last time I tried anything Microsofty in that sort of realm, I started throwing things at the screen. (Can’t remember what it was… telnet maybe? Their built-in “term” thing?)

                                                               It obstinately refused to resize, got the wrapping horribly wrong, and had clearly been written by somebody with an ideological hatred of the command line.

                                                              Downloaded putty and…. Oh My! It all just worked and worked correctly!

                                                               So merely having an SSH client will not cause me to shift from PuTTY; having an SSH client that works properly and slickly might convince me.

                                                              1. 7

                                                                Well, for IoT Core I’m more excited about the OpenSSH server than the client. I’ve been connecting to it with PuTTY.

                                                                That said, the Windows command-line has vastly improved from 8.1 to 10. The biggest improvement is that text reflows as you resize the window. Copy/paste was also improved.

                                                                Telnet and SSH are just transports. I bet your frustration was due to the old Windows conhost.exe being a terrible terminal.

                                                                1. 2

                                                                   When you connect to IoT Core via SSH, what shell are you dropped into?

                                                                  1. 1

                                                                     Just plain old CMD. Usually PowerShell is present too, but OEMs can choose to build an image without PowerShell.

                                                                     If you want to connect directly to a PowerShell session, it has its own remote shell feature, Enter-PSSession.

                                                                    1. 1

                                                                      There’s a more detailed answer by Joey Aiello in the HN thread.

                                                                  2. 3

                                                                     Their built-in “term” thing?

                                                                     AFAIK some projects, such as the Git command-line utilities for Windows, have for years now shipped with a TTY based on PuTTY’s TTY (just not using any of the SSH code or anything), and it’s much nicer.

                                                                    1. 2

                                                                       ConEmu is another tool that will improve your command-line life on Windows. As for Microsoft products, there are many people who swear by PowerShell!

                                                                      1. 2

                                                                         PowerShell is a nice shell, but it lives inside the same terminal (conhost.exe) that CMD does.

                                                                        1. 1

                                                                           Cmder is a great shell built on top of ConEmu that even has support for the Quake-style appear/disappear animation.

                                                                      2. 2

                                                                        Try cmder for a decent terminal. The git version comes with a bunch of tools (including ssh, ls, etc) and provides a terminal experience on Windows that won’t make you throw things at the screen (hopefully!).

                                                                      3. 1

                                                                        That’s pretty impressive. OpenSSH makes a lot of POSIX assumptions about things like PTYs and fork.

                                                                      1. 3

                                                                        Currently the most promising cryptocurrency I have seen….

                                                                         The only missing feature that may hurt is the lack of the “private / untraceable” property that Monero claims.

                                                                        I’m not convinced by smart contracts and dApps yet…

                                                                        It’s going to be hard to convince people to really buy into things that take an hour of deep thought to understand.

                                                                         I’m convinced the main reason for the popularity of Bitcoin at the moment is that the only thing most punters understand is “It’s a Bubble! Let’s try to ride it to fame and fortune!”