1.  

    I’m looking for software developers on location in Toronto, Canada

    We’re building a system that enables cryptographically secure chain-of-custody on distributed infrastructure without a ledger. We’re using some ideas from the cryptocurrency world, but we’re a traditionally-funded startup with quality investors and paying customers.

    I’d love to hear from developers interested in DevOps work, front-end web and mobile in ClojureScript and React Native, and backend developers who are comfortable with distributed systems. Previous experience with Clojure, cryptography, and security would be an asset.

    Let’s talk over e-mail; my address is in my profile.

    1.  

      “without a ledger.”

      That part sounds refreshingly different after all the blockchain startups in that space.

      1.  

        Yes. Also, a great example of democracy at work. A citizen saw a problem, collected data, got a proposal to government, and government actually fixed the problem. We need more like him.

        1.  

          Could you paste a curl -i of the behavior you’re seeing? Link seems fine here, that’s puzzling.

          1.  

            There’s a link called “cached” under each submission that takes you to an Archive.is copy. Both the original and cached links are working for me. Try the cached one if the original keeps 404ing.

          1.  

            Collapsing Towers of Interpreters, the work of GraalVM/Truffle and PyPy.

            Exciting times! CToI deserves its own lobsters post.

            1.  

              Submit it next week then.

            1. 2

              This is one of the most powerful tools around for making extremely precise compiler analysis passes without much effort.

              1. 3

                Are you saying you like the ideas in the paper or have previously seen/used the technique? I found it through my semi-blind searches. I don’t know if it had uptake.

                1. 3

                  Well, I think this idea has been developed further into the monadic AAM approach. It hasn’t been applied in practice yet to my knowledge, except maybe for a Java analysis.

              1. 4

                Furby emulator when?

                1. 2

                  Why emulator? It’s better to make an extensible, open hardware interactive toy platform.

                  I wonder, did they really have to use raw assembly for that? That kind of codebase must be a real headache to maintain.

                  1. 5

                    In my opinion an emulator would be a cool thing to tinker with, and I’m sure I’m not alone in thinking that.

                    The processor in the Furby is apparently a Sunplus SPC81A. Its datasheet describes it as having a 6502 instruction set, although it only has 69 instructions as opposed to the 151 in the 6502, so it’s likely to be missing some registers. It comes with 80K bytes of ROM, shared between audio samples and programme code, and 128 bytes of working RAM. While it can run at up to 6MHz (at 3.6V-5.5V), circuit diagrams for the Furby show a 3.58MHz crystal in use, running at 3V.

                    So why raw assembly rather than some higher-level language compiled down to it? The answer should be clear from the above: with just 80K bytes of storage, much of which is likely consumed by audio samples, the only way to make a programme fit in the space available, let alone perform well on such limited hardware, is to write it in assembly.

                    An open hardware, interactive toy platform would be cool. I have seen some projects where people have hacked a Raspberry Pi Zero or another such small board into the guts of a Furby to give new life to an old toy.

                    :)

                    1. 6

                      The original 6502 only had 50-ish documented instructions, IIRC, with some undocumented ones besides. The later iterations added instructions (the 65C02 had a bunch more).

                      Also, the 6502 only had three registers that were part of instructions: the accumulator, X, and Y. It also had a stack pointer and program counter which were indirectly accessible. Again, some of the later iterations added more (I think the NES-specific 6502 added a dozen or more registers).

                      In the source code, you see statements like:

                      Bank       EQU      07H
                      

                      … associated with comments like “BANK SELECTION REGISTER”. Those aren’t actual registers. Most instructions used two-byte memory addresses, a full 64k. The addresses from 0000 to 00FF, however, were the “zero page”. The 6502 had a special addressing mode which allowed one-byte addresses into the zero page, which many programs used as registers.

                      The same instructions and addresses were used to access both RAM and ROM, so if the total was larger than 64k, you needed bank-switching hardware to swap memory banks in and out of the address space. Again, though, later iterations expanded that (though only the wilder later iterations, like the 16- and 32-bit versions of the processor).

                      1.  

                        Thanks for sharing, I am fascinated by old 8-bit processors and processor design in general. I was sure the 6502 had 151, but maybe I misread it somewhere and am talking out my ass. The Sunplus SPC81A only has the X register.

                        Looking at the data sheet I can see that it says:

                        To access ROM, users should program the BANK SELECT Register, choose bank, and access address to fetch data.

                        Given it’s described as having a 6502 instruction set, and not actually being a 6502, I’m guessing that’s in addition to whatever opcodes they chose to implement.

                      2. 2

                        “Popular home video game consoles and computers, such as the Atari 2600, Atari 8-bit family, Apple II, Nintendo Entertainment System, Commodore 64, Atari Lynx, and others, used the 6502 or variations of the basic design.”

                        People might want to play games or try to code within a similar setup. There’s already a large number of people doing stuff like that.

                  1. 3

                    “Against the average user, anything works; there’s no need for complex security software. Against the skilled attacker, on the other hand, nothing works.”

                    I want to note that this is actually wrong. There are all kinds of devices I can’t access or understand to the degree I want. This is especially true if they’re implemented, totally or partly, in silicon instead of software. Other times it requires esoteric knowledge of stuff like RF. These secrets stay secret for long periods, despite billions of dollars in sales volume and hackers being in possession of the devices. The vendors effectively solved the trusted-client problem for those secrets with the one technique that works: making defeat cost piles of money. Money to tear the chips down, money on rare specialists, money on common specialists, and lots of time.

                    So, it’s true if you say “against a skilled attacker with the necessary time and money.” In many cases, attackers might not have the time and money. Quick example: the FBI paid Cellebrite something like $100-200 grand to crack that iPhone. So, if an iPhone implements DRM, your data is secure on it if your enemy can’t afford or won’t spend $100-200 grand to get it. Then they downgrade to having to aim cameras at the screen, retype documents by hand, try to get exploits into apps, try to con people, etc. For media, both quality expectations and laziness can ensure the “cam” copies have minimal impact on sales. So the DRM works in that case against the main audience, even with smart hackers in possession of the device.

                    1. 2

                      Sounds pretty true to me? Even Apple with their untold billions can’t keep the iPhone secure enough that a couple of hundred grand can’t crack it.

                      1. 2

                        Apple doesn’t care about security. Their Macs were a decade behind others in security features at one point. The iPhones have a few features that help. Neither those nor the OS are implemented in a rigorous way. You could say they just added a few things with average implementation effort. And that half-assed job on a few things takes several hundred grand a year to beat.

                        Now, let’s say they invested in medium-to-high-assurance additions to the CPU, security co-processor, and OS. They have the money to attempt it. I’ve seen startups and CompSci folks on small budgets build each item. Apple might knock out whole categories of risk with a few million to tens of millions spent one time. They had tens of billions. They didn’t do it. So, they don’t care. That simple. Their stuff will sell anyway, too.

                      2. 1

                        True, you can attach a dongle costing a few hundred dollars ;-)

                        1. 2

                          Uses DongleCoin so there’s no single point of failure.

                      1. 2

                        One of their applications is bootstrapping:

                        Implementing language X in a much harder, lower-level language Y may be difficult. But if X is self-extensible, then the implementation can be done with a bit less Y code and a bit more X code.

                        1. 3

                          That reminds me of Bootstrapping Site additions I needed to make. I added Bash Infinity, since it contains things that might be useful for bootstrapping compilers via bash. I also added a previous submission near Wirth’s. Tanenbaum, mostly known for microkernels, used a general-purpose macro language to bootstrap up a language they wrote OSes with back in the 1970’s. The language itself, Sal, is interesting since it operates directly on memory forms like BCPL did. Going back to that style might be advantageous for assembly-up bootstrapping. Some prior bootstrappers did something similar with LISP’y languages designed just above assembly.

                          1. 2

                            He did not mention Forth!

                          1. 4

                            As someone who never used Rust I want to ask: does the section about crates imply that all third-party libraries are recompiled every time you rebuild the project?

                            1. 6

                              Good question! They are not; dependencies are only built on the first compilation, and they are cached in subsequent compilations unless you explicitly clean the cache.

                              1. 2

                                I would assume dependencies are still parsed and type checked though? Or is anything cached there in a similar way to precompiled headers in C++?

                                1. 10

                                  A Rust library includes the actual compiled functions like you’d expect, but it also contains a serialized copy of the compiler’s metadata about that library, giving function prototypes and data structure layouts and generics and so forth. That way, Rust can provide all the benefits of precompiled headers without the hassle of having to write things twice.

                                  Of course, the downside is that Rust’s ABI effectively depends on accidental details of the compiler’s internal data structures and serialization system, which is why Rust is not getting a stable ABI any time soon.

                                  1. 4

                                    Rust has a proper module system, so as far as I know it doesn’t need hacks like that. The price for this awesomeness is that the module system is a bit awkward/different when you’re starting out.

                                  2. 1

                                    Ok, then I can’t see why the article needs to mention it. Perhaps I should try it myself rather than just read about its type system.

                                    It made me think it suffers from the same problem as MLton.

                                    1. 4

                                      I should’ve been more clear. Rust will not recompile third-party crates most of the time. It will if you run cargo clean, if you change compile options (e.g., activate or deactivate LTO), or if you upgrade the compiler, but during regular development it won’t happen often. However, there is a build for cargo check, a build for cargo test, and yet another build for cargo build, so you might still end up compiling your project three times.

                                      I mentioned keeping crates under control, because it takes our C.I. system at work ~20 minutes to build one of my projects. About 5 minutes is spent building the project a first time to run the unit tests, then another 10 minutes to compile the release build; the other 5 minutes is spent fetching, building, and uploading a Docker image for the application. The C.I. always starts from a clean slate, so I always pay the compilation price, and it slows me down if I test a container in a staging environment, realize there’s a bug, fix the bug, and repeat.

                                      One way to make sure that your build doesn’t take longer than needed is to be selective in your choice of third-party crates (I have found that the quality of crates varies a lot) and to make sure that a crate pays for itself. serde and rayon are two great libraries that I’m happy to include in my project; on the other hand, env_logger brings in a few transitive dependencies for coloring the log it generates. However, neither journalctl nor docker container logs show colors, so I am paying a cost without getting any benefit.

                                      1. 2

                                        Compiling all of the code, including dependencies, can make some types of optimizations and inlining possible, though.

                                        1. 4

                                          Definitely; this is why MLton does it, being a whole-program optimizing compiler. The compilation-speed tradeoff is so severe that its users usually resort to another SML implementation for actual development and debugging, and only use MLton for release builds. If we can figure out how to make whole-program optimization detect which already-compiled bits can be reused between builds, that may make the idea more viable.

                                          1. 2

                                            In the last discussion, I argued for a multi-stage process that improves developer productivity, especially keeping the mind flowing. The final result is as optimized as possible. No wait times, though. You always have something to use.

                                            1. 1

                                              Exactly. I think developing with something like smlnj, then compiling the final result with mlton is a relatively good workflow. Testing individual functions is faster with Common Lisp and SLIME, and testing entire programs is faster with Go, though.

                                              1. 2

                                                Interesting you mentioned that; Chris Cannam has a build setup for this workflow: https://bitbucket.org/cannam/sml-buildscripts/

                                    1. 2

                                      A lot of folks trying to show off with small amounts of hardware use 8-bitters. It’s one of the smallest types of architecture, and many early computers used it. However, there’s an even more limited architecture, 4-bit MCU’s, that’s still actually sold in the market for use in things like Gillette razors. Per Wikipedia, they had a lot of uses before, too. I’d love to see some demoscene competitions for 4-bit, since I can’t even guess what they might be able to do with it.

                                      The MARC was programmed in Forth, too. So, I submitted the guide in case people wanted to see what 4-bit programming looks like. It’s not the lower limit, though: Motorola made a 1-bit MCU for PLC-style control applications. Here’s the programmer’s guide. Here’s a derivative with a video of it doing basic tasks on a board. So, there are your lower limits if you want the simplest or most-constraining MCU. I mean, there’s also No Instruction Set Computing (NISC), where you just get primitives like ALU’s but have to synthesize your own ISA. I’m drawing the line at MCU’s since that’s still software programming. ;)

                                      1. 11

                                        Great article. I recommend reading yosefk’s article with it. I’m going to draw on it as I counter some of this one.

                                        “due to its roots in simplicity and compactness, considering that it emerged in the microcomputer era, on machines that provided (for today’s standards) severely limited capacity.”

                                        “You step down on the level of assembly language which may sound daunting, yet gives you full control over every aspect of memory layout. There are no artificial barriers between you and the machine’s architecture”

                                        This is only partly true. Its other root was in the personal preferences of Chuck Moore. That’s why it’s an 18-bit, stack-oriented design in an era of 8/16/32/64-bit stack and register machines. An 18-bit stack on a 32-64-bit RISC machine is neither the simplest nor most efficient way to do things. Even if aiming for low gate counts, the billions a year in 8-16-bit MCU’s sold tell me there are people getting by with less. The constructs themselves aren’t close to how we think about a lot of operations. Languages like LISP and Nim have metaprogramming that lets us write constructs close to how we think, which get turned into efficient primitives we can still understand. They have some of the benefits of Forth without the downside of putting square pegs into round holes.

                                        So, Forth is the result of both squeezing what one can out of constrained hardware/software, as the author says, and arbitrary decisions by Chuck Moore. It’s easier to understand some of it when you remember that one person just liked doing it that way, refused to adopt methods from others, and tried to force everything through his limited tools. He’s actually like a for-profit, industrial version of the demoscene. It’s cool seeing what he does. It’s also objectively worse than what others are doing along a lot of metrics. Especially hardware, where his tools can’t handle modern, deep-sub-micron designs with their speed, size, cost, and energy benefits.

                                        So, I default to thinking that future work in Forth should attempt to isolate and ditch the arbitrary restrictions, to build a similar model of simplicity for current CPU’s or hardware design. A modern CPU is 64 bits with registers and stacks, several levels of cache, SIMD, multicore, maybe hardware accelerators, memory isolation, and VM support. What does the simplest implementation of that look like in software? For hardware, it has to be FSM’s that are synthesized and resynthesized in a series of NP-hard problems, on manufacturing processes that have up to 2,500 design rules at 28nm. One problem even requires image recognition on the circuits themselves before redrawing them. What does the simplest EDA tool for that look like? Answering such questions gets us the benefits of Moore’s philosophy without the unnecessary baggage.

                                        Hint: Microcode and PALcode. They’re like the ISA version of defining Forth functions. I want microcode I can customize with open tools in every CPU. :)

                                        “Imagine being able to debug device drivers interactively”

                                        This is an argument for interpreters, not Forth itself. Forth is one possibility. An interpreter for a subset of C or superset of assembly (eg Hyde’s HLA) w/ live debugging and code reloading is also possible. I’ve thought about building one many times. I’d be shocked if there weren’t already a dozen examples.

                                        “You can’t have both a language that provides direct low-level access to the machine and that at the same time give you all advantages of a high-level environment like automatic memory management.”

                                        These languages exist. They’re safe, systems languages that allow inline assembly. There’s also typed and macro assembly languages. Recently, there’s work on type systems to know combinations of them behave as expected. You can’t do just any arbitrary thing. You can mix approaches where you want, though.

                                        “It is satisfying and reassuring to be able to fully understand the code that is generated from your sources”

                                        Wirth showed you can do this with a language built for human understanding. His languages are simple enough that students design compilers for them. One could do an interpreter even easier. It’s pretty clear what object code will come out of it. And a little discipline can prevent things like cache misses. Worst case, you just look at the assembly of each module to see if you want to tinker with it. As with any other goal, it’s possible that Forth takes simplicity too far. I thought Wirth did that, too. I think there’s tradeoffs to be made between simplicity and complexity. Forth philosophy, like Wirth’s, will sacrifice everything else to simplicity. I’m fine with a more complex implementation of a language if it lets me express myself faster, safer, and so on with efficient code coming out.

                                        “But why do so many insanely complex protocols and architectures exist?”

                                        The author is right that a lot of complexity is unnecessary. The author also writes as if all complexity is unnecessary. This is incorrect: much of it is intrinsic to the software requirements, as Brooks wrote. Even a simple design becomes more complex once you handle stuff like error handling, software updates, backups, and security mitigations. Even Forth itself is more complex than a native alternative with similar primitives and macros, since it has an interpreter/compiler/updater built in. Forth’s people think that complexity is justified over bare metal since it gives them the benefits in the article. Well, that’s what we’re saying about a lot of this other complexity in CPU’s, OS’s, and languages.

                                        “It means to drop the bells and whistles, the nice user interfaces”

                                        User adoption, onboarding, and support are apparently not a thing in the Forth world. Both market share and usability studies showed GUI’s were superior to text for common applications for most users. So, we should give them what they want, do what works, or however you want to look at it. If we don’t need a GUI, then we shouldn’t implement one, to reduce complexity. There are plenty of terminal users, and services that don’t need GUI’s. That’s not GUI’s being inherently bad like the author argues, though.

                                        “What you reuse of a library is usually just a tiny part, but you pay for all the unused or pointless functionality and particularly for the bugs that are not yours.”

                                        Author massively overstates real problems that come with bringing in dependencies. Lots of libraries are simple to use and easy to swap out. The OpenBSD and Suckless camps regularly crank out simple implementations of software. Even the complex ones like Nginx often have guides that make installing and using them simple. The complexity is dealt with by somebody else. Likewise with these hosting providers that make it simple to get some services running at almost no cost.

                                        I imagine many businesses and products wouldn’t happen at the rate and cost they do if each developer spent tons of time rebuilding compatible networking stacks, web servers, and browsers from the ground up with deep understanding of them. Nah, they just install a package, read its guide, and Get Shit Done. You could say we opponents of Do It Yourself just like to Get Shit Done in Whatever Way Works. If you do API’s and modularity right, you can always improve later on anything you had to keep shoddy to meet constraints.

                                        1. 3

                                          Forth was the first programming language I encountered. I’ve rarely used it (debugged from the OpenBoot PROM once, wrote some minor things in Quartus on the Palm Pilot) since learning it as a child in 1978. There’s a joke that if you’ve seen one Forth, you’ve seen one Forth. This rings true because there’s nearly nothing to Forth but a couple of primitives everyone more or less agrees upon, a standard everyone ignores, a philosophy, and the problem at hand. The solution is to build up the language until it becomes a DSL for that specific problem, as understood by that particular programmer, and it becomes weird.

                                        1. 4

                                          Wait, we have a Chicken developer here. Why speculate… Hey C-Keen, how would you briefly pitch Chicken vs other Schemes to someone that already knows a little about CL’s or Schemes?

                                          EDIT: There is an elevator pitch on the site but I was just curious if C-Keen would say anything different.

                                          1. 9

                                            There are some technical differences in the compiler design, for example: CHICKEN uses Cheney-on-the-MTA CPS compilation to implement Scheme in C. But for the user, the most outstanding difference is that it generates host binaries that are easy to distribute.

                                            With CHICKEN 5, these binaries, as well as core itself, are built reproducibly. Cross-compilation is also a feature of the system.

                                            Because CHICKEN compiles to C, FFI into C is really really easy and there’s schemely syntax support for doing so. Wrapping your favourite library becomes an easy task allowing an explorative approach to understanding your problem while using your external library from the interpreter.

                                            In the previous version, a lot of external modules providing extra functionality (called eggs) have been written by CHICKEN users. For the next release, the most important and most-used ones are already ported or a work in progress.

                                            CHICKEN Scheme has a small but newcomer-friendly and lively community. You can easily reach us in #chicken on freenode or via our mailing lists.

                                            From the language point of view, the usual Scheme vs. CL comparisons apply (lexical scope, continuations, hygienic macros, yadda yadda yadda).

                                            There are many more differences, and I have glossed over a lot of things. Let me know if there’s some special topic you’d like to know more about.

                                            1. 1

                                              Thanks!

                                              1. 1

                                                Thanks, that is informative.

                                                Exploring C libraries from a REPL is really cool.

                                            1. 2

                                              I am interested in personal opinions about CHICKEN vs Racket. I want to get into one of them, but I am not sure which one. I am looking at them from the point of view of someone who likes developing web sites and apps. Can anyone share some of their experiences with me?

                                              1. 10

                                                Caveat: I’m a CHICKEN user.

                                                Racket is a kitchen-sink/batteries-included kind of Scheme that compiles to bytecode that runs in a virtual machine. It’s got the largest Scheme community and ecosystem by far. It seems to excel in GUI in particular. It also has its own varieties like Typed Racket and Lazy Racket, which are quite neat. (You could argue that Racket is a separate dialect of Scheme at this point, as it doesn’t exactly follow the RnRS.)

                                                CHICKEN is a much more minimal Scheme dialect that compiles to C. It’s fast and portable, and the compiled applications are very easy to deploy elsewhere, given you bundle libchicken.so with the executable (or statically link it). It has a very clean C FFI. It implements most of R5RS with growing R7RS support.

                                                Honestly, if you like developing web apps, I’d personally recommend Racket since it has a sizable and mature codebase for web dev, mostly using a sublanguage called Insta.

                                                1. 5

                                                  On top of the other comment, Racket has the advantage of How to Design Programs being written for it.

                                                  1. 2

                                                    What does this mean?

                                                    1. 5

                                                      The book How to Design Programs is written by Racket authors and uses Racket throughout.

                                                      1.  

                                                        Ooh, thanks for the context! :)

                                                1. 3

                                                  While I don’t think I agree that it’s a good idea, note that the RISC-V ISA also allows division by zero, producing all-bits-set instead of a trap. (See the commentary in section 6.2 here for their rationale.)

                                                  1. 2

                                                    “We considered raising exceptions on integer divide by zero, with these exceptions causing a trap in most execution environments. However, this would be the only arithmetic trap in the standard ISA (floating-point exceptions set flags and write default values, but do not cause traps) and would require language implementors to interact with the execution environment’s trap handlers for this case. Further, where language standards mandate that a divide-by-zero exception must cause an immediate control flow change, only a single branch instruction needs to be added to each divide operation, and this branch instruction can be inserted after the divide and should normally be very predictably not taken, adding little runtime overhead.

                                                    The value of all bits set is returned for both unsigned and signed divide by zero to simplify the divider circuitry. The value of all 1s is both the natural value to return for unsigned divide, representing the largest unsigned number, and also the natural result for simple unsigned divider implementations. Signed division is often implemented using an unsigned division circuit and specifying the same overflow result simplifies the hardware.”

                                                    1. 1

                                                      Does it also set a flag? If so, that seems perfectly reasonable. The value returned shouldn’t matter as long as you can use out-of-band info to check for divide by zero. Although, I suppose you could just check for zero before the divide… Hmm. So I guess really it doesn’t matter at all. It makes sense to have the result be whatever is easiest to implement.

                                                      1. 1

                                                        It appears it just returns the special value with no flags or traps. You have to explicitly check for it on every division. They claim this is to simplify circuitry.
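                                                        For anyone who wants to see those semantics concretely, here is a toy model of the RV32 divide rules as I read the quoted rationale (illustrative Python, not actual RISC-V code; the function names are my own):

                                                        ```python
                                                        # Toy model of RISC-V RV32 integer division semantics, illustration only.
                                                        MASK = 0xFFFFFFFF  # 32 bits, all set

                                                        def divu(a, b):
                                                            # Unsigned divide: dividing by zero yields all bits set (2^32 - 1).
                                                            if b == 0:
                                                                return MASK
                                                            return (a // b) & MASK

                                                        def div(a, b):
                                                            # Signed divide: dividing by zero also yields all bits set, i.e. -1.
                                                            if b == 0:
                                                                return -1
                                                            # RISC-V truncates toward zero; Python's // floors, so fix signs by hand.
                                                            q = abs(a) // abs(b)
                                                            return -q if (a < 0) != (b < 0) else q
                                                        ```

                                                        The caller has to test the denominator (or the result) itself, which is exactly the single, predictably-not-taken branch the spec's commentary describes.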

                                                  1. 3

                                                    I wrote a long rant on this same subject and am using newLISP instead. It’s great.

                                                    1. 3

                                                      You’d probably get more people if you wrote a list of why newLISP is a great shell language with examples instead of a rant. It certainly looked neat when you last mentioned it. Here are some quotes from the FAQ for anyone interested:

                                                      “newLISP is a LISP-like scripting language for doing things you typically do with scripting languages: programming for the internet, system administration, text processing, gluing other programs together, etc. newLISP is a scripting LISP for people who are fascinated by LISP’s beauty and power of expression, but who need it stripped down to easy-to-learn essentials.

                                                      …pragmatic and casual, simple to learn without requiring you to know advanced computer science concepts. Like any good scripting language, newLISP is quick to get into and gets the job done without fuss… newLISP has a very fast startup time, is small on resources like disk space and memory and has a deep, practical API with functions for networking, statistics, machine learning, regular expressions, multiprocessing and distributed computing built right into it, not added as a second thought in external modules.”

                                                      1. 3

                                                        You’d probably get more people if you wrote a list of why newLISP is a great shell language with examples instead of a rant

                                                        You mean like this?

                                                        BTW: I do not care if you use newLISP. I’m not interested in “getting more people”. Use it if you want. Or not (it’s your loss, not mine). I’m just sharing a tip.

                                                        I was, however, interested in writing a long rant, and I enjoyed the process thoroughly. It was very cathartic. :D

                                                    1. 13

                                                      Participating in the annual “queer days” festivities.

                                                      1. 1

                                                        And of course someone is upset about this and marks it as spam.

                                                        1. 2

                                                          It’s sometimes hard to tell whether people just disagree or are haters. This is a thread about literally anything we might be doing this weekend. Someone marked that as spam. Clearly a hater. Ignore them and enjoy your weekend. ;)

                                                      1. 8

                                                        “A practical and portable Scheme system”

                                                        From the website

                                                        (The post and the linked email didn’t give me any idea what Chicken was.)

                                                        1. 6

                                                          The way I remember it is it’s the Scheme that compiles to C for speed and portability. silentbicycle posted this interview with the author. aminb added someone’s blog posts on interesting work.

                                                          1. 5

                                                            I believe its package ecosystem is better than other Schemes’ as well.

                                                          2. 1

                                                            Thanks! I was over here saying “wtf is chicken”

                                                            1. 2

                                                              Cool. Thanks for sharing!

                                                            1. 65

                                                              This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges based on the tone).

                                                              Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                              Speaking of threat models: it’s important to start from them, and doing so explains most of the misconceptions in the post.

                                                              • Usable security for the most people possible. The vast majority of people on the planet use iOS and Android phones, so while it is theoretically true that Google or Apple could be forced to subvert their OSes, that is outside the threat model; something like that would be highly visible, a nuclear option so to speak.
                                                              • Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.
                                                              • Centralization is a feature. Moxie created a protocol and a service used by billions and millions of people respectively that provide real, measurable security for a lot of people. The fact is that doing all this in a decentralized way is something we don’t yet know how to do, or doing so invites tradeoffs that we shouldn’t make. Federation, at the moment, either leads to insecurity or to ossification of the ecosystem, which in turn leads to a useless system for real users. We’ve had IRC since the 1990s; ever wonder why Slack ever became a thing? Ossification of a decentralized protocol. Ever wonder why OpenPGP isn’t more widespread? No one cares about security in a system where usability is low and the design is fragile. Ever tried to do key rotation in GPG? Even cryptographers gave up on that. Signal has that built into the protocol.

                                                              Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.

                                                              1. 34

                                                                Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                                In this article, resistance to governments comes up constantly as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly raised the baseline, the tool has been promoted for resisting governments, and articles like that could increase the perception that it was secure against them.

                                                                This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about, on centralization, control, and dependence on cooperating with surveillance organizations, stand out even more due to the inconsistency. I’d have thought he’d make signed packages for things like F-Droid sooner if he’s so worried about that stuff.

                                                                1. 4

                                                                  A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation states” and have capabilities that are profound; like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” affair, there are a lot of “nation states” forced to rely on some really low-quality stuff.

                                                                  I think Greg Conti in his “On Cyber” setup depicts it rather well (sorry, don’t have a copy of the section in question) and that a more reasonable threat model of capable actors you do need to care about is that of Organized Crime Syndicates - which seems more approachable. Nation State is something you are afraid of if you are political actor or in conflict with your government, where the “we can also waterboard you to compliance” factors into your threat model, Organized Crime hits much more broadly. That’s Ivan with his botnet from internet facing XBMC^H Kodi installations.

                                                                  I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on - with a dash of “Confused”. The ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives - they are doing things there …) aside, the odds of any other app store - F-Droid, or the ones from Samsung, HTC, Sony et al. - being completely owned by much less capable actors are way, way higher. Signal (perhaps a signal-to-threat ratio?) performs a good enough job of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.

                                                                2. 18

                                                                  Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                                                  And yet, Signal is advertising with the face of Snowden and Laura Poitras, and quotes from them recommending it.

                                                                  What kind of impression of the threat models involved do you think does this create?

                                                                  1. 5

                                                                    Who should be the faces recommending signal that people will recognize and listen to?

                                                                    1. 7

                                                                      Whichever ones are normally on the media for information security saying the least amount of bullshit. We can start with Schneier given he already does a lot of interviews and writes books laypeople buy.

                                                                      1. 3

                                                                        What does Schneier say about signal?

                                                                        1. 10

                                                                          He encourages use of stuff like that to raise the baseline, but not for stopping nation states. He has also constantly blogged about the attacks and legal methods they used to bypass technical measures. So his reporting was mostly accurate.

                                                                          We counter him here and there, but his incentives and reputation are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.

                                                                  2. 18

                                                                    We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol.

                                                                    I’m sorry, but this is plain incorrect. There are many expansions on IRC that have happened, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                                                    If you actually look at the protocols? Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (such as transliterating a subset of emojis to plain text, which results in batshit-crazy edge cases).

                                                                    If you have a free month, try writing a slack client. Enlightenment will follow :P

                                                                    1. 9

                                                                      I’m sorry, but this is plain incorrect. There are many expansions on IRC that have happened, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                                                      Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.

                                                                      1. 12

                                                                        And yet everyone is using Slack.

                                                                        1. 14

                                                                          There are swathes of people still using Windows XP.

                                                                          The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.

                                                                          The fact that people are using something tells you nothing about how Good that thing is.

                                                                          At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.

                                                                          At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)

                                                                          1. 9

                                                                            At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])

                                                                            The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.

                                                                            To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.

                                                                            If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.

                                                                            1. 5

                                                                              IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good; they get good because of investment. The reason Slack is hip/cool and popular, and IRC is not, is that the investment class decided that.

                                                                              It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.

                                                                              1. 4

                                                                                There were companies making money off chat and IRC. They just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stayed on what didn’t work in long-term or for huge audience.

                                                                                1. 1

                                                                                  IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, whereas things like IRC were designed for use and openness.

                                                                                  1. 2

                                                                                    My point was there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with money or making money off it didn’t accomplish what Slack did for some reason. It would help to understand why that happened. Then, the IRC-based alternative can try to address that from features to business model. I don’t see anything like that when most people that like FOSS talk Slack alternatives. Then, they’re not Slack alternatives if lacking what Slack customers demand.

                                                                                    1. 1

                                                                                      Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently; see cryptocurrencies). Note that most open and decentralized tech of the past was government-funded and therefore didn’t face business pressures. This freed designers to optimize for other concerns instead of business ones, as Slack does.

                                                                              2. 4

                                                                                To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?

                                                                                The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.

                                                                                1. 6

                                                                                  Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.

                                                                                  1. 3

                                                                                    That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.

                                                                                    1. 2

                                                                                      I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.

                                                                              3. 3

                                                                                Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.

                                                                                1. 2

                                                                                  The fact that people are using something tells you nothing about how Good that thing is.

                                                                                  I have to disagree here. It shows that it is good enough to solve a problem for them.

                                                                            2. 1

                                                                              Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.

                                                                              I’d dispute that. People who become interested in Signal seem much more prone to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and often they are free software enthusiasts or people who are fed up with Google and surveillance.

                                                                              1. 1

                                                                                More likely, sure, but that doesn’t mean that many of them actually clear the threshold of effort that alternative distribution requires.

                                                                              2. 0

                                                                                Ossification of a decentralized protocol.

                                                                                IRC isn’t decentralised… it’s not even federated

                                                                                1. 3

                                                                                  Sure it is, it’s just that there are multiple federations.

                                                                              1. 8

                                                                                For those wanting the rationale, this is in the same Pony article:

                                                                                “From a practical perspective, having division as a partial function is awful. You end up with code littered with trys attempting to deal with the possibility of division by zero. Even if you had asserted that your denominator was not zero, you’d still need to protect against divide by zero because, at this time, the compiler can’t detect that value dependent typing. So, as of right now (ponyc v0.2), divide by zero in Pony does not result in error but rather 0.”
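                                                                                In other words, Pony’s division is total. A rough sketch of the behavior (illustrative Python, not Pony; div_total is a made-up name):

                                                                                ```python
                                                                                def div_total(a, b):
                                                                                    # Pony-style total division: divide by zero returns 0 instead of raising.
                                                                                    return 0 if b == 0 else a // b
                                                                                ```

                                                                                The upside is that callers need no try guards; the downside is that a zero result is indistinguishable from a genuine quotient of zero.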

                                                                                1. 5

                                                                                  I’m going to be (when I have time) writing a longer and more detailed discussion of the issue.

                                                                                  1. 7

                                                                                    I’m sure many of us would find it interesting. I have a total mental block on divide by zero, given it’s always a bug in my field. This thread is refreshingly different. :)

                                                                                    1. 7

                                                                                      I’ll post it on lobste.rs when it’s done and I’ve had several people review it and give feedback.

                                                                                      1. 3

                                                                                        Thanks!

                                                                                  2. 4

                                                                                    This is very true. The fact that division by zero causes us to write so many guards can cause major issues.

                                                                                    I wonder, though: wouldn’t explicit errors be better than the implicit, unexpected results this unusual behavior may cause?

                                                                                    1. 1

                                                                                      I guess if you write a test before writing code, it should be possible to spot the error either way?

                                                                                      1. 2

                                                                                        It would be good to push this to the type system exactly so that we don’t have to remember to test for it.
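                                                                                        One hypothetical way to encode that guarantee in types (illustrative Python; NonZero and safe_div are made-up names, and what a real language would do may differ):

                                                                                        ```python
                                                                                        class NonZero:
                                                                                            # Holds a value proven nonzero at construction, checked exactly once.
                                                                                            def __init__(self, n: int):
                                                                                                if n == 0:
                                                                                                    raise ValueError("denominator must be nonzero")
                                                                                                self.n = n

                                                                                        def safe_div(a: int, b: NonZero) -> int:
                                                                                            # No zero check needed here: the type carries the proof.
                                                                                            return a // b.n
                                                                                        ```

                                                                                        The check moves to one place (construction) instead of being scattered across every division site.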

                                                                                        1.  

                                                                                          Totally, but I am saying that there are specific cases where this may still throw people off and cause bugs - even when the typing is as expected here.

                                                                                        2.  

                                                                                          Sure… if you write a test…