1. 1

    I can’t seem to get any of the examples to load without errors.

    1. 1

      Replied on the issue.

    1. 2

      It would be interesting to see a comparison of this with the single-IORef-with-atomicModifyIORef pattern, which has been shown in the past to outperform many specialised concurrent structures - it turns out that using purity and mutability gives you excellent ‘mutable’ concurrent structures.

      1. 4

        Australians have been doing this for years, thanks Telstra!

        1. 6

          It hasn’t improved much IMO.

          Haskell is normally extremely strong when it comes to well-designed and reusable abstractions. Unfortunately that appears to be more or less absent from the Haskell crypto libraries. There are a few different monad classes for pseudorandom number generation, for example, and all of them are overcomplicated. I often end up just rolling my own (monad, not PRNG) when I need clean random number generation.

          There are a few decent libraries available for sundry concrete cryptographic tasks, but well below par for the Haskell ecosystem.

          In fairness, cryptography libraries are bad across almost all languages, but I expect more from Haskell.
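The kind of hand-rolled replacement alluded to above can be tiny. A hypothetical sketch using only base (the names Rand and nextWord are made up, and the LCG step is for illustration only, not anything cryptographically sensible):

```haskell
import Data.Word (Word64)

-- A minimal hand-rolled random monad: the state is just the PRNG seed,
-- threaded through purely.
newtype Rand a = Rand { runRand :: Word64 -> (a, Word64) }

instance Functor Rand where
  fmap f (Rand g) = Rand $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Rand where
  pure a = Rand $ \s -> (a, s)
  Rand f <*> Rand g = Rand $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad Rand where
  Rand g >>= k = Rand $ \s -> let (a, s1) = g s in runRand (k a) s1

-- One step of a 64-bit linear congruential generator (Knuth's MMIX
-- constants); emphatically NOT suitable for cryptography.
nextWord :: Rand Word64
nextWord = Rand $ \s ->
  let s' = s * 6364136223846793005 + 1442695040888963407 in (s', s')
```

Sequencing with do-notation then threads the seed automatically, which is about all most code wants from a “random monad”.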

          1. 4

            Is it fair to suggest that Haskell expects more from you, too? I mean, you’re certainly welcome to contribute.

            1. 3

              In fairness, cryptography libraries are bad across almost all languages, but I expect more from Haskell.

              Why? Does Haskell have any special features that make it fundamentally easier to correctly implement cryptography algorithms compared to other high-level languages? Parametricity doesn’t particularly help when all your routines map tuples of integers to tuples of integers.

              1. 4

                Does Haskell have any special features that make it fundamentally easier to correctly implement cryptography algorithms compared to other high-level languages?

                Yes, e.g. QuickCheck, QuickSpec, LiquidHaskell, etc.

                1. 4

                  These get you some of the way, but there’s a whole class of side-channel attacks we have very little ability to reason about in Haskell. Timing is one example: I have no idea how to write a constant-time Integer multiplication algorithm and be sure it is actually constant time.

                  My dream for this sort of work is an inline-rust package, in the vein of inline-c, so we get memory safety but also a language which better allows timing analysis.

                  1. 2

                    inline-rust is something I want in every programming language. :)

                    I think it’s possible that a subset of Haskell in which you only use primitive types and primops (like Word32# and MutableByteArray# and so on) and can’t have any laziness anywhere (because no values are ever boxed) might be more amenable to timing analysis.

                    I’m not sure if there is a pragma or Language setting in GHC that can automatically enforce that everything in a given file uses only primitives and primops.
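For a concrete taste of the problem: the classic branch-free comparison idiom can be written in ordinary Haskell, but nothing in the language guarantees the compiled code runs in constant time, since laziness, boxing and GC can all reintroduce data-dependent behaviour. A hypothetical sketch (ctEq is a made-up name):

```haskell
import Data.Bits (xor, (.|.))
import Data.Word (Word8)

-- Branch-free equality: OR together the XOR of every byte pair and
-- compare the accumulator to zero only at the end, so no branch depends
-- on where the first mismatch occurs.
ctEq :: [Word8] -> [Word8] -> Bool
ctEq xs ys =
  length xs == length ys &&
  foldl (.|.) 0 (zipWith xor xs ys) == 0
```

Even here the lists are lazy, each Word8 may be boxed, and the length check short-circuits, so this is a sketch of the idiom rather than a guarantee - exactly the gap an unboxed, primop-only subset would aim to close.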

                    1. 2

                      Check out Jasmin, a language for implementing high-assurance crypto. Once again, it’s achieved with a language quite opposite to Haskell’s high-level style.

                      1. 2

                        That would be cool indeed—but I can already viscerally imagine the impact on build times from invoking the Rust compiler via Template Haskell… :)

                      2. 3

                        In 2017, QuickCheck is by no means specific to Haskell. Nowadays you can find property-based testing libraries and frameworks for just about any language.

                        As for LiquidHaskell, the real verification is performed by an external SMT solver. So again I don’t think this constitutes a Haskell-specific advantage.

                      3. 3

                        Because Haskell libraries are, in general, much higher quality than libraries in other ecosystems I use. Correctness also isn’t the concern — I have little doubt that the crypto libraries are correct. The concern is usability. Most of the Haskell crypto libraries are clumsy, typically because they just wrap some C library without providing any additional abstractions.

                        1. 2

                          So you are confident the underlying C library is correct?

                    1. 2

                      I’ve done a very similar thing using Alpine and multi-stage builds. Using alpine:edge also gives you upx, which lets you compress your binary after the build before copying it into the next stage, and using ldd you can figure out exactly which libraries are needed for dynamic linking, so you can end up with some really tiny images. Also, using GHC’s split objects you can build even smaller binaries. Maybe I should write a post about that sometime soon, once we actually put this into production…

                      1. 1

                        Please do create that blog post! I’ve not used upx, got any links that explain that?

                      1. 1

                        Can someone explain what bridgeOS is?

                        1. 2

                          eOS 1.0 (which was renamed BridgeOS) was driving the touch bar in 2016/2017 MacBook Pros. BridgeOS 2.0 is the OS for essentially a system management controller capable of booting the x86 CPU and controlling a set of I/O, mainly so Apple can control the security model and do things like “Hey Siri.” (It uses an A10 Fusion SoC.)

                        1. 2

                          Hmm, I should dig out my pine64…

                          1. 2

                            I was thinking the same thing. I bought it with the intention of running a media server from it (hardware accel of video encode was something that made me buy it), but found working with Linux even more painful than i’d remembered. Having another OpenBSD machine will be a pleasure.

                          1. 2

                            Nice work mate. The only thing that stood out to me was defining updateMap when insertWith exists and would be more efficient by avoiding the member lookup first.
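For anyone following along, the suggested pattern (shown here on a hypothetical counter map) is a single insertWith call in place of a member check followed by insert or adjust:

```haskell
import qualified Data.Map.Strict as Map

-- insertWith combines the new value with any existing one in a single
-- traversal of the map, instead of a member lookup plus a second pass.
bumpCount :: Ord k => k -> Map.Map k Int -> Map.Map k Int
bumpCount k = Map.insertWith (+) k 1
```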

                            1. 1

                              Thank you! I stared at insertWith and ruled it out for some reason. Ended up resorting to updateMap which I wrote for a different blog post. I wrote this while streaming (2 hours and change). I’ll upload the video if this noise reduction filter doesn’t make the audio sound watery this time.

                              Update: good catch, I’ve fixed it and replaced it with insertWith

                            1. 3

                              The exploit author Patrick W also makes an interesting bunch of OSX / MacOS security tools outside of his current employment, if you weren’t aware: www.objective-see.com

                              1. 2

                                He also has a patreon which i’m really happy to contribute to - i’ve got several of his apps installed and they do a great job of doing things like pointing out any changes to any of the locations/systems (launchd) which can be used to start malware persistently. His new LuLu firewall looks really nice, but is probably a bit too alpha for me at this stage.

                              1. 17

                                The wikipedia comparison is pretty good, but it’s still lacking. Some things that are important to me in a serialisation protocol:

                                • Binary: As soon as you open the door to text you get questions about encoding and whitespace, and it becomes very difficult to process efficiently. The edge cases will haunt you in a way they never will with a binary protocol.
                                • Self-describing: It should be possible for a program to read an arbitrary object (unlike XDR, Protocol Buffers, etc)
                                • Efficiency: JSON/MessagePack/XML are out, but so is DER (ASN.1) because integers are variable length
                                • Explicit references/cycles (e.g. plain JSON, but not Cap’n Proto or PHP’s serialize)
                                • Lots of types: ASN.1 has the right idea here, but it still falls short.
                                • Unambiguous: Fuck MessagePack. Seriously.

                                On the subject of types: k/q supports booleans, guids, bytes, shorts(16bit), ints(32bit), longs(64bit), real(32bit), float(64bit), characters, symbols, timestamps, months, dates, timespans (big interval), minutes, seconds, times, all as arrays or as scalars. It also supports enumerated types, plain/untyped lists, and can serialise functions (since the language is functional). None of the blog-poster’s suggestions can stand up to the kdb ipc/protocol, so clearly we needed at least one more protocol, but now what?

                                Something else I’m thinking about are capabilities/verified cookies. I don’t know if these can/should be encoded into the IPC (I tried this for a while in my ad server), but there was a clear advantage in having the protocol decoder abort early, so maybe a semantic layer should exist where the programmer can resolve references or cookies (however if you do it as a separate pass of the data, you’ll have efficiency problems again).

                                I think that if you can get away with an existing format, you should use it because you get to inherit all of the tooling that goes with it, but dogma that suggests serialisation is a solved problem is completely and obviously wrong.

                                1. 9

                                  Cycles are hard to use safely. In my opinion, it’s much better to encode them explicitly when you need them (not that often) than to include them in the format itself. Other than that, I agree completely.

                                  It is also important for the format to have a canonical form for when you deal with cryptography. Also, not having various lengths of numeric data types that are all treated differently is a great boon for current scripting languages.

                                  Have you seen RFC 7049: Concise Binary Object Representation (CBOR)? It has JSON semantics with additional support for chunked transfers and sub-typing (interpret this string as a date or something).

                                  RFC 8152: CBOR Object Signing and Encryption (COSE) also sounds promising. I believe that it’s time for DER and the whole ASN.1 world to go.
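To give a flavour of the encoding CBOR uses: the initial byte packs a major type into the top three bits and either a small immediate value or a “how many length bytes follow” marker into the bottom five. A sketch of the unsigned-integer case (major type 0) as specified in RFC 7049; encodeUInt is a made-up name:

```haskell
import Data.Bits (shiftR)
import Data.Word (Word8, Word64)

-- Major type 0 (unsigned integer): values 0..23 fit in the initial byte
-- itself; larger values use marker bytes 0x18/0x19/0x1a/0x1b followed by
-- 1/2/4/8 big-endian length bytes, per RFC 7049.
encodeUInt :: Word64 -> [Word8]
encodeUInt n
  | n < 24          = [fromIntegral n]
  | n < 0x100       = 0x18 : be 1 n
  | n < 0x10000     = 0x19 : be 2 n
  | n < 0x100000000 = 0x1a : be 4 n
  | otherwise       = 0x1b : be 8 n
  where
    -- the k big-endian bytes of x
    be :: Int -> Word64 -> [Word8]
    be k x = [ fromIntegral (x `shiftR` (8 * i)) | i <- [k - 1, k - 2 .. 0] ]
```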

                                  1. 7

                                    I was surprised not to see CBOR in the list of formats, it’s actually an incredibly elegant encoding which can be efficiently decoded and provides a huge amount of flexibility at the same time (and is sensible enough to leave space to add more things if they become necessary). Haskell’s serialise and cborg libraries have adopted it, and I hope these will become the canonical serialisation format for Haskell data, replacing the really ad hoc and less efficient encoding currently offered by the binary and cereal packages.

                                    CBOR is a protocol done right, with standardisation and even an IANA registry for tags and other stuff. It’s also part of CoAP, the REST-for-IoT-but-efficient standard (I think - I’m not familiar enough with exactly what CoAP does).

                                    Edit: video describing the protocol and how it’s likely to be used in Haskell https://youtu.be/60gUaOuZZsE

                                    1. 0

                                      What do you mean by “JSON semantics”? JSON has really terrible semantics, especially around numbers.

                                      1. 5

                                        I should have said JSON-compatible semantics.

                                        Most of the types in CBOR have direct analogs in JSON. All JSON values, once decoded, directly map into one or more CBOR values.

                                        The conversion from CBOR to JSON is lossy. CBOR supports limited-size integers, arbitrary-precision integers and floats. It also has support for NaN and Infinities.

                                        1. 1

                                          By this definition, is there any serialization format that is not JSON-compatible?

                                    2. 1

                                      Always great to see another k programmer around. Been several years but man that was a trip.

                                    1. 2

                                      aircraft, helicopters, container ships: You don’t need to know theory to use them. Just turn on the engine and GO!!!

                                      1. 1

                                        A fairer comparison would be numbers, vectors and matrices: you certainly need to know a lot of theory to fly, but very little to make use of these objects.

                                      1. 2

                                        The results of this talk (and paper) are truly amazing. Can’t wait to see it get some good use.

                                        1. 5

                                          Google is stopping one of the most controversial advertising formats: ads inside Gmail that scan users’ email contents. The decision didn’t come from Google’s ad team, but from its cloud unit, which is angling to sign up more corporate customers.

                                          You think they’d do it out of decency… nope.

                                          1. [Comment from banned user removed]

                                            1. 1

                                              It’s so incredibly evil that it’s amazing nobody seems to care.

                                              It’s not that nobody seems to care; it’s that people embrace it if it makes their lives more convenient.

                                            2. 6

                                              At this point in time we have already collected enough information about our customers through their most personal emails, and have noticed new emails aren’t adding anything to our models any more.

                                            1. 29

                                              tl;dr: wants generics

                                              1. 11

                                                Maybe we should add a new tag, “go-generics”… :-)

                                                1. 5

                                                  That’s not really true. He wants some way to avoid writing the same code multiple times. Generics is a solution to that, but not the only possible solution.

                                                  1. 10

                                                    it’s half the truth, i’ll give you that :)

                                                    i’m just a bit annoyed by the go complaints, it’s the ever repeating same. with go you sometimes have to copy some code, but that can be minimized if done good. go sometimes is verbose, but that’s imho the tradeoff for the small syntax and orthogonality.

                                                    last but not least: maybe just use another tool if go doesn’t work for you ;)

                                                    1. 7

                                                      i’m just a bit annoyed by the go complaints, it’s the ever repeating same. with go you sometimes have to copy some code

                                                      Copy code, copy complaints. The price you pay for a simple language.

                                                      last but not least: maybe just use another tool if go doesn’t work for you ;)

                                                      I can’t speak for the author’s situation, but sometimes these posts are born more from others imposing solutions. There are plenty of good projects that, if you want to contribute to them, you have no choice but to use what they chose. And, more often in the case of Java, your employer might force a technology. These blog posts are sometimes a desperate plea for help.

                                                      1. 2

                                                        And, more often in the case of Java, your employer might force a technology. These blog posts are sometimes a desperate plea for help.

                                                        i’m aware of that, but then, there are so many posts with say “generics are the solution”, that i wonder why the author did have to include that point. the language isn’t going to change for the foreseeable future, and it’s better to just show others how to solve these problems in the current boundaries of the language (imho). it’s just more productive and helpful.

                                                        having to use a tool you are not familiar with (and not having the time to proper study the docs) sucks, maybe this is the problem for the author of the article. i guess i’d be lost when i’d have to use c# and would complain, too :)

                                                      2. 3

                                                        i’m just a bit annoyed by the go complaints, it’s the ever repeating same.

                                                        It’s not unique to Go. People’s complaints about C++, Java, Ruby, &c. have all been more or less the same in the last ten years as far as I can tell. C++ is confusing, Java is cumbersome, Ruby crashes and uses too much memory.

                                                        As long as the languages don’t change, the complaints won’t, either.

                                                        1. 0

                                                          Only one of the languages you’ve mentioned has been wilfully ignorant of the history of language design, to the point of being proud of its ignorance.

                                                          1. 3

                                                            I keep seeing this meme repeated without sources, which bothers me - sourcing claims like this makes the difference between a cogent argument and an ad hominem attack.

                                                            1. 1

                                                              It would be good to have a more “sourced” and supported debate about Go in general. That would change the “pro” camp’s approach as well, though. No more “Go was supposed to be X so X is how it is supposed to be. If you don’t like it, don’t argue about it – use another language.”.

                                                              1. 1

                                                                Absolutely.

                                                                A good example of sourcing, for the ‘go generics’ debate:

                                                                Four proposals for generics with unacceptable tradeoffs (skip to the end of each for the summary of why they don’t work out).

                                                                1. 1

                                                                  These make me wonder. Many languages have parameterized types; and in usable forms, to boot. The “unacceptable tradeoffs” must be acceptable in those languages – how is that? To say that those languages are bad, or do not achieve Go’s goals, is just assuming the conclusion.

                                                                  Only two of these proposals (2010-06 and 2013-10) have sections set aside for comparisons to the literature, and they are thin. The first one mentions only C++, the second mentions C (?), C++ and Java. There’s a lot of other systems out there, though; many are decades old, like ML’s, and work quite differently from those of C, C++ and Java. More recent languages – like Haskell and Scala – provide further examples to work from.

                                                                  These proposals by themselves – which of course does not make a conclusive case – do seem to indicate that Go is being developed with little reference to much of the work done in language design, and in a way that’s deliberately narrow. First, they don’t mention that work; and second, many of the reasons given for rejecting this or that parameterized types proposal – like some of the syntax complaints – seem quite finicky and particular (and eminently solvable).

                                                            2. 1

                                                              But let’s say it weren’t – people would still complain about it, and that wouldn’t be any kind of special discrimination against that language. People who are annoyed by complaints about their favourite language would do well to consider how other languages are treated before bringing a case before us.

                                                    1. 2

                                                      How is IO a monoid? It’s not even the right kind, is it?

                                                      1. 3

                                                        IO a is. It was proposed back in 2014 (Monoid instance for IO and discussion) and added in GHC 8:

                                                        instance Monoid a => Monoid (IO a) where
                                                            mempty = pure mempty
                                                            mappend = liftA2 mappend
                                                        

                                                        Relevant bit from Gabriel Gonzalez’s talk at LambdaConf: https://youtu.be/WsA7GtUQeB8?t=17m43s. That whole talk is pretty great by the way.
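A quick usage sketch (assuming GHC ≥ 8.0, where this instance ships in base): mappending two IO actions runs both in order and combines their results.

```haskell
-- Both actions run in sequence; the String results are concatenated
-- because String is itself a Monoid.
greeting :: IO String
greeting = mappend (pure "Hello, ") (pure "world")
```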

                                                        1. 2

                                                          Oh, sure, IO a for a that forms a Monoid would be monoidal. Isn’t that just the case automatically by dint of being an applicative (since it’s a monad)? That’s how it tends to work in Scala.

                                                          1. 1

                                                            Not really, at least that’s not the case in Haskell as far as I know. You’re right that since it’s a monad, it’s also an applicative functor (and a functor too) but you don’t need Monoid f for a Functor f instance, or for Applicative f for that matter. Check out Typeclassopedia if you like :)

                                                            1. 2

                                                              No, I mean that Monoid a and Applicative m should imply Monoid (m a) without needing a special case for m = IO

                                                              1. 1

                                                                I don’t think that there are any laws that ensure that with Monoid a and Applicative m you automatically get Monoid (m a), though I can’t think of any counterexamples off the top of my head. Also, defining that instance means you can’t (or at least shouldn’t) define Monoid instances for anything which is a member of both classes. Generally, instances of that sort (automatic based on a type being in these specific classes) seem like a good idea but actually aren’t.

                                                                1. 2

                                                                  The applicative laws are exactly the monoid laws. It creates an associative operation around an associative operation, so the laws are satisfied. This creates a useful type Ap:

                                                                  https://hackage.haskell.org/package/reducers-3.12.1/docs/Data-Semigroup-Applicative.html

                                                                  1. 2

                                                                    It’s more like “Given Monoid a and Applicative f we can always construct a law-abiding Monoid (f a)”. We might not choose to actually have the instance reflected in code, but one is always there.

                                                                  2. 1

                                                                    Sure. I was just playing around:

                                                                    {-# language FlexibleInstances #-}
                                                                    
                                                                    import Data.Monoid
                                                                    import Control.Monad
                                                                    import Control.Applicative
                                                                    
                                                                    instance (Monoid a, Applicative m) => Monoid (m a) where
                                                                        mempty = pure mempty
                                                                        mappend = liftA2 mappend
                                                                    

                                                                    Though in practice I’d use newtype Ap f a to avoid overlapping instances.

                                                          1. 9

                                                            It’s worth keeping in mind that an IORef a is not a mutable a; it’s a mutable reference to an immutable object of type a. A possibly fairer C comparison would be a volatile int * ref where you dereference the pointer, increment the value, allocate a new location for the incremented int, write the value there, and then update the pointer to point at the new location. The most appropriate comparison, though, would use no mutation at all: sum [1..10000], which should produce code much closer to what gcc emits for the for loop, and which LLVM would actually just fold to a constant.

                                                            Using IORef actually prevents the compiler from doing many, many optimisations, because it has to maintain the correct semantics, including the ability to implement atomicModifyIORef, an incredibly useful function which allows any immutable data structure to be modified concurrently without contention. After trying many alternative implementations of a concurrent linked list, it was found that by far the most efficient was a standard Haskell list wrapped in an IORef: https://pdfs.semanticscholar.org/2f9e/cc815906c4359cb02674123a1c3b06ec735b.pdf
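That pattern (a whole immutable structure behind one mutable cell, swapped in atomically with a retry loop) can be sketched as a lock-free stack; push and pop here are hypothetical helper names:

```haskell
import Data.IORef

-- The entire immutable list lives behind a single IORef;
-- atomicModifyIORef' installs a new head atomically, retrying on
-- contention, so no locks are needed.
push :: IORef [a] -> a -> IO ()
push ref x = atomicModifyIORef' ref (\xs -> (x : xs, ()))

pop :: IORef [a] -> IO (Maybe a)
pop ref = atomicModifyIORef' ref $ \xs -> case xs of
  []       -> ([], Nothing)
  (y : ys) -> (ys, Just y)
```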

                                                            1. 3

                                                              Looks good, but it needs an introduction explaining why you’d be interested in looking further - just giving the history tells me nothing really useful, and would be much more appropriate after a “why” section. Also, the site doesn’t work well at all on an iPhone 5 sized screen. Good idea though; I hope it continues to improve and can be a good resource to send to people interested in getting started.

                                                              1. 1

                                                                Thanks to @fcbsd, there’s a new page for BSD.

                                                              1. 11

                                                                I wonder what people with JSON parsing problems are parsing. I had an issue with slow JSON, though not terribly slow (using it to serialize my daily emails, for scale), and the first thing I did was replace it with a better format. Way faster. And I spent much less time developing the solution than I would have spent finding loops to unroll.

                                                                1. 5

                                                                  Deadline auctioning is often a use case. Though most of the advertising industry is awful and works to ~100ms auctions, a full feed is in excess of 100k requests per second… that’s an awful lot of JSON :-)

                                                                  Some people then get ‘excited’ about protobuf, but fail to do sensible benchmarking (finding it is not really meaningfully faster), or to pay attention to the fact that there is effectively only a single C++ implementation.

                                                                  1. 6

                                                                    Some people then get ‘excited’ about protobuf, but fail to do sensible benchmarking (finding it is not really meaningfully faster), or to pay attention to the fact that there is effectively only a single C++ implementation.

                                                                    I often wonder why people aren’t more excited about msgpack, which has the allure of JSON’s adhoc design, but in a compact, binary packing. But, I don’t see the adoption of it…

                                                                    1. 7

                                                                      If I’m working at that level, I tend to prefer CBOR, if only for its accepted RFC and being forged from the CoAP project.

                                                                      1. 13

                                                                        CBOR is basically a (somewhat hostile) fork of MessagePack. The author (Carsten BORmann…) appeared one day in the MessagePack GitHub issues saying he was going to submit a “slightly modified” version of a draft of the MessagePack V3 specification to the IETF under the name BinaryPack. The MessagePack community basically did not agree, because there was no consensus on the spec, especially with regard to backwards compatibility.

                                                                        There were long discussions (see https://github.com/msgpack/msgpack/issues/121, https://github.com/msgpack/msgpack/issues/128 and https://github.com/msgpack/msgpack/issues/129) which led to the (current) MessagePack V5 spec, from the original MessagePack author. In parallel, @cabo went to the IETF alone with his spec that most of the community did not support, renamed CBOR. He got it accepted because he was already well known at the IETF.

                                                                        Here are a few extra links:

                                                                        Sadly (IMO), people appear to be convinced by the IETF stamp and implement CBOR instead of MessagePack these days, most without knowing the backstory. In any case, technically, there’s no difference relevant to most users between MessagePack V5 and CBOR.

                                                                        EDIT: Well, my sentence above is not totally fair, to be honest. There are differences, because CBOR is not really BinaryPack.

                                                                        One is indefinite-length items in CBOR. The idea is: instead of specifying the length of an object at its head, you use a terminator (like a C string). You may or may not think this is a good idea (it makes things easier at encoding time, but more annoying for decoders).

                                                                        Another difference is tagged items, which allow the composition of types not specified in the spec out of any basic type, whereas MessagePack supports extensions but requires them to be represented in binary. But for JSON compatibility, you probably don’t want to use non-basic types anyway.

                                                                        Finally, there’s CDDL (https://tools.ietf.org/html/draft-greevenbosch-appsawg-cbor-cddl-10), which is a schema language for CBOR, but if you’re going to use schemas why not just go with Protocol Buffers?

                                                                        1. 2

                                                                          Interesting! I wasn’t aware of the history between the two. Thanks for summing it up.

                                                                          1. 2

                                                                            Having implemented CBOR for Lua (along with nearly all registered semantic tags), I took a look at MessagePack V5. CBOR is more consistent with its encoding scheme (a string in CBOR starts with 0x60 to 0x7F with a possible length of up to 2^64; in MP v5 it’s 0xA0 to 0xBF, 0xD9, 0xDA or 0xDB with a possible length of up to 2^32). CBOR semantic tags are very open-ended (2^64 possible values), and while you could argue about their inclusion (self-describing vs. a schema) they do fill a need (this string is a date in ISO format, for instance). MP v5’s extensions seem limited in nature—only 127 values possible.
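The consistency point can be made concrete: every CBOR item’s initial byte splits into a 3-bit major type and a 5-bit “additional information” field, so one routine decodes the head of any item. A hand-rolled Python sketch (not a real library):

```python
# Illustrative sketch: decoding the head of any CBOR item.
def decode_head(data: bytes):
    major = data[0] >> 5           # e.g. 3 = text string, 4 = array, 6 = tag
    info = data[0] & 0x1F          # 5-bit "additional information" field
    if info < 24:                  # value/length encoded directly in the head
        return major, info, 1
    widths = {24: 1, 25: 2, 26: 4, 27: 8}   # a uint8/16/32/64 follows
    n = widths[info]
    return major, int.from_bytes(data[1:1 + n], "big"), 1 + n

print(decode_head(b"\x65hello"))     # (3, 5, 1): text string, length 5
print(decode_head(b"\x79\x01\x00"))  # (3, 256, 3): length carried in a uint16
```

The same function handles integers, strings, arrays, maps, and tags, which is the uniformity being contrasted with MP v5’s per-type prefix ranges.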

                                                                            I also didn’t find the streaming nature of CBOR to be that difficult to handle on the decoding side (now handling circular references? That was interesting). But yes, for a consumer using a pre-built library for CBOR or MP, there probably isn’t much of a difference (semantic tagging and sizes notwithstanding).

                                                                            I personally find CDDL interesting, as not everyone wants to be beholden to the whims of Google.

                                                                          2. 3

                                                                            There’s been a fair amount of work towards replacing the default Haskell binary package with one based on CBOR, which promises some nice improvements in encoding and decoding performance as well as message size. (It’s an interesting contrast to the soon-to-be-released compact regions support in GHC 8.2, which allows essentially free serialisation of Haskell types by sending the program’s in-memory representation directly, for zero serialisation cost at the expense of size.)

                                                                            1. 1

                                                                              Oh nice! I wasn’t aware of CBOR! I’ll definitely look into it!

                                                                            2. 2

                                                                            Well, the excitement is extinguished pretty fast when you find that the implementation libraries from the official upstream cannot decode one another’s output and that, despite what it says on the website, it is neither fast nor particularly compact.

                                                                            3. 1

                                                                            fail to do sensible benchmarking (finding it is not really faster)

                                                                              We found it significantly faster in a couple of use cases.

                                                                              there is effectively only a single C++ implementation.

                                                                            Not sure what you mean here; we were easily able to use it across a couple of languages (though mostly Java to Java).

                                                                              1. 1

                                                                                there is effectively only a single C++ implementation.

                                                                                Not sure what you mean here; we were easily able to use it across a couple of languages (though mostly Java to Java).

                                                                                Yes, but with protobuf everything else uses the same C++ library[1] through bindings under the hood. You may as well just ship the internal data representation directly and provide your own bindings for when you want to jump between languages.

                                                                                JSON and ASN.1 are far better options here as a serialiser, if you are concerned about speed and/or portability.

                                                                                [1] I vaguely recall there is a native 100% Java version of the encoder/decoder library too from Google but that’s pretty much it.

                                                                                1. 1

                                                                                  Not true. The Go protobuf library is native.

                                                                                  1. 1

                                                                                    JSON and ASN.1 are far better options here as a serialiser, if you are concerned about speed and/or portability.

                                                                                    I’m confused by this recommendation, given that benchmarks (along with other requirements) should be guiding the decision. The Java benchmarks I looked at all show protobuf (v3) handily beating JSON, and my limited usage didn’t show any issues with portability.

                                                                                    1. 1

                                                                                      The message from the article underlines that it is not the parsing of JSON that is slower than protobuf, but the implementation you are using.

                                                                                2. 1

                                                                                  ah, ok, I figured it was some external source one might not control, but couldn’t think of an example.

                                                                              1. 5

                                                                                It’s a shame there wasn’t more information on the tools Ada provides for concurrency - I haven’t seen another language which gives you as much flexibility and power to write safe concurrent code. It’s much more heavyweight than some other languages in some ways, but gives you very strong guarantees you don’t get elsewhere.

                                                                                It also would’ve been nice to talk a bit about some of the features which make it particularly well suited for embedded systems:

                                                                                • things like the Ravenscar profile, which gives you a safe subset of the language using statically allocated, never-terminating threads
                                                                                • the ability to select the scheduling algorithm (including ones with task priorities which are used to avoid the possibility of deadlocks completely - see the priority ceiling protocol for more details)
                                                                                • the ability to address specific memory locations being built into the language, not just casting ints to pointers like you would in C
                                                                                • full control over how elements of records are laid out, including endianness; together with the previous feature, this is essential when you’re dealing with memory-mapped IO registers for controlling hardware
                                                                                • the ability to define that certain data types can only be used at certain specific memory locations (see https://cs.anu.edu.au/courses/comp4330/Lectures/RTES-03%20Interfaces.01.pdf “pages” 314 - 332)
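The priority ceiling protocol mentioned above can be sketched in a few lines (Python here purely as illustrative pseudocode; in Ada the runtime provides it, e.g. via `pragma Locking_Policy (Ceiling_Locking)`): each shared resource gets a ceiling equal to the highest priority of any task that uses it, and a task holding the resource runs at that ceiling, so no other user of the resource can preempt it mid-critical-section and form a deadlock cycle.

```python
# Illustrative sketch (not Ada): the priority ceiling protocol.
# A resource's ceiling is the highest priority among the tasks that
# may lock it; while a task holds the resource it runs at that
# ceiling, so no other user of the resource can preempt it inside
# the critical section -- ruling out the classic deadlock cycle.
task_priority = {"control_loop": 5, "logger": 1}   # hypothetical tasks

def resource_ceiling(users):
    """Ceiling = max priority over all tasks that may lock the resource."""
    return max(task_priority[t] for t in users)

CEILING = resource_ceiling(["control_loop", "logger"])   # 5

def effective_priority(task, holding_resource):
    """Priority the scheduler sees for `task` at this instant."""
    if holding_resource:
        return max(task_priority[task], CEILING)   # boosted to the ceiling
    return task_priority[task]

print(effective_priority("logger", True))    # 5: boosted while holding the lock
print(effective_priority("logger", False))   # 1: normal priority otherwise
```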

                                                                                Anyone who’s had to write code to interface with hardware on microcontrollers is probably wetting themselves with excitement by now. Doing this in C relies on the compiler doing what you think it will, and there’s no guarantee of that, because support for many of these features is implementation-defined, if defined at all.

                                                                                ANU’s COMP4330, which these slides come from, is an excellent resource for learning more about both Ada and real time systems in general: https://cs.anu.edu.au/courses/comp4330/1-Lectures-Contents.html

                                                                                1. 4

                                                                                  Nothing in Firefox 49.0.2 on macOS, alas!

                                                                                  1. [Comment removed by author]

                                                                                    1. 1

                                                                                      well it’s not working for me in Safari or Chrome either, so I guess it’s Nightly-only features.