1. 4

    Nim IMHO is one of the most underrated languages. It is as efficient as C and as clean as Python. I just wish for some solid tooling and companies backing it. Without that it’s extremely hard to convince people to pick up something that will only have good tooling in the future.

    1. 1

      Have been building a reverse proxy sidecar for a while; I always wanted scripting support but hated to add a full-fledged JS engine or a low-performing scripting language. Considered alternatives like Duktape, Lua, etc.; my end result was to have precompiled loadable modules. But this could change that: I could have middlewares running in a Go interpreter.

      1. 5

        I suddenly see so many programming languages popping up: Zig, Inko, Janet, Crystal, etc., each tailored to solve a problem. Seems like people are done with interpreting and in some cases GC as well. Renaissance of compiled languages folks!

        1. 2

          Seems like people are done with interpreting and in some cases GC as well. Renaissance of compiled languages folks!

          Aren’t Janet and Inko interpreted?

          1. 2

            It’s a good time!

            1. 1

              I don’t see it that way; interpreted languages will be the next phase. But for now we know we have not seen awesome enough things from the current compiled languages, so people are trying to fill that space.

            1. 4

              The last time I wrote in C++ was probably 10 years ago. It was a server using libpurple that could do chat on multiple platforms. I have since intentionally kept myself isolated from jobs/projects that involved C++. Not to say it is a bad language, but the compile errors enter unreadable territory and the syntax sometimes yields hard-to-trace code.

              D or Go are some modern alternatives, with Go being obviously more adopted. However, my eyes are on Rust, which in my opinion is still not a silver bullet but a departure in the right direction.

              1. 3

                We use Kotlin on our backend with Spring (WebFlux + coroutines) and Hibernate. Other than initial nuances with Jackson and some basic issues, everything works pretty smoothly. IMHO it’s an underrated language for backends so far, but with projects like Ktor I think we might see wider adoption soon.

                1. 1

                  So what does this project do? The readme says a “fire-and-forget geolocation server”; what does that mean exactly?

                  +1 for crystal-lang tag

                  1. 1

                    Agreed, I should add more description to the readme. It’s an IP-to-geolocation server that uses the MaxMind GeoIP database and keeps it updated (hence fire-and-forget).

                  1. 2

                    Nim keeps getting better and better. I am planning to do a weekend hobby project writing a URL shortener; hopefully I will be able to share my experience with it.

                    1. 1

                      I wonder how much space you have on your hosting :D It would be really nice if you could show the size of the SQLite DB files on the site as well. I love SQLite due to its robust nature and solid code base!

                      1. 1

                        Not much space, which is why I have a small disclaimer about jott.live not being used for important things. I’m wondering if there are any security concerns associated with displaying the storage being used by SQLite, but I’ll work on adding that now.

                        1. 2

                          Just to help you out :)

                          pragma page_size;
                          pragma page_count;
                          

                          Multiply the first by the second and you will have the database file size :D

                      1. 12

                        Nicer community, better quality posts, and 0 click bait.

                        1. 1

                          Just makes me wonder how many people use Mercurial at their company?

                          Edit: by company I mean at work.

                          1. 2

                            Which company? Octobus? As per their website, there are only 2 people, and Pierre-Yves David (“marmoute”) is a well-known core Mercurial contributor. The company itself is about providing commercial support for Mercurial, along with other stuff like Python contracting.

                            1. 1

                              I think Mercurial definitely has a niche in the corporate space. It’s easier to train new people on than Git, scales better for monorepo setups, is more easily extensible via Python, and allows richer customization.

                              1. 2

                                It’s easier to train new people on than git

                                I am curious about this – while Mercurial definitely has less initial surface area and a far more consistent way of interacting, it also tends to have lots of customizations that add a lot of complexity right back in, mixed and matched in ways that are often unique per Mercurial setup.

                                Git, while far uglier, also has more training resources, both professional and free. Additionally, while Git is far less consistent in terms of interaction, to a far larger degree once you know it – you know it. You are unlikely to go to a site where Git has lots of customizations making it behave differently from the “git” you used at your last organization.

                                1. 2

                                  Well you pretty much summed it up :) Mercurial is nicer/easier to use, but Git has more resources out there. I think at that point one being better than the other for a particular person or team will then depend less on the pros/cons of each tool, and more on the person/team’s mindset/culture/available support/etc.

                                  I’d add that Git having more resources, while helpful, is as much a proof of its success as a symptom of one of its main problems. Having to look up help pages and other tutorial pages on a regular basis becomes tedious quickly, and they still need to fix the core problem (they can’t quite fix the broken CLI at this point, but I did note several helpful messages being added to the output in the last few versions, so there’s progress).

                                  Finally, yeah Mercurial has a problem with the amount of customization they force on users because of their very strict backwards compatibility guarantees (resulting in a good portion of new features being turned off by default). This tends to be mitigated by the fact that teams will generally provide a central .hgrc that sets up good defaults for their codebase. Also, Mercurial extensions almost never change Mercurial’s behaviour (evolve is an outlier there but is still considered “experimental”) – they just add to it, so I’ve never come across (so far) any Mercurial resource that was potentially invalidated by an extension (feel free to point to counter-examples!).

                                  1. 1

                                    I suspect my issue might be more in my head (and my unique experience) than in reality. I have contracted with lots of git shops – and a fair number of mercurial ones. Most of the git shops worked very similarly, they differed in master dev versus feature branch dev, mono-repo or multi-repo – but they all felt similar and I could use very minor changes to my workflow to integrate with them, which is great for contracting.

                                    Each Mercurial shop has been a wild adventure in unique workflow and brand new extensions I have never seen or used. One used TWO different sub-repo extensions, another one used THREE configuration extensions! On top of that, most of them had annoying/wonky authentication mechanisms (some hand-rolled). The reason I use those examples (which are only a fraction of what I have seen) is that they are all basically non-optional. I needed to use them to be able to work on the project… and of course mq versus non-mq. Never used evolve (yet).

                                    During the “will Mercurial or Git win?” days – I was firmly on the Mercurial side because I did work on Windows and Git early on was non-functional on it. But now when I hear a client is a Mercurial shop, I dread it. But, I realize that is probably just my unique experience.

                                    1. 2

                                      Huh, well it’s very probable I’m just not aware of all the wild things people do out there with Mercurial. I frankly had no idea there were sub-repo extensions (outside of the core subrepo feature), and I don’t know why anybody would do custom authentication when SSH works everywhere (although I understand people might want to setup ActiveDirectory for Windows-only environments instead, but that’s it). What do you mean by “configuration extensions”? As for MQ, I don’t think it matters for the central repo, no? It should only matter for local workflows?

                                      1. 2

                                        According to https://www.mercurial-scm.org/wiki/UsingExtensions – there are at least 6 sub-repo extensions. And, yes, ActiveDirectory logins, other SSO variations and then on top of those multiple ACL layers.

                                        As for MQ – absolutely you can avoid it with other tools that can produce the same sort of history… rebase, graft, strip, etc. The issue being if all the “how we work” docs are written in MQ style – it is a bit of mental gymnastics to convert over.

                                        1. 1

                                          Ah I see. And yeah I never really scrolled down past the non-core extensions :) (The only non-core extensions I have are a couple I wrote myself…)

                                          1. 1

                                            are a couple I wrote myself…

                                            you… you are part of the problem! runs scared hehe

                                            1. 1

                                              Haha but that’s fine, I don’t think anybody besides myself is using them :)

                                  2. 2

                                    Might it instead be the other way around: that customization-seeking companies are more likely to choose Mercurial? This could be either because adventurousness promotes both non-Git and customization, or because Mercurial has the better architecture when you need to customize. IIRC the latter is true for both Mozilla and Facebook. Anyway, at my second job we used vanilla Mercurial, and we did fine. It was basically the same as any Git workflow, for that matter.

                                    1. 2

                                      Absolutely. Additionally, Mercurial is just more accessible in terms of customization. On top of that more than a handful of these shops had heavy Python contingents internally.

                                      1. 1

                                        Haha, yes, knowing the language certainly makes it easier to stray off the common path and into the woods of in-shop customization :-D

                              2. 1

                                I use Mercurial at work. My company uses Git, but I use Mercurial and clone, push, and pull transparently thanks to hg-git. I’ve noticed I am generally more aware than my Git-using colleagues of recent changes to the repo, because I’ve got a pre-pull hook set up to run hg incoming (with a tweak to avoid double network talk).

                              1. 3

                                 Every time I see a post for Nim I am hoping for a Golang competitor that can actually bring something new to the table. But then I look at the library support and community and walk away disappointed. I am still hoping for Nim to take off and attract Python enthusiasts like me to a really fast compiled language.

                                1. 12

                                  But then I look at the library support and community and walk back disappointed.

                                  It’s very hard to get the same momentum that Go achieved, just by the sheer fact that it is supported and marketed by Google. All I can say is: please consider helping Nim grow its community and library support, if everyone sees a language like Nim and gives up because the community is small then all new mainstream languages will be owned by large corporations like Google and Apple. Do you really want to live in a world like that? :)

                                  1. 3
                                    1. 1

                                      Have tried it; the GC is way too optimistic, so under high load you would see memory being wasted. I love the syntax and power of the language, but it still falls short when you can’t compile a single binary (like Golang) and end up with weird cross-compile issues. Nim is way more efficient in terms of memory and GC overhead.

                                      1. 1

                                        Cannot compile single binary? What do you mean by that?

                                        1. 1

                                          Let me rephrase; the binary is not standalone with everything statically linked (LibSSL and some dependencies). I had to recompile my binaries on the server to satisfy the dynamically linked libraries at particular versions.

                                          1. 5

                                            I think that’s more a result of Go having the manpower to develop and maintain an SSL library written in Go. As far as I understand, if you were to write an SSL library in 100% Crystal you wouldn’t have this problem.

                                            By the way, Nim goes a step further. Because it compiles to C you can actually statically embed C libraries in your binary. Neither Go nor Crystal can do this as far as I know and it’s an awesome feature.

                                            1. 3

                                              Is there a distinction between “statically embed C libraries in your binary” and “statically link with C libraries”? Go absolutely can statically link with C libraries. IIRC, Go will still want to link with libc on Linux if you’re using cgo, but it’s possible to coerce Go into producing a full static executable—while statically linking with C code—using something like go install -ldflags "-linkmode external -extldflags -static".

                                              1. 2

                                                There is a difference. Statically linking with C libraries requires a specially built version of that library: usually in the form of a .a or .lib file.

                                                In my experience, there are many libraries out there which are incredibly difficult to statically link with; this is especially the case on Windows. In most cases it’s difficult to find a version of the library that is statically linkable.

                                                What I mean by “statically embed C libraries in your binary” is: you simply compile your program’s C sources together with the C sources of all the libraries you depend on.

                                                As far as Go is concerned, I was under the impression that when you’re creating a wrapper for a C library in Go, you are effectively dynamically linking with that library. It seems to me that what you propose as a workaround for this is pretty much how you would statically compile a C program, i.e. just a case of specifying the right flags and making sure all the static libs are installed and configured properly.
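                                                To make the distinction concrete, here is a minimal, hypothetical cgo sketch of what “compiling C sources together with your program” looks like on the Go side: the C function in the comment immediately above `import "C"` is compiled by cgo and linked straight into the final binary, with no separate `.a`/`.lib` file involved.

                                                ```go
                                                package main

                                                /*
                                                // Compiled by cgo along with the Go sources; the object code
                                                // ends up directly inside the resulting binary.
                                                int add(int a, int b) { return a + b; }
                                                */
                                                import "C"

                                                import "fmt"

                                                func main() {
                                                	fmt.Println(int(C.add(2, 3))) // prints 5
                                                }
                                                ```

                                                Note the caveat from upthread still applies: enabling cgo makes Go link against libc dynamically by default, so this is closer to Nim’s embed-the-sources model than to a fully static build.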

                                            2. 2

                                              I suppose you built with --static?

                                              1. 2

                                                You have to jump through quite a few hoops to get dynamic linking in go.

                                                By default it statically links everything, doesn’t have a libc, etc.

                                              2. 1

                                                It’s not uncommon or difficult in go to compile a webapp binary that bakes all assets (templates, images, etc) into the binary along with a webserver, HTTPS implementation (including provisioning its own certs via ACME / letsencrypt), etc.

                                                1. 1

                                                  only have a passing familiarity with go’s tooling, how do you bake in assets?

                                                  1. 1

                                                    There are different approaches, https://github.com/GeertJohan/go.rice for example supports 3 of them (see “tool usage”)
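                                                      As a sketch of the simplest of those approaches (all names and content here are made up): a build step writes the assets into a Go map of string constants, and the server reads from that map instead of the filesystem. Go 1.16 later made this a language feature via `//go:embed`.

                                                      ```go
                                                      package main

                                                      import (
                                                      	"fmt"
                                                      	"net/http"
                                                      )

                                                      // assets baked in as string constants; tools like go.rice
                                                      // generate a file like this from a directory on disk.
                                                      var assets = map[string]string{
                                                      	"/index.html": "<h1>hello</h1>",
                                                      }

                                                      func main() {
                                                      	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                                                      		if body, ok := assets[r.URL.Path]; ok {
                                                      			fmt.Fprint(w, body)
                                                      			return
                                                      		}
                                                      		http.NotFound(w, r)
                                                      	})
                                                      	// http.ListenAndServe(":8080", nil) // uncomment to actually serve
                                                      	fmt.Println(assets["/index.html"])
                                                      }
                                                      ```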

                                              3. 1

                                                I think he mentions the ability to statically build [1] binaries in Golang. I’d note that this is a feature that is not so common and is hard to achieve. You can do this with C/C++ (maybe Rust), but it has some limits, and it’s hard to achieve with big libraries. Not having statically built binaries often means that you need a strong sense of what you depend on and to what extent, or good packaging/distribution workflows (fpm/docker/…).

                                                It’s a super nice feature when distributing software (for example tooling) to the public, so it feels like “here you are your binary, you just have to use it”.

                                                [1] https://en.wikipedia.org/wiki/Static_build

                                          2. 1

                                            The “programming by duct taping 30 pip packages together” method of development is pretty new, and it isn’t the only way to program. Instead, you grow the dependencies you need as you build your app, and contribute them back once they’re mature enough.

                                            More time consuming, but you have total control.

                                          1. 2

                                            Interesting article!

                                            The method getRestaurantMenus, when simultaneously invoked by many coroutines, will result in one of the coroutines winning the race condition and successfully entering the body to execute fetchMenuFromRemoteCacheOrDatabase.

                                            It looks like this is solving the cache stampede problem with the locking approach, but using deferred coroutines for the locking. Couple of questions for the author:

                                            1. Have you considered working with a CDN cache to eliminate stampedes? With a one second cache, DoorDash should be able to reduce the number of incoming requests to a single menu to the number of CDN PoPs per second.
                                            2. For the other requests that are waiting, do they serve stale data and return, or just wait until the winning coroutine’s database read completes?
                                            1. 2

                                              Hey, if you look closely we are using the Deferred not as a locking mechanism but as a grouping mechanism. The best part about this approach is the latecomers: if your reads are expensive, the readers coming toward the end (when the Deferred is about to be fulfilled) see less latency. To answer your questions:

                                              1. The above mentioned scenario is just used as an example; of course one can use a CDN for this scenario. We have done something similar in places where it was applicable. We use this technique in various places, including identity systems, where putting up such information would be a bad idea.
                                              2. Other coroutines just wait for the winning coroutine to complete its read. You can have all sorts of variations on top of it, e.g. some sort of timeout that returns stale data if the scenario permits, or starting your own DB read. The gist resides in using promises to avoid repeated reads.
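                                              For readers who think in Go rather than Kotlin, here is a rough sketch of the same grouping idea (types and names are illustrative, not from the article; golang.org/x/sync/singleflight is a production implementation of this pattern): concurrent lookups for one key share a single in-flight fetch, and latecomers just block until the winner’s result is published.

                                              ```go
                                              package main

                                              import (
                                              	"fmt"
                                              	"sync"
                                              	"sync/atomic"
                                              	"time"
                                              )

                                              // call tracks one in-flight read; latecomers block on done.
                                              type call struct {
                                              	done chan struct{}
                                              	val  string
                                              	err  error
                                              }

                                              // group coalesces concurrent lookups for the same key into a
                                              // single fetch, mirroring the Deferred-based grouping above.
                                              type group struct {
                                              	mu    sync.Mutex
                                              	calls map[string]*call
                                              }

                                              func (g *group) do(key string, fetch func() (string, error)) (string, error) {
                                              	g.mu.Lock()
                                              	if c, ok := g.calls[key]; ok {
                                              		g.mu.Unlock()
                                              		<-c.done // latecomers wait on the winner's result
                                              		return c.val, c.err
                                              	}
                                              	c := &call{done: make(chan struct{})}
                                              	g.calls[key] = c
                                              	g.mu.Unlock()

                                              	c.val, c.err = fetch() // only the winner does the expensive read
                                              	close(c.done)

                                              	g.mu.Lock()
                                              	delete(g.calls, key)
                                              	g.mu.Unlock()
                                              	return c.val, c.err
                                              }

                                              func main() {
                                              	var fetches int64
                                              	g := &group{calls: map[string]*call{}}
                                              	slowRead := func() (string, error) {
                                              		atomic.AddInt64(&fetches, 1)
                                              		time.Sleep(100 * time.Millisecond) // simulated DB read
                                              		return "menu data", nil
                                              	}

                                              	var wg sync.WaitGroup
                                              	for i := 0; i < 100; i++ {
                                              		wg.Add(1)
                                              		go func() {
                                              			defer wg.Done()
                                              			g.do("menu:42", slowRead)
                                              		}()
                                              	}
                                              	val, _ := g.do("menu:42", slowRead) // main joins the same flight
                                              	wg.Wait()
                                              	fmt.Println(val)
                                              	fmt.Println("deduplicated:", atomic.LoadInt64(&fetches) < 100)
                                              }
                                              ```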
                                            1. 2

                                              I don’t want to burst your bubble, but there are < 200 byte implementations in C (http://j.mearie.org/post/1181041789/brainfuck-interpreter-in-2-lines-of-c for one).

                                              With whitespace, that’s something like (according to this):

                                              s[999], *r=s, *d, c;
                                                
                                                main(a, b)
                                                {
                                                    char *v=1[d=b];
                                                    for(;c = *v++ % 93;)
                                                        for(b = c%7 ? 
                                                                a &&
                                                                    (c & 17 ? 
                                                                          c & 1 ? 
                                                                              (*r -= c - 44)
                                                                              :(r += c - 61)
                                                                           :c & 2 ?
                                                                              putchar(*r)
                                                                              :(*r = getchar())
                                                                    ,0)
                                                                :v;
                                                            b&&c | a * *r;
                                                            v=d)
                                                                main(!c,&b-1);
                                                    d = v;
                                                }
                                              

                                              Given JS is a C-style language, surely some of these tricks would port fine, to reduce your byte count even further? Of course, at the expense of readability – yours is still quite readable. But, if golfing is the goal…

                                              1. 1

                                                Thanks for the code. I have already seen these minimal implementations; I am trying to push hard on minimizing while keeping the code readable. I liked the recursive approach here; still looking at what I can pick up to minimize the code even further, or what I can write to keep the code readable while the uglifier compresses it hard.

                                              1. 5

                                                I am using Windows 10 full time and have never ever used Edge. I don’t think Microsoft ever recovered from the IE curse, even with the browser rebranding.

                                                1. 2

                                                  Even with the rebranding it’s still lagging behind on features and support.

                                                1. 7

                                                  Well, what can I say, there is already a reply article: http://blog.breakthru.solutions/re-moving-from-php-to-go-and-back-again/

                                                  1. 11

                                                    I find arguments of the style “why did Facebook do X if there weren’t issues” (in this case build HHVM) or “Uber uses it for service development” very useless. It is interesting from the perspective of someone building an ecosystem, it’s not interesting for users that don’t build the next Facebook or Uber.

                                                    Facebook is - on the scope of all software development happening - a fringe thing. Their practices and decisions are hard to apply to smaller scales, even if their tech speakers say otherwise.

                                                    1. 4

                                                      Yeah his analysis of go as a language revealed a highly limited understanding of the language. I suspect he kept trying to write OO PHP and then got frustrated when it didn’t work the way he thought it did.

                                                    1. 5
                                                      Raspchat

                                                      I am working on refactoring the frontend of http://raspchat.com/ Previously it was written in Vue 1 without using any kind of packaging tools (webpack or rollup). I am refactoring the frontend to be simpler, with a new idea around chatting in multiple rooms, this time with hyperapp and rollup. Staying away from React :) trying to keep it minimal. A few weeks back I switched the backend from Golang to Node.js (for simplicity in the codebase). I hope pretty soon I will finish the pending tasks and hit v1.0.

                                                      1. 2

                                                        Looks nice, however the download button doesn’t work. Have you considered making the server actually IRC compatible? It’s quite a simple protocol.

                                                        1. 1

                                                          The download link will point to GitHub. About IRC, I have similar ideas, but every time I think about it, it begs the question whether I should just use an IRC server with Node.js doing the WebSocket relaying; which leads me down the path of https://github.com/kiwiirc/webircgateway/ . I think I will keep it simple for now and gradually evolve it into something bigger.

                                                      1. 11

                                                        When you mentioned channel costs, I wondered if there was communication via unbuffered channels, which can lead to traffic jams since the sender can’t proceed ‘til each recipient is ready. Looking at the old chat_handler.go that doesn’t seem to be the case, though. The three goroutines per connection thing isn’t without precedent either; I think at least the prototypes of HTTP/2 support for the stdlib were written that way.

                                                        It looks like maybe the socketReaderLoop could be tied in with ChatHandler.Loop(): where socketReaderLoop communicates with Loop on a channel, just inline the code that Loop currently runs in response, then call socketReaderLoop at the end of Loop instead of starting it asynchronously. You lose the 32-message buffer, but the end-user-facing behavior ought to be tolerable. (If a user fills your TCP buffers, seems like their problem and they can resend their packets.) However, saving one goroutine per connection probably isn’t a make-or-break change.

                                                        Since you talk about memory/thrashing at the end, one of the more promising possibilities would be(/have been) to do a memprofile to see where those allocs come from. A related thing is Go is bad about respecting a strict memory limit and its defaults lean towards using RAM to save GC CPU: the steady state with GOGC=100 is around 50% of the peak heap size being live data. So you could start thrashing with 512MB RAM once you pass 256MB live data. (And really you want to keep your heap goal well under 512MB to leave room for kernel stuff, other processes, the off-heap mmapped data from BoltDB, and heap fragmentation.) If you’re thrashing, GC’ing more often might be a net win, e.g. GOGC=50 to be twice as eager as the default. Finally, and not unrelated, Go’s collector isn’t generational, so most other collectors should outdo it on throughput tests.
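                                                          For anyone wanting to experiment with that GC eagerness knob from inside the program rather than the environment, the runtime exposes the same setting programmatically (this is just the in-process equivalent of the GOGC=50 suggestion above):

                                                          ```go
                                                          package main

                                                          import (
                                                          	"fmt"
                                                          	"runtime/debug"
                                                          )

                                                          func main() {
                                                          	// Equivalent to running with GOGC=50: trigger a collection when
                                                          	// the heap grows 50% past live data, trading GC CPU for a lower
                                                          	// peak heap, which is what you want when close to thrashing.
                                                          	old := debug.SetGCPercent(50)
                                                          	fmt.Println("GC percent was:", old) // 100 unless GOGC overrides it
                                                          }
                                                          ```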

                                                        Maybe I’m showing myself not to be a true perf fanatic, but 1.5K connections on a Pi also doesn’t sound awful to me, even if you can do better. :) It’s a Pi!

                                                        1. 2

                                                           Thank you for such a detailed analysis and for looking into the code before you commented :) positive and constructive feedback really helps. I have received a great amount of feedback and will definitely try your tips. BoltDB definitely keeps coming up, and I think it contributes to memory usage as well. Some other suggestions include using a fixed number of workers and channels, the backlog building up, and me not doing serialization correctly. I will definitely update my benchmark code and test it with the new fixes; and if I feel the code is clean enough I would definitely love to move back.

                                                          1. 3

                                                            Though publicity like this is fickle, you might get a second hit after trying a few things and then explicitly being like “hey, here’s my load test, here are the improvements I’ve done already; can you guys help me go further?” If you don’t get the orange-website firehose, you at least might hear something if you post to golang-nuts after the Thanksgiving holiday ends or such.

                                                            Looking around more, I think groupInfo.GetUsers is allocating a string for each name each time it’s called, and then when you use the string to get the object out there’s a conversion back to []byte (if escape analysis doesn’t catch it), so that’s a couple allocs per user per message. Just being O(users*messages) suggests it could be a hotspot. You could ‘downgrade’ from the Ctrie to a RWLocked map (joins/leaves may wait often, but reads should be fastish), sync.Map, or (shouldn’t be needed but if you were pushing scalability) sharded RWLocked map. But before you put in time trying stuff like that, memprofile is the principled/right way to approach alloc stuff (and profile for CPU stuff)–figure out what’s actually dragging you down.
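                                                              A minimal sketch of the “downgrade to a RWLocked map” option, in case it helps (types here are illustrative, not from the raspchat codebase): fan-out, the hot path, takes the read lock so many broadcasts proceed concurrently, while joins/leaves take the write lock.

                                                              ```go
                                                              package main

                                                              import (
                                                              	"fmt"
                                                              	"sync"
                                                              )

                                                              // users guards a name->connection map with an RWMutex.
                                                              type users struct {
                                                              	mu sync.RWMutex
                                                              	m  map[string]int // name -> connection id (illustrative)
                                                              }

                                                              func (u *users) add(name string, conn int) {
                                                              	u.mu.Lock() // joins/leaves are rare; exclusive lock is fine
                                                              	defer u.mu.Unlock()
                                                              	u.m[name] = conn
                                                              }

                                                              func (u *users) each(f func(name string, conn int)) {
                                                              	u.mu.RLock() // many broadcasts can hold the read lock at once
                                                              	defer u.mu.RUnlock()
                                                              	for n, c := range u.m {
                                                              		f(n, c)
                                                              	}
                                                              }

                                                              func main() {
                                                              	u := &users{m: map[string]int{}}
                                                              	u.add("alice", 1)
                                                              	u.add("bob", 2)
                                                              	count := 0
                                                              	u.each(func(string, int) { count++ })
                                                              	fmt.Println("users:", count) // prints "users: 2"
                                                              }
                                                              ```

                                                              Compared to the Ctrie, this avoids the per-read string/[]byte conversions only if the key handling is also cleaned up, so a memprofile first is still the right call.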

                                                            True that there are likely lighter ways to do the message log than Bolt. Just files of newline-separated JSON messages may get you far, though don’t know what other functionality you support.

                                                            FWIW, I also agree with the commenter on HN saying that Node/TypeScript is a sane approach. (I’m curious about someday using TS in work’s frontend stuff.) Not telling you what to use; trying to get Go to do things is just a hobby of mine, haha. :)

                                                        1. 5

                                                          TBH, it feels like you were looking only to change language and not architecture or paradigm here. You mentioned the disruptor pattern briefly then just moved on when you noted that it had no mature implementations. Why not make one? The reason I say this is because you have multiple budgets you are balancing: cognitive and performance. It is hard to get something simple that also scales really well on constrained hardware.

                                                          FWIW, I think an event loop is the right thing here. I’d probably reach for Erlang over JS if you want high concurrency, however. Also, why not C?

                                                          Edit: I think OTP only costs ~68 bytes per process.

                                                          1. 1

                                                            Changing language was one of the last choices, because it meant rewriting the complete logic again; it’s a huge undertaking to redo everything in a separate language. I could have implemented a disruptor, but I wanted to stay focused on the problem at hand and get results rather than going into library-implementation mode. C/C++ suffers from the same problem of having to write the complete event loop and hook up websockets myself; I explored options like uWebSockets (using that right now) and Boost.Asio, for example, and after writing a basic pubsub I felt it was too much code for simple pubsub, and the bang for the buck would be low. I will definitely do a more detailed dive on OTP.

                                                          1. 3

                                                            I would be very curious to see if there is a way to design a Go version of the pubsub server that doesn’t require a goroutine per socket.

                                                            1. 6

                                                              I have actually tried hard to do that; my conclusion was that you are effectively writing an event-loop-based system in that case. So if I have to choose an event loop system, why not choose one of the best implementations out there? Just as Node.js is built around an event loop, Go is built around goroutines and channels. Even for synchronization, people reach for channels first. BEAM (Erlang/Elixir) has a similar philosophy with processes and messages.
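
                                                              A minimal sketch of that channel-first idiom, with all names invented for the example: a single broker goroutine owns the subscriber set, and joins, leaves, and publishes are all just channel sends, so no mutex appears anywhere:

                                                              ```go
                                                              package main

                                                              import "fmt"

                                                              // Broker serializes all state changes through one goroutine.
                                                              type Broker struct {
                                                                  join  chan chan string
                                                                  leave chan chan string
                                                                  pub   chan string
                                                                  done  chan struct{}
                                                              }

                                                              func NewBroker() *Broker {
                                                                  b := &Broker{
                                                                      join:  make(chan chan string),
                                                                      leave: make(chan chan string),
                                                                      pub:   make(chan string),
                                                                      done:  make(chan struct{}),
                                                                  }
                                                                  go b.loop()
                                                                  return b
                                                              }

                                                              // loop is the only code that touches subs, so it needs no locking.
                                                              func (b *Broker) loop() {
                                                                  subs := make(map[chan string]struct{})
                                                                  for {
                                                                      select {
                                                                      case c := <-b.join:
                                                                          subs[c] = struct{}{}
                                                                      case c := <-b.leave:
                                                                          delete(subs, c)
                                                                          close(c)
                                                                      case msg := <-b.pub:
                                                                          for c := range subs {
                                                                              select {
                                                                              case c <- msg:
                                                                              default: // slow subscriber: drop rather than stall the broker
                                                                              }
                                                                          }
                                                                      case <-b.done:
                                                                          return
                                                                      }
                                                                  }
                                                              }

                                                              func main() {
                                                                  b := NewBroker()
                                                                  sub := make(chan string, 4)
                                                                  b.join <- sub
                                                                  b.pub <- "hello"
                                                                  fmt.Println(<-sub)
                                                                  b.leave <- sub
                                                                  close(b.done)
                                                              }
                                                              ```

                                                              The catch is exactly the one above: the broker loop is an event loop by another name, just with the scheduler doing the multiplexing for you.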

                                                              1. 3

                                                                At that point, why write it in Go? It ceases to be idiomatic.

                                                                1. 0

                                                                  What’s the argument against having a goroutine per socket? I was under the impression that because there are potentially many goroutines per OS thread, you can have a ton of goroutines without much performance penalty.

                                                                  At what scale does an event loop become more efficient than goroutines?

                                                                  1. 2

                                                                    Did you not read the article we are discussing?

                                                                1. 4

                                                                  Have you tried Ada? I never looked at it myself, but that article[1] posted today looks very interesting. And there seems to be a well-supported web server with WebSocket support[2]

                                                                  [1] http://blog.adacore.com/theres-a-mini-rtos-in-my-language [2] https://docs.adacore.com/aws-docs/aws/

                                                                  1. 4

                                                                    TBH I can’t believe Ada is still alive. I thought it was something we did in a Theory of Programming Languages course, and assumed nothing other than obsolete systems used it. I’ll give it a shot for sure!

                                                                    1. 4

                                                                      This article trying to use it for audio applications will give you a nice taste of the language:

                                                                      http://www.electronicdesign.com/embedded-revolution/assessing-ada-language-audio-applications

                                                                      This Barnes book shows how it’s systematically designed for safety at every level:

                                                                      https://www.adacore.com/books/safe-and-secure-software

                                                                      Note: The AdaCore website has a section called Gems that gives tips on a lot of useful ways to apply Ada.

                                                                      Finally, if you do Ada, you get the option of using Design-by-Contract (built into Ada 2012) and/or the SPARK language. One gives you clear specifications of program behavior that take you right to the source of errors when fuzzing or the like. The other is a smaller variant of Ada that integrates with automated theorem provers to try to prove your code free of common errors in all cases, versus just the ones you think of, as with testing. Those errors include things like integer overflow or division by zero. Here are some resources on those:

                                                                      http://www.eiffel.com/developers/design_by_contract_in_detail.html

                                                                      https://en.wikipedia.org/wiki/SPARK_(programming_language)

                                                                      https://www.amazon.com/Building-High-Integrity-Applications-SPARK/dp/1107040736

                                                                      The book and even the language were designed for people without a background in formal methods. I’ve gotten positive feedback from a few people on it. I’ve also encouraged some people to try SPARK for safer native methods in languages such as Go. It’s kludgier than things like Rust that were designed with that in mind, but it still works.

                                                                      1. 2

                                                                        I’ve taken a look around Ada and got quite confused around the ecosystem and which versions of the language are available for free vs commercial. Are you able to give an overview as to the different dialects/Versions/recommended starting points?

                                                                        1. 4

                                                                          The main compiler vendor for Ada is AdaCore - that’s the commercial compiler. There is an open-source version that AdaCore helps to develop, called GNAT, and it’s part of the GCC toolchain. It’s licensed under a special GMGPL license or GPLv3 with a runtime exception - meaning you can use both for closed-source software development (as long as you don’t modify the compiler, that is).

                                                                          There is also GNAT AUX, which was developed by John Marino as part of a project I was involved in in the past.

                                                                          1. 1

                                                                            Thanks for clearing up the unusual license.

                                                                          2. 2

                                                                            I hear there is or was some weird stuff involved in the licensing. I’m not sure exactly what’s going on there. I just know they have a GPL version of GNAT that seems like it can be used with GPL’d programs:

                                                                            https://www.adacore.com/community

                                                                            Here’s more on that:

                                                                            https://en.wikipedia.org/wiki/GNAT