1. 5

    This is quite timely. I just set up OpenBSD on a new (to me) 15” PowerBook G4 and have been using it over the past week. Quite happy with how well it runs, and it’s astonishing that a 14-year-old laptop can still be so good.

    1. 3

      So I’m not a Go guy, but I’ve been following it from the sidelines because I’m a huge fan of Rob Pike and his work and the language itself appeals to me.

      One of the reasons why I like C is that the differences between C11 and C89 are fairly minimal. I regularly work with C code that’s from before 1989 and it still generally compiles with only a few minor changes.

      So watching from the sidelines with Go I’m worried that Go 2 is going to be just an enormous change, to the point that there’s no reason to learn Go 1.

      Anyone with Go experience and insight into Go 2 have anything to assuage my fears?

      1. 6

        They said something like “we can afford somewhere between two and five breaking changes in go 2, and we’ll have tooling to assist the upgrade.”

        1. 3

          From the official blog:

          Go 2 must bring along all those developers. We must ask them to unlearn old habits and learn new ones only when the reward is great.

          Maybe we can do two or three, certainly not more than five.

          I’m focusing today on possible major changes, such as additional support for error handling, or introducing immutable or read-only values, or adding some form of generics, or other important topics not yet suggested.

        2. 3

          I write a lot of Go at my current job.

          While I have my qualms with the language (generics pls) I wouldn’t be worried at all about Go 1 knowledge being irrelevant for Go 2. The language probably won’t change too much. It’s very clear that they aren’t starting over and instead taking a practical look at what Go 1 doesn’t handle well and finding solutions for that. If anything, those solutions may make you appreciate Go 2 even more.

        1. 16

          Whereas most OS’ include proprietary, closed source drivers, OpenBSD does not, by default. Closed source drivers can’t be audited, thus forming an unknown attack vector. It might be bug-ridden, vulnerable, unfree licensed, etcetera. Of course, for your convenience, if you would like to go down the rabbit hole, there is fw_update.

          That sounds a bit confused.

          Many devices are just dead bricks of silicon without the firmware (a small embedded OS) that runs on the device. So unless you run the firmware, you have bought a brick.

          fw_update(1) installs the hardware vendor’s non-free firmware (running on the device) to make the device operate so that drivers (running in the kernel, and always free in OpenBSD’s case) can use the device.

          1. 11

            And to add on to this, fw_update is only needed in cases where OpenBSD is unable to include the firmware in the base install because redistribution is prohibited. Other (including closed source) firmware can already be found in a clean install in /etc/firmware.

            1. 1

              What does redistribution mean in this case? What makes downloading it in an arbitrary tarball from ftp.openbsd.org not okay, but downloading it in an arbitrary tarball from firmware.openbsd.org okay?

              1. 2

                In some cases redistribution is ok. The line is really more about stuff on the ftp server is free (to modify, etc.) and the firmware stuff is not. There’s also only one firmware server. It’s not mirrored. So for some of the files that are in a bit of a grey area, mirrors aren’t exposed to any risk.

                1. 1

                  There are firmware mirrors (round robin dns) but indeed they’re separate from the ftp mirrors.

            2. 4

              I think the distinction is between drivers and firmware? OpenBSD does not ship driver blobs (which run on the main CPU), but does allow you to update firmware (which runs on the device).

              1. 2

                fw_update does not update drivers. The author’s comments implied they believed it does.

            1. 5

              As an update, the drivers were updated to remove the shortcut: https://twitter.com/CatalystMaker/status/857766176910446596

              1. 10

                Without reading this article (too many words, as noted by others), I have to make a fly-by comment just based on the title and opening sentences. I highly recommend anyone looking for a very well written story that can only be told as a video game to try out NieR:Automata (for PS4 and Steam). The article seems to be written entirely around the assumption that these can’t or don’t exist, which is patently false.

                If you play NieR, please heed the message after you “beat” the game and keep playing using the same save file. You’ve only scratched the surface of the content at that point.

                1. 3

                  NieR: Automata is one of the best games to be released in a long time.

                  Much like the Metal Gear Solid series, NieR: Automata can succeed only as a game because it effectively uses the medium to convey something much more than a narrative despite occasionally ham-fisted writing. In other words, the strength of the entire presentation overcomes the weaknesses in writing. To truly appreciate these games, however, you need at least a cursory ‘education’ in video games. A lot of what makes them brilliant is their willingness to tamper with players’ expectations. You just don’t see that happen much.

                  1. 1

                    Can confirm!

                  1. 7

                    What’s unclear about MIT/ISC and patents? I always assumed the answer was a simple no.

                    1. 6

                      “Unclear” probably just means “would have to be decided in court”.

                      US-based lawyers are super happy with an explicit patent grant they can use to defend their client in court, should someone sue for patent infringement.

                      1. 5

                        The author has a full article on MIT. It comes down to “Neither copyright law nor patent law uses “to deal in” as a term of art; it has no specific meaning in court.” and refers to the following part of MIT:

                        to deal in the Software without restriction,

                        1. 5

                          ISC does not use this terminology, so why did he lump it into the same bucket as MIT?

                          EDIT: See https://www.openbsd.org/policy.html for arguments in favour of ISC.

                        2. 2

                          I think that’s because MIT doesn’t mention patents explicitly while Apache has this:

                          1. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

                          1. 2

                            There’s an implicit patent grant in these licenses. Given the statement “Permission to use, copy, modify, and distribute this software … is hereby granted” I think it would be hard to argue that the recipient is not given a license to use the patent.

                            This only works if the copyright holder also holds the patent. But I (an eminently unqualified non‐lawyer) don’t see what the Apache 2.0 text provides that the ISC text doesn’t. “Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual … patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted.”

                            What’s really annoying about Apache, besides the deluge of verbiage, is the next sentence: “If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.”

                            1. 1

                              What’s annoying about the patent pooling? It discourages a sue fest by revoking any patent licenses granted to you by other contributors if you sue users of the software over patents you have granted to the project.

                          1. 1

                            Would have loved to see the CDDL show up here. I guess MPL is close?

                            1. 4

                              Without PijulHub getting a Series A, this will never take off!

                              I’m joking, but I do have to wonder what the goals are here. If technical superiority is the goal, maybe that’s fine. If worldwide domination of DVCS ideology and usage is the target… well, good luck! Git is going to be a tough competitor, but I guess if you play up SHA-1’s weaknesses enough… you might get some converts.

                              1. 9

                                I know you’re kidding, but I’m very convinced that Git succeeded due to GitHub. But that had relatively little to do, in my opinion, with GitHub being good or easy or anything else. Instead, I am pretty sure that the actual reason GitHub succeeded was that it gave developers an auditable portfolio—via a blockchain, no less, before those were cool—and then the usual factors of tool momentum and social pressure caused GitHub to dominate the industry as well. If a PijulHub could do a markedly better job at the portfolio aspect, I think it might have a pretty easy time unseating GitHub amongst hobby developers.

                                I’ll mostly leave what that actually means as an exercise to the reader, but I will note that the big thing Darcs-like systems do better than Git-like systems is make long-lived personal tweaks (as opposed to hard forks) a lot easier to maintain and distribute. They are, after all, in a much truer sense than either Mercurial or Git, basically a way of distributing plain, vanilla patches, with dramatically more metadata than e.g. Quilt of yore. I thus think that a PijulHub that focused on lots of different neat customizations you could do to otherwise stock-standard open-source software might actually stand a great chance of unseating GitHub.

                                Anyway, that’s my two cents. Free business plan if you want it. I already tried this industry and don’t feel like a repeat.

                                1. 8

                                  GitHub also had an early, incredibly hard buy-in from an entire language community: Ruby. That helped quite a bit too. Network effects are tough.

                                  1. 15

                                    Perhaps it’s too obvious to mention, but even before GitHub, git had a pretty important “launch customer”, so to speak, in the form of the Linux kernel. Not only is the Linux kernel a big project, but it’s also a project with a long reach into influential companies, since there are people in IBM, Red Hat, Intel, Google, etc. who routinely send code upstream, and who therefore all quickly started using git as soon as it became the official way to contribute.

                                2. 3

                                  maybe the goal is to build a system that works better for them? not everything is intended to take over the world.

                                  1. 3

                                    That seems to be the case; it’s not a company that needs to turn a profit or anything. The specific motivations of the people behind Pijul are that they’re darcs users who were frustrated by some speed issues with darcs, plus a few other things about its patch theory, and thought they could do better.

                                    1. 2

                                      I didn’t mean to imply that I felt that to be the case. But, typically, if you spend the time to market a project and make a marketing website (and not just generate documentation via Sphinx, Haddock, etc.), you want it to be used. And there’s nothing wrong with that!

                                    2. 2

                                      Pijul will have to have some answer for extremely large repositories that can’t realistically be cloned in their entirety to have a real competitive advantage over git. That will attract big corporations (and their money) who wish they could use git.

                                    1. 5

                                      He does raise a valid point. If the code I want to publish is GPL’d, and I’m not the original author, just someone who forked it, how am I allowed to put that code on github???

                                      1. 1

                                        I don’t think it’s just an issue for GPL and other copyleft licenses but BSD/MIT/ISC as well since they also require attribution. If you aren’t the copyright holder, how can you possibly grant a license exception for someone else’s work? GitHub has essentially stated that they are exempt from the one and only requirement of the ISC license.

                                      1. [Comment removed by author]

                                        1. 14

                                          I’m looking for a language without any garbage collector, with option types (instead of exception or return codes) and algebraic data-types.

                                          Go satisfies none of these requirements.

                                          edit: it’s also unsafe (as in, undefined behavior) in the presence of data races, further disassociating it from the safety traits of rust.

                                        1. 7

                                          In Go:

                                          type X struct {
                                              y Y
                                          }
                                          
                                          func (x *X) gety() *Y {
                                              return &x.y
                                          }
                                          

                                          Like C and C++, it’s still possible to return an interior pointer, but unlike those languages doing so is still guaranteed to be memory safe because of garbage collection. This might mean that your object ends up being punted off to the heap because the *Y outlives the X, but the actual access to the pointer carries no extra cost.
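
                                          To make that concrete, here is a minimal, self-contained sketch (the field names and values are invented for illustration): the interior pointer stays valid even after the function that created the enclosing value returns.

                                          ```go
                                          package main

                                          import "fmt"

                                          type Y struct{ n int }

                                          type X struct{ y Y }

                                          // gety returns an interior pointer into x's memory.
                                          func (x *X) gety() *Y {
                                              return &x.y
                                          }

                                          func main() {
                                              p := func() *Y {
                                                  x := X{y: Y{n: 41}}
                                                  // x escapes to the heap here, since the returned *Y outlives the call.
                                                  return x.gety()
                                              }()
                                              p.n++ // still memory safe: the GC keeps the whole X alive
                                              fmt.Println(p.n)
                                          }
                                          ```

                                          In C or C++ the equivalent code would hand back a dangling pointer once x went out of scope; here escape analysis plus the GC make it safe, at the cost of a heap allocation.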

                                          1. 20

                                            Fwiw, part of his specification (a “crucial” part) seems to be exactly that *Y doesn’t outlive and that it compiles to mere pointer arithmetic. Not compelling for all use cases, but definitely Rust’s big rallying cry: “zero cost abstraction!”.

                                            1. 4

                                              well to be pedantic it still doesn’t outlive the owning object, but that’s because the owning object is kept alive by the GC for as long as it needs to even if the code doesn’t reference it anymore. But yes I understand what the author is getting at.

                                            2. [Comment removed by author]

                                              1. 3

                                                No, the trick here is that it is an interior pointer. Most GC systems do not support this at all (e.g. Java). In fact, the only other one I can think of that can is the CLR, and that can only be done using unsafe pointers and pinning the allocation to the heap.

                                                1. 5

                                                  I’m surprised Java doesn’t handle that correctly, but I’m no Java expert. In languages like Lisp, ML, etc. it works fine, and it’d be surprising if it didn’t. Those kinds of high-level GC’d languages generally have managed/tagged pointers that they communicate to the GC, using e.g. a pointer map (a technique dating to Algol 68), which should mean handling derived/interior pointers works fine, and it’s pretty much a bug if any legal reference to an object gets collected while the reference is still live, no matter what else it sits inside of. Even the textbook GC implementation given in Andrew Appel’s Modern Compiler Implementation book discusses how to handle derived pointers (in Section 13.7, “Garbage collection: Interface to the compiler”).

                                                  1. 4

                                                    I’m certainly not an expert on GC design but my understanding is that the JVM uses the fact that interior pointers are illegal to perform some optimizations. Google came up with http://forum.dlang.org/thread/o5c9td$30ki$3@digitalmars.com which looks like a pretty good and recent discussion of how interior pointers limit the possible optimizations.

                                                    1. 1

                                                      Sure, but I think it’s a case of finessing the meaning of “legal reference”. If you make it illegal/impossible to construct an interior pointer, or make it illegal/impossible to hold one without holding a pointer to its container for a superior duration, then you can rightly say that your GC handles all legal references properly, while ignoring interior pointers and having generally less overhead.

                                                    2. 1

                                                      I think the “this” in

                                                      Most GC systems do not support this at all (e.g. Java)

                                                      is unclear. I believe you are referring to interior pointers, which aren’t a thing at all in Java. I believe mjn thought you were referring to the general pattern of returning a pointer to an instance variable.

                                                      1. 1

                                                        Yes, was talking about interior pointers :)

                                                1. 2

                                                  Hey folks, decltype(x) seems so useful that I wonder how people wrote code without it, but in my limited experience I’ve never had to write C++ code that would need it. What kind of algorithms/use cases is this aimed at? Thanks!

                                                  1. 2

                                                    I think auto, range-based for, and lambdas allow programmers to basically ignore decltype?

                                                    1. 1

                                                      I believe (not a day-to-day C++ programmer) that it would allow you to refer to the return type of a function that returned an anonymous type. Could be useful if you had to give it some kind of declaration (like if you were to store it somewhere), since auto is really only useful when calling the function.

                                                    1. 9

                                                      Link to provisional docs.

                                                      I’m really excited about this. The interface reminds me of Glide somewhat, which I’ve toyed with in the past and seems like the most mature of the unofficial dependency management options. This seems like a better option because (AFAICT) Glide has no support for pegging to a SemVer release (I could be wrong here).

                                                      With better support for dependencies on a per-project level, I’m curious if there will eventually be elimination of GOPATH. I can’t really think of a good reason for it still to exist, other than maybe for simple projects where you don’t want to have to bother with project-level dependencies.

                                                      1. 3

                                                        Here’s hoping that GOPATH dies a fiery death. The requirement, by default, to keep my code 4 layers beneath my home directory always seemed completely obnoxious to me. Obviously there are ways around the various inconveniences this causes, but it’s still irritating, particularly because it’s so incredibly unnecessary.

                                                        1. 5

                                                          My pet peeve with GOPATH is that I sometimes write libraries which support multiple languages. I want to use a single git repo with all the code in it but GOPATH forces me to either split up my archive or put GOPATH tendrils everywhere. It’s ugly. I just want to compile it in place like I can with any other language.

                                                          1. 2

                                                            Not sure why it’s:

                                                            • Specifically your home directory (surely you put it anywhere)
                                                            • Specifically four levels deep (I assume you refer to src/<domain>/<user>/<repo>)
                                                            • Unnecessary (there are trade-offs, to be sure, but it’s not like there are no nice features it enables)

                                                            The idea that something you dislike could not exist for a good reason is unhelpful in the extreme.

                                                            1. 1

                                                              Specifically your home directory (surely you put it anywhere)

                                                              Because I normally keep my code in my home directory. It makes no difference where it is, the point is that it’s four layers of directories I don’t need or want.

                                                              Specifically four levels deep (I assume you refer to src/<domain>/<user>/<repo>)

                                                              Yep, you assumed correctly. For me, this means ~/Go/src/<domain>/<user>/<repo> instead of ~/repo. I find this endlessly annoying.

                                                              Unnecessary (there are trade-offs, to be sure, but it’s not like there are no nice features it enables)

                                                              Like what? What does it provide that would be infeasible with a different, less restrictive directory layout? I’m genuinely curious because I’ve literally never been in a situation where I said to myself “Gosh, I’m sure glad Go imposes this weird directory structure on me.”

                                                              The idea that something you dislike could not exist for a good reason…

                                                              I never said that. In fact, I only criticized the directory structure as the default. I feel that the current default is no longer the most reasonable default. Perhaps at one time it was the best default. That has no bearing on my opinion or argument.

                                                              I’m not saying we should stone Rob Pike because he chose the default stupidly, if that were the case then I would need to demonstrate that there were no good reasons for the current default (I would also need to make a strong case for stoning as the most appropriate reaction, but that’s another matter). But I’m saying the default should be changed, in the present.

                                                              1. 1

                                                                For context, I use a GOPATH per project, so my app code is in eg src/server/main.go. I also use an editor with a good fuzzy file search implementation, so I can type usrpkm instead of github.com/user/package/main.go to find a file. These both substantially improve the situation.

                                                                Like what? What does it provide that would be infeasible with a different, less restrictive directory layout?

                                                                Making the name of the package map 1:1 to its location on disk makes navigation (‘where is this code’) easy. In eg ruby and node this is a runtime concern (and can be hooked by arbitrary code), which makes static analysis impossible.

                                                                An alternative directory layout would need to preserve the property of being easy for tools to navigate.
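
                                                                 As an aside, that 1:1 mapping is exactly what the standard go/build package exposes to tools. A small sketch (resolving a stdlib package here, so it runs on any machine with a Go toolchain):

                                                                 ```go
                                                                 package main

                                                                 import (
                                                                     "fmt"
                                                                     "go/build"
                                                                 )

                                                                 func main() {
                                                                     // Resolve an import path to a directory on disk
                                                                     // without executing any of the package's code.
                                                                     p, err := build.Import("fmt", "", build.FindOnly)
                                                                     if err != nil {
                                                                         panic(err)
                                                                     }
                                                                     fmt.Println(p.Dir) // e.g. /usr/local/go/src/fmt
                                                                 }
                                                                 ```

                                                                 The same call works for any import path under GOPATH, which is what editors and linters rely on for go-to-definition and the like.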

                                                                I’m saying the default should be changed, in the present.

                                                                One of the key reasons to use go is that they carefully avoid breaking changes.

                                                            2. 1

                                                              totally agree that GOPATH needs to go away, although not just because of the deep directory structure. We now use the vendor directory for managing our projects' dependencies which works well, until a dependency is forgotten and is pulled out of GOPATH instead. this breaks builds (because we do not commit our deps, preferring to use glide to restore them to the correct versions) and makes deterministic builds more difficult. Another issue I’ve hit up against is that it becomes impossible to have two different working copies of a repo without setting up a second GOPATH, at which point, why not just use a project-based build tool to begin with?

                                                            3. 1

                                                              Unsure what you mean by “pegging to a semver release” but with glide you can either specify the version as “1.2.3” which will use that exact release version and no others, or “^1.2.3” which means “semver compatible with 1.2.3” (it is shorthand for >=1.2.3, <2.0.0).
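
                                                              For anyone unfamiliar with glide, the two forms look roughly like this in a glide.yaml (the package paths are made-up placeholders):

                                                              ```yaml
                                                              package: github.com/user/project
                                                              import:
                                                              - package: github.com/example/libfoo
                                                                version: 1.2.3      # pin to exactly this release
                                                              - package: github.com/example/libbar
                                                                version: ^1.2.3     # semver range, i.e. >=1.2.3, <2.0.0
                                                              ```

                                                              The resolved versions then get pinned in glide.lock, so builds are reproducible.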

                                                            1. 2

                                                              It feels like just yesterday the project was announced! Props to Microsoft for reaching a great and stable iteration* so quickly!

                                                              *cross-platform near-complete rewrite

                                                              1. 0

                                                                I dunno, call me a cynic, but it only adds support for Linux, which they have been flirting with as a company for the last few years. If it was written even half decently, this shouldn’t have been too much work. Apart from open sourcing it (which is appreciated), they haven’t really done anything they weren’t going to do anyway. If they port or accept patches to port it to Mac, that would be slightly more interesting, as it is outside their apparent game plan.

                                                                1. 2

                                                                  It already does run on mac (officially). Unsure about all of .NET Core, but CoreCLR runs on FreeBSD and NetBSD as well (although I don’t believe those ports have the same level of support as on macOS/Linux/Windows).

                                                                  1. 1

                                                                    There was a huge swathe of API changes that everybody seems to really appreciate, and they wouldn’t have happened without community input. The API is very, very different for the .NET Core libs, but in terms of the CLR itself, I believe it’s a port.

                                                                    1. 1

                                                                      Do you have a few examples? Would really love to see what they changed. The last time I played with .NET the API design felt like overloading-from-hell.

                                                                      1. 1

                                                                        They unified MVC and Web API, so JSON batteries are included and it’s easier to return responses that aren’t just View(). Beyond that:

                                                                        • a lot of framework stuff got moved from being class parameters to interface parameters
                                                                        • configuring your application at startup is a lot less unwieldy
                                                                        • there’s now built-in dependency injection (although dependency injection’s always put a bad taste in my mouth; autofac is very popular in the c# world)
                                                                        • MSTest is replaced with xUnit.net
                                                                        • better decoupling from IIS (kestrel is the http server used outside of Windows)
                                                                        • tag helpers are a new thing, so you can use special attributes on your markup to add behavior instead of inlining a lot of razor markup
                                                                        • nuget package management is way simpler

                                                                        Disclaimer for you: the mvc/net apis are still very very mvc/net-y. The changes made are (imo) mostly just updating practices in the framework to align with practices of developers using it.

                                                                1. 7

                                                                  As a former developer for xombrero, this article couldn’t be more spot on. We were unable to update to the newer WebKit2 API and the project has essentially become abandoned. Xombrero was even touted as a (slightly more) “secure” web browser as it included whitelisting javascript and cookies without having to download 3rd party plugins, among other features. However, all of those benefits fly out the window when the browser engine is riddled with 130+ well-documented security flaws. Unfortunately, xombrero had a unique interface that resonated with the minimalism crowd and people even today still want to use it just for that. I am absolutely horrified at all of the new issues I constantly see reported on Github, not for the issues themselves, but because people are still using the software.

                                                                  On another note, I am extremely excited about where Servo is going. A browser engine written in a memory safe language will be incredible.

                                                                  1. 5

                                                                    I’d highly recommend Mike Acton’s CppCon 2014 talk (1h30m) explaining how data oriented design is done in C++ (in the games industry no less) since the author stated that C++ is not well suited for it. The result is obviously very different from object oriented or highly generic C++, and it’s funny how some members of the audience at the end were a bit appalled at the “bad design” :) but I don’t think this makes C++ a bad choice (or a worse choice than C) for it.

                                                                    It’s also an outstanding video for anyone wanting to get into games, regardless of language. The designs used here translate very well to both the C and C++ languages.

                                                                    1. 18

                                                                      This is absolutely legal for a compiler to do, and something that I would expect it to do. gcc loses points in my book by not doing the same. If you want to assign blame, blame the C and C++ language specifications, or the programmer who invoked undefined behavior, or both.

                                                                      1. 1

                                                                        Why not just use the iterator pattern?

                                                                        1. 2

                                                                          What is the iterator pattern?

                                                                          1. 4

                                                                            It is a pattern used in C++ that mirrors pointer semantics from C.

                                                                            Consider you have an array:

                                                                            int array[10];
                                                                            

                                                                            Assume it is full. We have a beginning:

                                                                            int *it = &array[0];
                                                                            

                                                                            and an end:

                                                                            int *last = &array[10];
                                                                            

                                                                            The end being defined as one past the last element. The iterator pattern would be to use a for loop, say to sum:

                                                                            int sum = 0;
                                                                            for (int *it = &array[0]; it != &array[10]; ++it) {
                                                                                sum += *it;
                                                                            }
                                                                            

                                                                            This pattern can be generalized with the same interface to many different kinds of containers. If I had a vector<int> instead, I would do begin(myvec) and end(myvec) instead, but I could do the same operations: dereferencing, and incrementing.

                                                                            EDIT: I wanted to add that the purpose of the iterator pattern is to abstract away from dealing with indices, which can be problematic as the article does point out. You will find many languages supporting this to one extent or another, an example being Java.

                                                                            1. 1

                                                                              Ah, ok, pointer arithmetic can work in some cases. I have written quite some loops that are basically while (p < endp). Although for the troublesome scenario, working backwards, it’s not always any better because while a pointer one past the end is legal, a pointer one before the beginning is not. (And not all loops iterate over basic arrays.)

                                                                              1. 1

                                                                                We’ve got you covered here too.

                                                                                rbegin and rend do the same thing as begin and end, except they operate in reverse: rbegin refers to the last item, and rend to one before the beginning, yet both still advance with the usual ++.

                                                                                vector<int> array { 3, 6, 8 };
                                                                                int sum = 0;
                                                                                for (auto it = rbegin(array); it != rend(array); ++it) {
                                                                                    sum += *it;
                                                                                }
                                                                                

                                                                                Now C++ doesn’t actually define what the end iterator has to be, but it is clear that you do not dereference it under any circumstances. The power of these things amazes me sometimes.

                                                                                1. 1

                                                                                  I think tedu’s saying that while C++ reverse iterators may work in C++, he can’t solve his problem in C the same way, because a pointer one before the beginning isn’t legal C even if you don’t dereference it. I don’t know C inside and out myself; that’s just how I read the comment you’re replying to.

                                                                                  If you’re saying he should be writing C++ rather than C to be able to use iterators, you probably won’t get far with that.

                                                                                  1. 1

                                                                                    You are correct, it appears that C++ reverse iterators are really just adaptors around regular iterators.

                                                                                    I do not advocate C++; I am just observing a useful pattern that got its start in C and is now used all over the place in a more general way than the original.

                                                                              2. 1
                                                                                int *last = &array[10];
                                                                                

                                                                                Isn’t this undefined behavior when the array only has 10 elements? (even when the 11th element is never dereferenced)

                                                                                1. 2

                                                                                  It is not undefined behaviour.

                                                                                  int *last = &array[10];
                                                                                  

                                                                                  Is the same as:

                                                                                  int *last = array + 10;
                                                                                  

                                                                                  The undefined behaviour would be going beyond one past the end.

                                                                                  In section 5.7 of the N3797 draft it says:

                                                                                  Moreover, if the expression P points to the last element of an array object, the expression (P)+1 points one past the last element of the array object, and if the expression Q points one past the last element of an array object, the expression (Q)-1 points to the last element of the array object. If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.

                                                                                  So there are limits to where you can add or subtract. Further I was able to find

                                                                                  begin() returns an iterator referring to the first element in the container. end() returns an iterator which is the past-the-end value for the container.

                                                                                  From section 23.2.1 of the N3797 draft. The ‘past-the-end’ value is not defined anywhere, but for an array or a vector it could easily be implemented directly as a pointer one past the last element in the container.

                                                                                  1. 1

                                                                                    Ah, thanks. I remembered there was some undefined behavior possible here during pointer arithmetic, turns out it’s for when comparing pointers to different array objects.

                                                                                    I knew that the end iterator in C++ always “pointed” one beyond the last element but figured it was implemented as just an index.

                                                                                    1. 1

                                                                                      Well, you might actually be able to find a container where the iterator itself would be best implemented that way. That is certainly the benefit of the abstraction.

                                                                          1. 1

                                                                            Very cool, I was just going to play around with execution modes next week too!

                                                                            Any ideas if there are any gotchas regarding linking things that rely on Go’s runtime like goroutines?

                                                                            1. 2

                                                                              None of this new stuff is really well documented at this point (so forgive me if I make a mistake), but as I understand it, the runtime is spawned on its own thread(s) the first time the library is called into; package init runs (but main is not executed), and then the called function is executed. The runtime will manage goroutines across OS threads (‘M’s, scheduled onto logical processors, or ‘P’s, in the runtime nomenclature) like normal, and these continue executing independently of Rust threads. I also have a hunch that each time Rust makes a Go call, some kind of thread context switch is required so that the function can be called on a new ‘G’ (goroutine), the same way it currently works when Go code makes a C call using cgo.

                                                                              I did get a simple toy program running that used a channel returned by Go to receive integers and print them from Rust. I added the following Go functions:

                                                                              //export CounterTo
                                                                              func CounterTo(max int) <-chan int {
                                                                                      c := make(chan int)
                                                                                      go func() {
                                                                                              for i := 0; i < max; i++ {
                                                                                                      c <- i
                                                                                              }
                                                                                      }()
                                                                                      return c
                                                                              }
                                                                              
                                                                              //export RecvInt
                                                                              func RecvInt(c <-chan int) int {
                                                                                      return <-c
                                                                              }
                                                                              

                                                                              And here was the Rust code: http://sprunge.us/jUVi

                                                                              All of the usual memory management caveats apply. Memory allocated by Go cannot be safely passed across the C boundary without retaining a reference to prevent it from being collected and causing a use-after-free. I made this mistake with my channel example (and it’s a good thing I had not pushed that code yet), since the counter goroutine would exit after the max was reached and the channel could be collected, making RecvInt unsafe to call from the Rust side.

                                                                              Another bummer for me is that these new execution modes only seem to be supported on linux/amd64. This is unfortunate in my case, as I had wanted to use this to integrate Go code into a .NET Windows GUI application. Since I believe Go 1.5 is in feature freeze at this point, something like this may have to wait until 1.6 or later.

                                                                              1. 1

                                                                                That’s very informative, thank you!

                                                                            1. 4

                                                                              I’m not sure why I don’t use bitbucket. I have to imagine it’s either a) because it’s made by Atlassian or b) because everyone else seems to use github. More the latter, I imagine.

                                                                              1. 2

                                                                                There is a discussion on “why do you use github” on the discussion page for Leaving Github.

                                                                                1. 1

                                                                                  I love bitbucket, but then again I use hg

                                                                                  1. 1

                                                                                    agreed, it’s the latter. Github even says it on their homepage, “GitHub is the best way to collaborate with others.”

                                                                                    bitbucket says, “Unlimited DVCS Code Hosting, Free.”

                                                                                    Free isn’t an advantage. There’s no shortage of free git hosting: Assembla, Codeplex, and I’m sure many others. So I keep my private stuff on Assembla and public on Github.