I’ll agree that the push to Apache seems to be coming from a few unnamed corporates rather than “the LLVM community” as a whole. But Theo is being quite unfair with the assertion that proper signoff won’t be sought from contributors. As Chris Lattner says “Here we are discussing what the best end game is – not the logistics of how to get from here to there.”
Step 1: decide to investigate changing license
Step 2: decide what license to change to ← we are here
Step 3: decide a plan for getting sign-off from contributors on the licensing change, and dealing with missing contributors or those who do not want to support the license move (e.g. rewrite or revert the change)
Step 4: enact the plan
From the outside, it does seem a bit odd to only talk about “where you want to get” (apparently, a year on, the answer to this just ended up being “Apache 2.0”) without considering how you would get there (or if you even can).
Overall it seems like kind of a bummer[1] though. I would think using a license to cover copyright, and a patent grant to cover patents, would be the clearest solution – apache 2.0 mixing the two always struck me (as a non-lawyer) as a bit dubious. Further, a CLA seems like it would result in the least change (no new license), and would put the burden on corporate contributors – who are the ones raising the issue in the first place.
For example, doesn’t react (someone in the office who is familiar with it mentioned it as an example) use a BSD license with an explicit patent grant / CLA process?
[1]: I assume it would mean that clang would never be included as the default compiler in openbsd if it were apache-2.0?
Yeah, I’d honestly be very surprised if they do manage to switch. There are ~700 contributors to llvm alone, and I’d be surprised if they could get them all to respond, let alone agree to a license change.
That said, it’d be quite a shame if it did happen. I’m super excited to see llvm + clang in OpenBSD.
I’ve been through a lot of todo management tools (even wrote my own minimal solution) and I have to say my favourite todo tool is taskpaper.vim. It’s easy to overlook it as being just a syntax file for the taskpaper format, but actually the few keybindings it provides to mark tasks as done and archive them are all you need. Key features it offers that are surprisingly hard to find elsewhere:
[Comment removed by author]
Care to share any details about your language+implementation? Statically typed or dynamically typed? VM or AoT compiled?
[Comment removed by author]
Have you looked at the Sparrow language before? It offers a very complete set of metaprogramming features http://forum.dlang.org/post/ne3265$uef$1@digitalmars.com
Sounds fun, when do you think you might share it with the world? I’m always interested in new language designs.
I feel that having Cloudflare host a subset of Linode’s DNS servers would be a great way to add extra resiliency. Putting all DNS in the hands of an external provider and retiring Linode’s own servers seems a shame, though.
Inspired by the weekly threads here on lobste.rs and in the Rust community, I’ve actually started a “What are you working on” thread on the lowRISC mailing list. lowRISC is our project to produce a completely open source System on Chip implementing the RISC-V architecture.
Last week I was at DAC, talking to various vendors and delivering a section of the RISC-V tutorial. Our section made use of the untethered lowRISC release and associated documentation.
Thanks to hard work from my colleague Wei, lowRISC contributor Stefan, and others we are now approaching the point where we can make a new code release including trace debug support. The previous untethered release still had support for the host-target interface (HTIF) in the kernel but had a small program to map the HTIF calls to underlying hardware peripherals (e.g. UART, SD card). This week, I want to look at ensuring we have a kernel that properly supports communicating with the platform peripherals directly.
I wonder how many of these repositories aren’t software (e.g. dotfiles or other config, websites). You could of course put a license on these (though tracking provenance for snippets in your vimrc might be a pain), but I can understand why most wouldn’t bother.
Yes, agreed. My point was that not putting a license on a project, however small and trivial, is contributing to making no-licensing (and thus ignoring the legal aspects) a cultural norm. Small projects today, big projects tomorrow…
I take day-by-day notes in a markdown file in vim (simple chronological order). I keep my todo-list in Taskpaper format using the taskpaper.vim plugin, and my preferred method of working involves adding lots of notes under each task I work.
“It’s the world’s tiniest open source violin”
Phabricator. It’s used successfully by Wikimedia, LLVM, FreeBSD, Blender, and many more communities. A bot to help bridge would be great (e.g. submit a pull request on Github, the bot creates a Phabricator review and directs the submitter there).
Side note: anyone using Phabricator know of a good Not Rocket Science testing system? I’m a little new to it still and am not sure how to make Revisions work how I want.
Gitlab. Open-source, with a hosted option if that’s the service you need, but since it’s open-source you can also run it yourself, or pay someone else to, and contribute changes if you need them.
I’ve run a small/mid-sized project on here for the past few months, and I’ve been quite happy with it. It does everything I need, except that the primary gitlab.com instance does not allow commenting over email, though this can be enabled for private installs.
IMO, BitBucket is superior to GitHub in every way except for CI/CD integration. Which I believe they are working on. It’s still possible to at least kick off jenkins jobs and what not but it’s a bit janky and there is no feedback yet. Otherwise, I find BitBucket to be very well done.
EDIT: I’m responding to the above from a feature/quality perspective. Not based on the xkcd cartoon.
Bitbucket recently got CI status integration. As an Atlassian employee I’ve seen some really cool Bitbucket and CI integration being used internally. I’m sure some of this slickness will be shown using public projects soon.
you can’t even search in repositories in bitbucket online.
why do you prefer it?
i use both, and find bitbucket worse in most of the web user experience: no searching, you can’t easily tell sources from forks, and the dashboard shows repos rather than the activity of people you follow as the primary thing (i use this on github a lot).
The two things you mention are two things I basically never use. Most of the repositories I interact with are ones I’m using locally and have in my various tooling already and most of the programming I do is in organizations where forks aren’t really useful at all. BitBucket has robust branch permissions which I make more use of.
The Pull Request system, which is my main use for any tool like this, is significantly superior to GitHub’s for my usecases. It has Reviewers, real Approve buttons, and Tasks, all of which I use a lot. I don’t really care about the social/activity aspect that GitHub is aiming for; I mostly care about a tool around development, which I find BitBucket does a lot better. I also have to use GHE at work, which I find very aggravating to use.
I used self-hosted gogs for a bit, but ended up returning to github because I missed the social/community features. Sure, they technically exist on gogs too, but who’s going to sign up for my gogs instance just to, say, post an issue, or star/watch/whatever it?
One can use cgit and use email for reviews. No need to create an account. Although the barrier to entry may be a little higher since not many people use git format-patch / git am, this is more an issue of familiarity than something inherent to the process. I like it more than github’s pull requests as it is easier to go back and forth.
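For anyone who hasn’t tried the email-driven flow, it boils down to `git format-patch` on the contributor’s side and `git am` on the maintainer’s side. Here is a sketch in a throwaway repo (names and addresses are made up; a real workflow would send the patch with `git send-email` rather than a file copy):

```shell
#!/bin/sh
# Sketch of the email-based patch workflow: the contributor exports a commit
# as a mailbox file, the maintainer applies it with authorship preserved.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Maintainer's repository with an initial commit.
git init -q upstream
git -C upstream -c user.email=m@example.com -c user.name=Maintainer \
    commit -q --allow-empty -m "initial"

# Contributor clones, commits a change, and exports it as a patch.
git clone -q upstream work
cd work
echo "fix" > fix.txt
git add fix.txt
git -c user.email=c@example.com -c user.name=Contributor \
    commit -q -m "Add fix"
git format-patch -1 --stdout > ../0001-add-fix.patch

# Maintainer applies the emailed patch; author identity comes from the mail.
cd ../upstream
git -c user.email=m@example.com -c user.name=Maintainer \
    am -q ../0001-add-fix.patch
git log --format="%s by %an" -1   # prints: Add fix by Contributor
```

The nice property is that the patch file carries the commit message and authorship, so nothing is lost in transit.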
For open-source projects with outside contributions/contributors, dead right. For my purposes though gogs is ideal. I’ve been using it for personal projects for a few months. Works well enough that I moved all my private repos from Github onto it and saved myself cash money. Fast, simple and regularly updated, often with nice new features that so far have all seemed pretty well-tested and working. For my v low-complexity requirements, natch. YMMV.
Well, sure, in terms of raw git operations, no reason - but private repos can still have multiple contributors, and even single-contributor projects can benefit from organisational tools like the issue tracker, milestones, wiki for notes, etc. Mostly though I just like the UI, the graphical, easily-click-through-able display of a range of projects at a glance, and the visual diffs are simple and easy to get at. Sure, none of this is anything Github/Bitbucket/etc doesn’t do, but it does all the bits that I need and like, well enough for me, for free, on my server.
I agree that there’s no shortage of OSS GitHub alternatives out there, and most of them work really well.
What kills me is the lack of a hosted free-software alternative to Google Groups. I have a couple projects on librelist.com, but it’s been down for almost a month now, and I haven’t gotten a response about what’s up. Hosting your own mailing list is really easy to screw up.
Kallithea, although it desperately needs a larger community of contributors to add features like pull requests and CI integration.
I see no one has mentioned Launchpad yet. Launchpad supports git repositories now, and they’re improving it steadily. The Launchpad blog has info on their progress.
Keep in mind that I work for Canonical, who started Launchpad and who employ everyone I know of who works on Launchpad development (I’m not really up on who’s doing what, though). There are other organizations who use LP, e.g. Openstack.
My own opinions of LP are mixed. I like it, and I used it heavily for a couple of years, but eventually moved to git, and moved off to mostly use GitHub, back before LP added git support.
LP’s bug tracking is more featureful than github’s issues. There are lots of other features that may or may not be useful, such as PPAs, translation support, blueprints, etc etc.
That’s great news, I really hope the RISC-V and lowRISC projects will be successful. I know this is premature, but I’ll take the opportunity to ask: when can we expect dev boards to reach general availability?
Thanks for the words of support. We intend to tape out a test chip next year, and we’d get ~100 dies through a multi-project wafer (possibly more if there’s spare space on the wafer and the fab is being nice to us, or if we pay for extra wafers). This would produce some dev boards for key contributors and project partners. Assuming a successful test chip, I’d expect general availability to follow in 2017.
Hey, that’s awesome. I’m on the RISC-V team at Berkeley, and we’re trying to move towards un-tethered systems as well. I won’t be able to attend the RISC-V workshop, but some of the other students will. I think it would be great if you could share some of your experience with them so we know what pitfalls to avoid.
So far, we have most of the required hardware features like MMIO and a hardware device tree implemented. From here on, it’s mostly implementing peripherals and updating all the software.
It’s a shame you can’t make it. Both Wei Song and I are flying over for the workshop and presenting - Wei will give more detail on the untethering work. It will be good to catch up with your colleagues.
Our current implementation uses a separate I/O bus for simplicity and to minimise the invasive changes to the Rocket codebase. It seems like your MMIO implementation may have sorted most of the bugs now, so we may want to move over to that.
Nice! Any insight into lowRISC versus the work that the Open Processor Foundation is doing with J2/SuperH?
Sure. First of all, the focus of OPF right now is on much smaller core designs (microcontroller class) while we’re heading towards a SoC that can “run Linux well”. They’re releasing new designs as the original SH patents expire. My understanding is it won’t gain an MMU (memory management unit) until 2017, with 64-bit support coming sometime after that. Instead, we are using the open RISC-V ISA and basing our application cores on the Rocket design out of Berkeley. The upside is you get a clean, modern ISA that anyone can use right now. The downside is there’s a fair bit of work still to be done on the software side – but there is already a Linux port, a decent GCC toolchain, etc. Aside from producing RTL for a complete SoC reference design, we also intend to produce the SoC in volume and sell low-cost development boards. In the future, it would be great to be able to offer a regular (e.g. every 18-24 months) tapeout schedule, so people know they’ll be able to see their contributions in silicon in a reasonable timeframe.
In summary: we’re both interested in pushing forward open hardware, but are going about it different ways and have a different focus for the time being. I’m very interested in their DSP design http://0pf.org/working-groups.html.
As ever, I try to highlight interesting developments from the LLVM project in LLVM Weekly (just hit issue 100). Sign up if you’re interested in keeping track of future LLD dev.
Even though my low-level-fu is weak, and my day-to-day work is miles away from LLVM, the LLVM Weekly newsletter is one of my favorites! It’s a great way to keep up-to-date with the project, and that part of the development spectrum in general. Thanks for the effort!
Mostly chiptunes on the excellent Rainwave internet radio station http://chiptune.rainwave.cc/
So have we in the lowRISC project. We have a range of project ideas across many languages and different levels of the hw/sw stack at http://www.lowrisc.org/docs/gsoc-2015-ideas/
Should be a great summer!
I’m very excited that lowRISC has been accepted to take part. We’re working with a range of projects in the FOSS and OSHW community to provide what I think is a pleasingly broad set of project ideas. You can help on our mission to produce a fully open source SoC while getting paid to hack in Python, Go, Javascript, C, C++, Chisel, or SystemVerilog on projects at any level of the hw/sw stack.
Surprisingly, some organisations who have been taking part for years didn’t make the cut this time. Most notably, Mozilla weren’t accepted. No Blender, Tor, Xen, or Linux Foundation this year either.
“Take care when publishing a crate, because a publish is permanent. The version can never be overwritten, and the code cannot be deleted. There is no limit to the number of versions which can be published, however.”
This is good news!
I don’t know… What if you accidentally published private information? I feel like there should at least be some sort of window within which you could undo the push.
You can contact us and we’ll pull it in cases like this. There’ll be actual policy before 1.0: this is like a ‘beta prerelease’ for everyone to try out the infrastructure now.
But setting the expectation that you can’t just yank versions whenever you want is good.
I know hex supports this with a one hour grace period. After that, it’s locked in for good. Maybe crates.io would benefit from a feature like this.
Then it’s been published. Time to dust off that harm minimization document.
Deleting the version isn’t going to take it back. Especially as rust becomes more popular, and there are mirrors all over the net.
But it may erroneously contain third-party IP. Continuing to distribute it would put you at legal risk.
I won’t put any words in their mouths. There will almost certainly be a way to takedown specific URLs. (DMCA)
But, as a serious issue, if you have that as a risk then implement a delay / sign-off process. You can’t assume a central replication system is going to implement an “I take it back” system.
It is good news! A big beef I have had with maintaining node related ports on OpenBSD is people publishing updates to existing versions! Totally breaks the checksums in the port tree!
Squeee - Testing out my nifty hat!
This week, I’ve been trying to get back on the horse. I didn’t post an update last week, not because I didn’t do anything, but because I just didn’t feel like talking.
This has been an interesting week though and I’m finally ready to start talking about my new project.
I haven’t done much with Open Dylan as I’ve been busy with some other stuff. I did, however, start the creation of a Homebrew tap for Dylan that will work once the 2014.1 release is out. I also added some very basic bash completion for the Dylan compiler. And, speaking of the 2014.1 release, I’ve decided to do the release around the start of December, on whatever platforms work at the time. Anyone interested in non-OS X platforms should get in touch to do builds, testing and fixes.
For my work, I’m moving to a new project for the client this week. I’ll be working, in part, to track down some performance issues in their code base. This is all emscripten related, and some of the issues may require me to use IRHydra2 to track some issues under V8. This should be pretty interesting and a good challenge. I look forward to learning a lot.
My new project is something that I’ve only talked about in a relatively private setting so far in any detail. I call it Debug Workbench and it is a programmable, extensible visual debugger. I’ll start talking about real details fairly soon, but if you’re interested, follow along on Twitter at @DebugWorkbench.
Debug Workbench is being built using Atom Shell to provide the user interface and currently with LLDB as the debugger backend (but this is going to be pluggable). The current target is to deal with native debugging (since I’m using LLDB), but I may expand in the future.
One of the things that makes this different from existing debuggers is that it uses plug-ins to extend the system. Once you have the ability to set breakpoints that can execute scripts, a lot of magic things can happen.
One example of this could be a performance advisor plug-in that watches for common API mis-usage, like calling regcomp with the same regular expression many times, or calling write with small buffers frequently.
Another, much more involved use case would be an extension that watches for calls to accept, bind, connect and reads data from the debuggee process and emits events to indicate (to other plug-ins) what is happening in terms of network activity. From there, another plug-in could start a packet capture and monitor the actual network traffic generated by the program being debugged. This is why I wrote about packet capture and dissection from within JavaScript recently.
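The shape of that plug-in idea can be sketched with a toy event bus. Everything below (EventBus, the "api-call" event, PerfAdvisor) is a name invented for illustration, not Debug Workbench’s actual API; in the real tool, a breakpoint script on regcomp would be what emits the events.

```python
# Toy sketch of script-driven breakpoints feeding analysis plug-ins.
# All names here are hypothetical, not Debug Workbench's real API.
from collections import Counter


class EventBus:
    """Minimal publish/subscribe hub connecting plug-ins."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, **data):
        for handler in self._handlers.get(event, []):
            handler(**data)


class PerfAdvisor:
    """Flags repeated regcomp calls on the same pattern (a common misuse)."""

    def __init__(self, bus, threshold=3):
        self.counts = Counter()
        self.threshold = threshold
        self.warnings = []
        bus.subscribe("api-call", self._on_call)

    def _on_call(self, fn, arg):
        if fn == "regcomp":
            self.counts[arg] += 1
            if self.counts[arg] == self.threshold:
                self.warnings.append(
                    f"regcomp({arg!r}) called {self.threshold} times; "
                    "compile once and reuse the result")


bus = EventBus()
advisor = PerfAdvisor(bus)
# A breakpoint script on regcomp would emit one of these per hit.
for _ in range(3):
    bus.emit("api-call", fn="regcomp", arg="[a-z]+")
print(advisor.warnings[0])
```

The network-activity example would be the same pattern with more plug-ins chained: one emitting socket events, another consuming them to drive a packet capture.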
By using Atom Shell, I have full access to a huge ecosystem of useful code and capabilities. Want to use D3 to graph some data from your program? Want to use WebAudio to play back an audio buffer or look at a spectrum analysis of the audio buffer?
The goal here is to open up a whole new world of debugging tools and ways to interact with your running program and this is something that I’m pretty excited about. This has been taking almost every spare minute of my time for the last few weeks and the pieces are starting to come together.
I’m not entirely sure of the funding model for this yet. I want to keep as much open source as I can, but I recognize that this will need full-time efforts from some people to make it come true. To that end, I’m looking at running a crowd-funding campaign or looking for corporate sponsors.
I don’t want to talk about this on other websites yet though as I want to have a better working prototype and some screenshots first.
Until next week!
What are your thoughts on Trace-based debugging? ARM’s CoreSight seems popular in industry circles but has relatively little support in open-source tools.
My overall feeling is that there are a lot of interesting debugging tools, techniques and research projects that don’t get enough attention.
CoreSight does look interesting. I’ve talked with some people about the state of debugging in the embedded space, especially some using the MSP430, and things look pretty dire / terrible.
Another interesting area of research is that of replay / reversible debugging.
But for now, I want to get the basics working, build a solid platform and keep growing from there.
I have a lot more to say and a lot more to write … but right now, I want to focus on getting my prototypes of various things into a single coherent codebase.
Debug Workbench sounds very interesting to me! I would happily support via crowd funding or possibly contribute work as well.
Great to see you’re working on something like Debug Workbench, Bruce. Creating an environment for constructing operable, visible software is an interest of mine but I haven’t gotten all that far - it’s hard! It’s nice to have something (just about) out there for inspiration!
I feel the same way. I’ll often compose a tweet, spend a minute or two trying to rephrase it to fit in 140 chars, and then give up when I realize it’s lost its meaning in “translation”.
Customers tweet at my company account from time to time asking questions but usually all I can do is reply “send us a detailed e-mail and we’ll get back to you” because it’s such a useless platform for dialog. It’s mostly just a platform for shouting things and slacktivism these days.
I still don’t get why Twitter is holding onto the 140 character limit, making the vast majority of its users deal with a limitation imposed by an outdated delivery mechanism (SMS) that I would guess a very small minority of its users still use.
Twitter’s API could even support long tweets while still keeping backwards compatibility. Make the first ~130 characters the actual tweet, and stuff the rest of the text in an entity. Old clients just show the first ~130 chars but with some kind of indication that it’s truncated, but Twitter’s website and newer clients with entity support strip out that indication, load the rest of the text in the entity, and display it all together as one long tweet.
It’s mostly just a platform for shouting things and slacktivism these days.
To be fair, it’s pretty good for jokes too.
Not much else though.
To be fair, it’s pretty good for jokes too.
All kidding aside, this is 99% of what I use twitter for nowadays: a cheap laugh when I have 30 seconds of downtime.
Given the way Twitter have embraced and promoted tweets with embedded images, there really doesn’t seem to be as much argument for the 140 character limit as there was.
I’ve noticed people tweeting links to gists on the rise, which does seem like a stupid workaround (in the sense of something that Twitter could absolutely do automatically).
I like to use it as a public status update or a way to quickly thank people for generous things they might have done e.g. contributions to libs etc.
This weekend I was in Munich for the OpenRISC conference. I presented the lowRISC project, in particular our plans for tagged memory and ‘Minion’ cores for I/O. My slides are available here. I’ve completed the tagged memory part of our whitepaper, we just need to work in some more detail on the Minion cores and that should hopefully be ready for internal review in the next week or so, then put out soon after.
I hope to find some time to pick back up work on Pyland this week.
Sweet! lowRISC sounds pretty awesome. It would be amazing to eventually be able to run my software on completely open hardware. Can’t wait to see a Raspberry Pi type computer using this thing!
I know you worked on the Pi, so would such a computer use the Pi name? I know they are separate projects, but I think it would be in the spirit of the Raspberry Pi.
Our plan is to release a low-cost development board which would be used much in the way many hobbyists currently use the Pi. We remain separate projects with no plan for a Raspberry Pi with a lowRISC inside badge. The first revision is not going to have a GPU which is important for many current educational uses of the Pi. That said, we hope another spin or two down the line it would be a compelling choice for projects like Raspberry Pi.
I frequently use `git add -p` to stage chunks of diffs, so that I can make separate commits of unrelated changes in a single file. I would miss that feature greatly.
Gitless has a `--partial` flag that can be used to interactively select segments of a file to commit.
As durin42 mentioned elsewhere, hg gets around this (without having a staging area) by supporting interactive commit selection with `hg commit --interactive`.
For non-trivial changes, how do you know for sure that your code still compiles afterwards? Is this not a problem in practice?
For non-trivial changes, I don’t know. But if it gets to the point where I have non-trivial changes in my working copy, I’ve already lost.
In my experience, a common pattern is:
`git add -p` the changes to the helper function and the tests, which I’m confident can stand alone because the existing tests still pass, plus my new ones… times as many stack-levels of yak-shaving as you can stand.
Once you have made small commits, you can do an interactive rebase to build every version and make edits if necessary. With e.g. Gerrit and Jenkins you can automatically get feedback whether each commit builds/passes tests.
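A lightweight version of that per-commit check is built into git itself: `git rebase --exec` runs a command after replaying each commit and stops at the first failure. A self-contained sketch in a throwaway repo, with `true` standing in for a real build command like `make check`:

```shell
#!/bin/sh
# Replay each commit and run a command after it; the rebase stops at the
# first commit where the command fails.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
g() { git -c user.email=t@example.com -c user.name=Tester "$@"; }
echo 1 > f; g add f; g commit -qm "base"
echo 2 >> f; g commit -qam "step 1"
echo 3 >> f; g commit -qam "step 2"
# Check the last two commits individually; "true" always passes here.
g rebase --exec true HEAD~2 >/dev/null 2>&1
echo "every commit passed the check"
```

Swapping `true` for the project’s build/test command gives the “every commit builds” guarantee locally, without needing Gerrit or Jenkins.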
You can use `git stash -k` to stash unstaged changes, run your compiler / test suite, and commit the staged changes if everything passes.
I think I could make do with their `--partial` flag, but the lack of `git rebase -i` is a bit of a deal-breaker for me.
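For completeness, the `git stash -k` pattern mentioned above can be demonstrated end to end in a throwaway repo (a `grep` stands in for the real compile/test step):

```shell
#!/bin/sh
# Demo of "stash unstaged, test what's staged, commit, restore":
# -k (--keep-index) stashes only the changes that aren't staged.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
g() { git -c user.email=t@example.com -c user.name=Tester "$@"; }
printf 'v1\n' > a; printf 'v1\n' > b
g add a b; g commit -qm "base"
printf 'ready\n' > a; g add a        # the change we intend to commit
printf 'wip\n' > b                   # unrelated work, not staged
g stash push -q -k                   # b reverts to v1; the index is kept
# The tree now matches the index: run the compiler / test suite here.
grep -q v1 b && echo "tree matches index"
g commit -qm "verified change"
g stash pop -q                       # bring the wip edits back
grep -q wip b && echo "wip restored"
```

The point is that the test run only sees exactly what is about to be committed, so a passing build actually vouches for the commit rather than for the whole dirty working copy.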