Author of the post here – here’s the link to the project if you want to skip my bad poetry and just check out the product!
Creator of Thyme here. We made this to track application usage and find possible interventions to boost our productivity at Sourcegraph. Happy to answer any questions and hope you find it useful!
I guess my main question is how this differs from RescueTime?
Or ManicTime, whose main advantage is that all records are stored locally. However, you can also run your own ManicTime server and publish your time records to it.
This is interesting. I use Consul for healthchecks. What are advantages/disadvantages of using this over straight consul?
Consul is a great project, and I think it’s a bit of an apples/oranges comparison. Consul health checks are a bit lower level (checking the health of individual nodes and services in your Consul cluster rather than checking the availability of your app to end-users, although you can certainly use them for that, too) and are one component of the Consul suite of tools.
Checkup aims to be more of a standalone OSS alternative to services like Pingdom and StatusCake, and should work with any web service. It also comes with a status page dashboard out of the box.
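The core of the kind of end-to-end check Checkup performs can be sketched in a few lines (illustrative only, not Checkup's actual code, which is written in Go): request the endpoint and report whether it responded successfully within a timeout. The demo server here is a throwaway stand-in for a real service.

```python
import http.server
import threading
import urllib.request

def check(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failure, and timeouts.
        return False

# Demo against a throwaway local server bound to a random free port.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

ok = check(url)
print(ok)
server.shutdown()
```

A real monitor would run such checks on a schedule from multiple locations and record latency, not just up/down.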
I’m an early user of Checkup and work at the company that created it (Sourcegraph). Happy to answer any questions people have!
I don’t see how this is fair. The license can be summed up as: “Contribute your work, I keep the money. Forever.”
It is basically a source view license, similar to many used by Microsoft. (https://en.wikipedia.org/wiki/Shared_source)
Sourcegraph CTO here. Our goal with Fair Source is NOT to have people in the community write 90% of the code and then profit off their hard work. Rather, it’s to make the code that we’ve written for Sourcegraph publicly available for all to view (rather than keep it closed off). If a user notices an issue or wants to create a small enhancement or plugin, they can do so, instead of filing an issue and waiting for a response. But we are also glad to receive issues and will resolve them as quickly as we can. Retaining the ability to sell the core product means we’re incentivized to make the core product as good as possible, all while having the source code publicly available.
That’s precisely what I am saying: it’s for viewing. That’s okay, but not novel and was heavily criticized when Microsoft did it - for good reasons. The software doesn’t contribute to the commons.
That’s all okay and you are perfectly within your rights to do this.
There’s one thing I find rather ugly though: the appropriation of the term “fair”. You’re on a high horse there.
With all the recent open source authors losing it and quitting open source, are they really on a high horse?
We place too much burden on open source authors. I think your summary, “Contribute your work, I keep the money. Forever.” is unfair. In reality they will be producing the vast majority of the work. How many people actually contribute to someone else’s open source project, enough that they could feel entitled to money for their contributions?
All of this complaining is the same bullshit people cite when leaving open source. These posts are even highly upvoted here on lobste.rs, and yet when someone tries to solve this problem for themselves our reaction is to bitch about it? For fuck’s sake.
By picking the term “fair” for them? Yes. “Public source viewing licence” (PSVL) would have been a much tamer term.
That is okay, but the result is not open source and not free. It cannot be reused; even the FAQ for the license says so. As I said: I am fine with the terms of the license, but it should not pose as an open source or free license. Contributing to the commons is an important part of open source. Open source also comes with a problematic funding situation that needs to be solved, and I don’t want to question that.
I’m sorry, but I don’t see how we are even discussing the same thing. Handwaving that everyone is “bullshitting” when they don’t agree with certain solutions isn’t helping either.
So, users are free to contribute fixes to their own problems, but can still pay full-price for the privilege!
More seriously, why not dual-license under a viral copyleft (say, AGPL3) and a normal commercial license? That’s what id Software did.
I want to reiterate the fact that we don’t expect users to fix bugs or issues on their own. Our team is fully committed to responding to feedback and making Sourcegraph into the best code host possible.
The reason we didn’t do a dual-license model is because we wanted to retain the ability to sell the core product to companies. This preserves the incentive for us to invest in improving the core product and technology.
You can still sell the dual-licensed product…especially if it’s with a super-viral copyleft license. You just explain “Hey, if you don’t want to be legally liable to disclose your entire product’s code, buy our commercial license”.
The people that would ignore that and not pay would do the same under the proposed license.
There are many examples of dual-licensed software which work exactly this way. Sidekiq is a recent example, which makes the core functionality, which is good enough for a large percentage of users, free under the LGPL, and has a “Pro” and now an “Enterprise” version which adds significant feature improvements to the library and is “commercially” licensed.
MySQL is, of course, another major example of this.
Indeed, MongoDB uses dual-licensing to what seems like a great success!
It seems fine to me, but more as a way to encode business practices in the license. It seems like what I’d want if I were writing software that I was perfectly cool with hobbyists and small shops using, but would want enterprises to be explicitly on the hook for payment.
What I’m interested in is whether or not any enterprises will actually touch software licensed in it.
Well, but one of the core ideas of open source and free software is contribution and collaboration.
I would see no reason to contribute to such a project.
The license however does allow you to make your own changes to the software, which is a huge reason why big companies prefer FOSS, they can change stuff to their own means as they need to for their own needs.
How frequently does that actually happen, though? For companies that are not in the software business, I would say just about never.
I don’t know how often it happens in general, but I’ve seen companies make changes to OSS code quite a bit. It usually only gets publicized when they are caught violating licenses, or contribute back a huge amount of code.
In my experience, most companies make relatively small changes and obey licenses, so they don’t raise much interest.
For example, if a company has an internal application with a custom logging system, they might modify the OSS libraries they’re using to use their logging system. Since it’s an internal app, the GPL doesn’t require making the modified source available, and other licenses are even less restrictive.
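In some ecosystems that kind of change doesn't even require editing the library's source. A minimal sketch, assuming Python and a hypothetical third-party library named `osslib`: attach a handler that forwards the library's log records into an internal system.

```python
import logging

# Hypothetical stand-in for a company's internal logging pipeline.
class InternalLogHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # A real handler would ship this to the internal log system;
        # here we just collect the formatted message.
        self.records.append(self.format(record))

# Route the OSS library's logger ("osslib" is illustrative) into the
# internal handler without modifying the library itself.
handler = InternalLogHandler()
lib_logger = logging.getLogger("osslib")
lib_logger.addHandler(handler)
lib_logger.setLevel(logging.INFO)

lib_logger.info("request handled in %d ms", 42)
print(handler.records[0])
```

When the library hardcodes its own output instead of using a pluggable logger, that's exactly when companies resort to the small private patches described above.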
I guess some kind of bounty system that let you target individual bugs could be a good way for companies to pay for work. Or the money received from commercial licences could go toward bounties.
I’m interested in this but I’m hung up on the bespoke ‘Fair Source’ license. You mention that it is meant to be used as Fair Source ___ where blank is the number of users before you have to start paying. But I don’t see a user limit anywhere on the site. Without that limit specified can it be assumed to be infinite? It’s hard to sell new licenses inside an environment where the lawyers have already taken on the GPL vs LGPL vs BSD vs MIT vs APLv2 and drawn lines on what they want to risk litigation on.
Thanks for the question. The use limit (15) for self-hosted Sourcegraph is specified in the LICENSE file: https://src.sourcegraph.com/sourcegraph@master/.tree/LICENSE. Sourcegraph.com is free to use for everyone.
We worked with a well-known open-source lawyer to draft Fair Source. If you’re using Sourcegraph for a team of above 15 people (and paying us to do so), then it would be a standard commercial license. Fair Source enables us to make the source code publicly available and to let teams with fewer than 15 users try it out both free as in freedom and free as in beer.
So, I think you’d raise a lot fewer hackles if you totally reworded your elevator pitch on fair.io:
The Fair Source License functions just like an open-source license—up to a point. Once your organization hits the license’s specified user limit, you will pay a licensing fee to continue using the software.
It’s not at all open-source. You’re restricting my right to redistribute, I can’t meaningfully sell my modified version, etc. It’s shared source that’s free for N users. Maybe something like:
The Fair Source License grants everyone the ability to see the source code and makes the license free for a limited number of users. It attempts to offer some of the benefits of open source software while retaining the ability to profit from a codebase.
I have nothing against trying to sell software - but when you (unintentionally or not) make it look like you’re being “open source” you’re going to have a bad time.
Excellent to know. I should’ve checked the source… :) I looked at your lawyer’s blog and they do seem to be legit which is a selling point. I would say that I wouldn’t have a hard time selling this to the powers that be.
That said I’d really like to see C/C++ support in srclib. Thanks for the response.
C/C++ is on the roadmap in the next couple of months. Shoot me an email at firstname.lastname@example.org and we can let you know when that’s ready :)
In the meantime, feel free to check out our code analysis library, which is completely open source: https://srclib.org
Sourcegraph CEO here. We have large enterprise customers using Sourcegraph and are excited to release it publicly (with source code) so anyone can start using Sourcegraph for their team’s code.
Feedback is much appreciated!
More info at the announcement blog post and at Sourcegraph.com.
Looks neat, I like the features. I often find myself browsing github for something, then having to jump back to IntelliJ so I can follow method calls and “find usage”.
How does the “inline discussion” stuff work once the code has completely mutated past what it once looked like? Is there some kind of timeout that makes these notes fade over time?
This might be a strange question, but is it possible to register via github sign-in? The biggest benefit (imho) to github for OSS projects is the network effect: anyone can stop by and send a PR, file a bug, etc. Having your own tracker definitely brings tech benefits, but the downside is that there is more friction to getting outside people to contribute. Or perhaps this is really targeted more for internal development that isn’t necessarily OSS?
Probably just my connection (DSL yay), but many page loads are super slow for me. 3-14 seconds in many cases, and it looks like the json XHR RPC calls are doing a lot of it (long load but only a few bytes).
Any chance of getting Rust support? :D
EDIT: srclib looks neat! Will have to poke around that a bit more :)
Thanks for checking it out! We’ve resolved some perf issues and the site should be faster now. If it’s still slow, can you send me the page URLs that are slow (email@example.com)?
Inline code discussions are currently tied to a line range in a file at a specific version, which makes them useful as a jumping off point to filing an issue in Sourcegraph Tracker. In the near future, we’d like to make them attachable to specific pieces of code (e.g., a function definition), as well, and in this case, the comment can remain as a relevant contextual item for the duration of the lifetime of the function. I like the idea of fading discussions out over time, as well.
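The anchoring scheme described here can be modeled roughly as follows (a sketch with illustrative names, not Sourcegraph's actual schema): a discussion pins to a repo, commit, file, and line range, and is only guaranteed accurate at the commit it was made against.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodeAnchor:
    """Pins a discussion to a line range in a file at one commit."""
    repo: str
    commit_id: str   # revision the comment was written against
    path: str
    start_line: int  # 1-indexed, inclusive
    end_line: int

@dataclass
class InlineDiscussion:
    anchor: CodeAnchor
    author: str
    body: str

    def is_current(self, head_commit: str) -> bool:
        # Naive staleness check: the note is only guaranteed accurate
        # at its anchor commit. A smarter system would try to re-map
        # the range across intervening diffs (or to a function
        # definition, as suggested above) before declaring it stale.
        return self.anchor.commit_id == head_commit

d = InlineDiscussion(
    anchor=CodeAnchor("example/repo", "abc123", "main.go", 10, 14),
    author="alice",
    body="Should this handle the nil case?",
)
print(d.is_current("abc123"), d.is_current("def456"))
```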
Rust is on the roadmap, but we welcome any and all contributors to srclib, our open-source core code analysis library :)
We don’t support GitHub sign-in currently, but that’s something we’ll consider in the future.
It’s a great announcement!
Hey, thanks for checking out Sourcegraph! Apologies for the slowness on the site yesterday. We’ve been working through some performance issues and it should be much faster now.
We use server-side rendering for the mostly static parts of the site – good for SEO, performance, etc. For the more interactive parts (code browser, etc.), we use React to make more dynamic features like the code browser pop up box and smooth jump-to-def within a file.
This looks really nice. Particularly once there is more language support it seems like it’ll be better than the other tools I’ve used. I did notice a bug, in Safari (9.0.1 on OS X 10.10) the following link appears to have a redirect loop:
Edit: Beyond Go & Java I’m particularly interested in Python (painful to analyze, I know). The other items on my wish-list would be Protobuf (if Sourcegraph could understand the correspondence between the generated code and the original file), Swift, and Objective-C. In an ideal world you’d be able to jump-to-definition across RPC boundaries (say iOS client into Go server code) but there’s probably no sufficiently general way to do it to make it worth doing.
If you find this valuable or have any issues / questions / suggestions, please tweet @srcgraph!
[Comment removed by author]
Over-aggressive filtering is often counter-productive
Absolutely agree on that point. We think having someone write real code is a better filter than asking them to write it on a whiteboard. I’ve had the displeasure of turning away candidates who I thought were awesome at previous jobs due to their inability to figure out some esoteric algorithmic trick to get the function they had 30 minutes to write down to O(n) time, and I’d like to avoid that in the future.
This is why it pays to have a personal github account
It absolutely does. In fact, we love people who make substantial contributions to open source, because it’s such a strong indicator of programming ability. It’s a very strong factor in our decision process, and for people who have many years of experience, only more so.
tell me why I should work for you?
Shouldn’t an experiment have a “control” group?
3 hour test is a big investment just to find out I might not like working there.
Author of the post here. Yeah, we’ve worried that 3 hours upfront might turn away some people. But when you think about it, when you sign up for a phone interview, you’re blocking off an hour of time on your schedule anyway and you probably won’t know for sure that you want to work somewhere until you go onsite. Plus, reading the programming challenge instructions takes about 2 minutes and you can decide right away if you don’t want to do it. A lot of people who’ve done the challenge have told us that they wanted to do it because it’s just an interesting problem.
A control group is difficult when you’ve got limited bandwidth. What we’re comparing against is our own past experiences interviewing and being interviewed. You’re right, it’s not an experiment in the scientific sense; it’s just a thing we’re trying out to see if it goes better, and so far, it’s been going very well.
Well, thanks for sharing what your company is doing. Hopefully, it keeps working for you.
How much time did you spend developing the test?
We spent about 5-6 hours writing it up, doing the task ourselves, and then updating it. It didn’t take too long to come up with the idea, because we adapted a cool problem we came across developing the site.
“For prospective candidates, three hours might seem like a lot of time for an initial “interview”, but in reality, it saves time for everyone.”
I may be a bit jaded here, but it seems that this tactic saves time mostly for the employer who no longer has to sift through the ‘chaff’, and makes the interviewee assume a large portion of the risk up front. As a potential candidate, I would find this worrying, and barring anything else that made the company truly stand out, I wouldn’t bother with the test. I am sure it is not the interviewers intent, but the technique comes across as being in bad faith.
Also, for anyone with responsibilities (deadlines, teams to manage, children, etc), a “24 hour window” to work on a 3+ hour project is going to be a bit unrealistic.
Thanks for the feedback. Regarding the 24 hour window, so far most of the candidates we’ve interviewed are in college or recent graduates. We’d definitely consider expanding the window for anyone who has a more demanding schedule. Our intent is to be more flexible than phone interviews by allowing someone to work around their schedule, instead of setting a specific time block.
And we’d definitely hop on the phone with a candidate to tell them more about Sourcegraph up front. All the people who’ve done the challenge so far we met previously at a job fair or through a mutual contact. The post doesn’t do a good job of mentioning this, but we’re definitely not saying, “Here, before we even talk to you, you must do this 3 hour task.”
I hope I can convince you that this experiment is not in bad faith. Having gone through a bjillion of these ourselves and sunk many hours into interviews that didn’t pan out, we wanted to try to make things better for all parties involved.
I’m working on Sourcegraph, code search for open source. It provides good usage examples and docs for you when you’re coding. This week, I’m focused on improving the snappiness of search and fixing outstanding errors that have popped up (let me know if you find any more!)
Nice project! I like how methods are linked in the source code to their method pages.
What search backend are you using?
Sourcegraph uses a combination of PostgreSQL and Elasticsearch for searching. We’ll write up a tech stack post one of these days, but ask away if you have any other questions.
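A common way to split work between the two (a schematic guess, not Sourcegraph's actual implementation): the full-text index returns ranked document IDs, and the relational store, as the source of truth, hydrates the full records. In-memory dicts stand in for Elasticsearch and PostgreSQL here.

```python
# Stand-in for an Elasticsearch full-text index: id -> searchable text.
search_index = {
    1: "http router for go",
    2: "postgres client library",
    3: "go http middleware",
}

# Stand-in for the PostgreSQL source of truth: id -> full record.
database = {
    1: {"name": "mux", "stars": 900},
    2: {"name": "pq", "stars": 500},
    3: {"name": "negroni", "stars": 700},
}

def search(query: str) -> list[dict]:
    """Rank IDs in the text index, then hydrate from the record store."""
    terms = query.lower().split()
    # Toy ranking: count how many query terms each document contains.
    scored = [
        (sum(t in text for t in terms), doc_id)
        for doc_id, text in search_index.items()
    ]
    hits = [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]
    return [database[doc_id] for doc_id in hits]

print(search("go http"))
```

The appeal of the split is that the index can be rebuilt from the database at any time, so the search side never has to be the system of record.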