The nice thing about the Activity Streams spec is that it can capture many different types of interaction: writing a blog post, sending an email, listening to a song, sharing something, etc. All of these are easy to model, so it's a very general spec.
I like the tech, it’s just that distributed protocols are struggling at the moment. I doubt it will succeed.
The problem is that the spec doesn't specify what you are supposed to do with these activities beyond some very basic things, so essentially the first implementation of a certain object type (e.g. 'Video' for PeerTube) becomes the 'standard', and that standard isn't documented anywhere.
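To make that concrete, here's a minimal sketch in Go of roughly what an ActivityStreams 'Video' object looks like on the wire. The URLs and field values are invented, and PeerTube's real objects carry many more fields; the point is that the spec defines the shape, not the behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A bare-bones ActivityStreams "Video" object. The vocabulary defines
// these core properties; what a consumer should *do* with a Video is
// whatever the first implementer (here, PeerTube) decided.
type Video struct {
	Context string `json:"@context"`
	Type    string `json:"type"`
	ID      string `json:"id"`
	Name    string `json:"name"`
	URL     string `json:"url"`
}

func main() {
	v := Video{
		Context: "https://www.w3.org/ns/activitystreams",
		Type:    "Video",
		ID:      "https://peertube.example/videos/123", // hypothetical
		Name:    "My first upload",
		URL:     "https://peertube.example/static/123.mp4", // hypothetical
	}
	out, _ := json.MarshalIndent(v, "", "  ")
	fmt.Println(string(out))
}
```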
It seems like the EU is defending the rights of other smart phone manufacturers more so than the rights of consumers. I’d like less bloatware, not more.
Love the tech, going to donate. The web is moving in the opposite direction, though. Even email is having a rough time. Quite a bit of communication has moved to Slack, social networks, chat, etc. IRC & RSS are having an even rougher time.
Would I like more Mastodon and ActivityPub propaganda without much real substance and cute elephant friend pictures, reiterating how great federation is? You bet your top dollar I would! Take my upvotes, Mastodon blog! And congrats on self-hosting your blog and moving the propaganda machine off Medium!
I’m @JordiGH@mathstodon.xyz in case anyone wants to say hi.
“If Twitter shuts down, you’ll lose your followers. If Facebook shuts down, you’ll lose your friends. For some platforms, it’s not a question of “if”, but “when”. Such events are usually followed by a scrambling into a variety of different platforms, where you inevitably lose some people as you have to make a choice which one to stay on. This happened before. But it doesn’t have to happen again. Use the federated web. Join Mastodon.”
Adding to your comment: this one in particular is such bullshit. More like you won't have many friends, especially in real life, if you're on Mastodon instead of those sites. The people on those major sites also use more than one; if one shuts down, their friends almost always have another way to contact them. The shutdowns also don't usually happen overnight; there's often time to see that things are in decline. For instance, there are plenty of posts about Twitter's financial troubles, and anyone with sense will have some other account on the side. It's also amusing to see them say "if X shuts down" after seeing a headline about Instagram alone being worth $100 billion, watching Facebook's revenues grow over time despite all its threats, watching Slack do what IRC isn't in growth/profits, and knowing that previous shutdowns sent users to sites like Facebook instead of P2P software or Mastodon.
The article talked of companies disappearing. That can happen due to users leaving or financial reasons. Twitter’s money problems & mismanagement are a warning sign that should signal users to have another option. That’s all I was saying.
Sorry, my reply was more opaque than I meant it to be. I think the toxic things Twitter is doing to chase a profit are why so many people are leaving. I don’t think most users cared about their financial situation other than to wonder why the service was free.
Oh OK. Yeah, that makes a lot of sense. It's even more ridiculous when a company gains an almost unimaginable number of users, has tons of money to make off them, and just… lets them float away…
You'd think rational self-interest would've made them take action sooner. That's not how it often works in practice, though.
Make the weekly! We do standups every Monday and have a Slack channel where you post what you're working on that day.
Well yes there are limitations… I think you can work around most of them though. So maybe there was too much hype. But AI is here to stay. It will change everything because it allows more automation. It’s just really far away from a general AI, or sentient AI. Pattern recognition on steroids would be a better name, though not so sexy as AI.
Featuring lobste.rs in the onboarding flow: https://github.com/GetStream/Winds/blob/master/api/src/workers/featured.json#L474
Interesting to learn more about how you stay up to date on news/tech. Are you still using RSS, relying on Twitter, mailing lists, Reddit, etc.?
What does your backend setup look like for something like this? Would it be possible for you to allow self hosting in the future?
The backend API and frontend are both open source (https://github.com/getstream/winds). We rely on Algolia, Stream, and Mercury, though, which are closed source. That doesn't stop you from running your own backend or changing the app's functionality as you see fit.
There are many companies offering to help with GDPR for six-figure amounts. The cost of compliance is in the millions for many larger companies. (This author clearly doesn't understand the true cost of things.)
So far there are no real privacy benefits for me as a user. I personally don't care about people tracking my IPs, running analytics, retargeting, or doing split testing. I care about companies leaking my passwords, social security number, credit cards, messages, pictures, location data, etc. I haven't seen much improvement in that area. The end result so far seems to be more checkboxes and the ability to delete my user account. #awesome
It’s too early to tell if anything good will come out of GDPR. Fingers crossed though, there are real privacy issues to solve and I hope it helps with that.
Why ask for our work email to download it? It comes across as very creepy and also unfortunate for those of us who don’t have one…
Well… it used to be a beta registration page, which we copied from another landing page that required your work email. It doesn't make sense at all, and I'll remove it on the next deploy.
which platform are you on?
I'm still fine-tuning the build for Mac and the App Store. You can try it out here (but it won't automatically update): https://s3.amazonaws.com/winds-2.0-releases/releases/Winds-2.0.173.pkg
I would say JS has already won. I prefer Python & Go, and I still think Django and Rails are miles ahead in terms of productivity. However, the vast majority of new projects are choosing Node. We actually use Node for all our example/marketing projects since it's just so much more popular than other languages. One interesting development is that Node adopted most of Python's features over the past few years; it really improved as a language. I still don't like the async callback approach to handling concurrency, but other than that it's a pretty decent language.
I don't know; depending on your company, country, and position you may feel this differently. In my limited work experience, people would run away from JavaScript: backend people mostly moving to Go, and frontend people moving from JavaScript to TypeScript, Elm, or Reason. I think it totally depends on where, with whom, and on what you're working…
Do people use Go for enterprise-y CRUD apps? I see a lot of Go for services and things that require eating through a bunch of data, but I don't hear about its use in other domains.
We use golang in this capacity. Backend services that don’t need tons of front end tooling are really nice to write in golang.
Have you been basically hand-rolling most of the functionality (thinking in particular about ORMs and outputting HTML for the client)?
To be honest, Go code tends to read very "nice C"-y to me, but that feels like it might lead to frustration when dealing with a bunch of strings to concatenate.
haha, why yes we have: https://github.com/blend/go-sdk
the golang stdlib gets you most of the way there; that sdk is really just a web helper, a logging/eventing helper, and a lite ORM, with a bunch of other random stuff thrown in for services that needed it
Yes, we did: as an API serving JSON, but also serving HTML templates. It's quite nice, though to be honest we didn't grow it too big, so we didn't have much trouble maintaining it.
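On the string-concatenation worry above: in practice the stdlib's html/template does the HTML output with contextual escaping, so there's very little manual string assembly. A minimal sketch (the User type and its fields are invented for illustration):

```go
package main

import (
	"html/template"
	"os"
)

// Hypothetical row type for a CRUD listing page.
type User struct {
	Name  string
	Email string
}

// html/template escapes values based on context, so handlers never
// concatenate HTML strings by hand (and accidental XSS is prevented).
var userList = template.Must(template.New("users").Parse(`<ul>
{{range .}}  <li>{{.Name}} ({{.Email}})</li>
{{end}}</ul>
`))

func main() {
	users := []User{
		{Name: "Ada", Email: "ada@example.com"},
		{Name: "Grace", Email: "grace@example.com"},
	}
	// Writes the rendered list to stdout; an http.ResponseWriter
	// would work the same way in a real handler.
	_ = userList.Execute(os.Stdout, users)
}
```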
Well, yes, a lot of their issues are caused by having APIs that are too open. To be fair, back in those days, the tech ecosystem was definitely pushing for this openness. It was considered a good thing. Now, not so much…
In our buzzword-driven field?
Probably people considered API access “a good thing” just because “Facebook/Google is doing this too!”
But the problem was not the technology back then, just like AI is not the solution right now.
It’s the business model.
I remember a younger Zuckerberg explaining to the world how privacy had no value for modern people.
He meant it!
back in those days, the tech ecosystem was definitely pushing for this openness.
I would hardly call 2015 “those days”.
It’s impressive how much money they raised. Wonder how well their monetization is working for them.
sighs
Replacing a corporate data aggregator with a distributed one doesn’t actually reduce the amount of data gathered.
If you don't want your information online and searchable, don't put it online.
It doesn't matter if it's a friendly Mastodon instance instead of a Harvard dudebro: sharing data means your data is shared. Staaaaaahp.
EDIT: Mastodon also has some interesting history.
If you don't want your information online and searchable, don't put it online.
This is not a panacea. Facebook has my phone number because other people chose to upload their contacts. Google has incredibly personal conversations because other people chose them for email. Equifax has my credit history because nearly every banking institution reports to them. Nielsen-Catalina Solutions knows my shopping preferences because retailers secretly sell it to them.
If you don’t want your information online and searchable, get data protection laws.
Laws help, but we also have to take responsibility for not sharing our data (or the data of our friends) online.
[Comment removed by author]
Please elaborate. I thought it was an interesting look into the experience of having vastly different cultures using the same messaging fabric, and the issues it gives rise to.
I don't think it's garbage. I think it could have been better written, but as you point out, the culture-clash thing is an interesting phenomenon.
I also don't think said history would have any bearing on which social media platform most people choose.
That article is absolute, complete garbage.
Do you see it as garbage because of an abundance of factual inaccuracies, or something else?
The reason I ask is that there's clearly an absolutist free-speech position being promoted, but certainly all the stuff about Japanese- and Spanish-speaking Mastodon activity correlates well with what I saw at the time. I don't know anything about people getting upset about Eugen being paid, though, or any of the behind-the-scenes stuff.
Replacing a corporate data aggregator with a distributed one doesn’t actually reduce the amount of data gathered.
It does if the data you share is subject to aggregator influence. And it is, since the aggregator controls the platform and its defaults.
Facebook went through a period where every time I checked my privacy settings I found something open that I didn't want to be open. The years of the Cambridge Analytica scrape line up pretty well with that phenomenon. Facebook used to be hugely incentivized to make as much of your data public to the world (to search engines and, it turns out, CA) as possible. Mastodon has no such incentives.
Yes, if I share something with someone I share it with them. But I’d like to not share it with everyone else.
It does if the data you share is subject to aggregator influence
I’m not quite sure what this means, do you mind elaborating?
I thought I did in the rest of my comment? Basically, I'd enter some data in my profile with some understanding of what was visible to whom. Then I'd come back a month or three later, and stuff I intended to be visible only to friends would somehow be visible to some new vector (apps) or API. Facebook's privacy settings sprawled out of control for a couple of years. Here are some links I was able to dig up in a quick search:
http://mattmckeon.com/facebook-privacy
https://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-changes-good-bad-and-ugly
I agree with this sentiment, but I think all the brouhaha is currently about something entirely different. When you use an account on Mastodon, your toots are federated across the global timeline. That, along with an email address that stays local to the server you signed up on, and maybe some HTTPS traffic logs on your server, is the sum total of the information you are exposing via Mastodon until you choose to add more.
This is, from where I stand at least, a vastly different kettle of fish than Facebook.
I agree. To some extent, the distributed nature even makes it harder to remove data you don't want online anymore.
On the other hand the data is also distributed across many instances as opposed to being owned by a single entity. There’s also the fact that Mastodon doesn’t try to track your personal identity, and the interactions can be completely anonymous. Meanwhile, the whole purpose of a site like Facebook is to build an intimate profile of you and your friends.
Depends: some instances have Elasticsearch enabled, ostensibly to enable full-text search, but ES can be used for more insidious "big data" purposes, to profile users with. Tools like Kibana from the ES people make such tasks trivial compared to writing tedious queries by hand. And due to the nature of federation, if someone from that instance follows you, that instance has your toots, which the admin can use for said purposes.
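To illustrate how low the bar is, here is a sketch in Go of a single request that pulls one account's posts and aggregates their most-used tags. The index name ("statuses") and field names ("account", "tags") are hypothetical; the _search query/aggregation DSL itself is stock Elasticsearch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Term query on a keyword field plus a terms aggregation:
	// one request yields a crude per-user interest profile.
	query := `{
	  "query": { "term": { "account": "alice@example.social" } },
	  "aggs": { "top_tags": { "terms": { "field": "tags", "size": 10 } } }
	}`
	resp, err := http.Post(
		"http://localhost:9200/statuses/_search", // ES on its default port
		"application/json",
		strings.NewReader(query),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // matching toots + tag histogram
}
```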
I’ll bite.
General industry trends~
I’ve got other fun ones, but that’s a good start I think.
(5 years) Security, cost, and regulatory concerns are going to move people back towards running their own hardware.
As of today, public cloud is actually solving several of these issues (and way more of them than people running their own hardware do).
(10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.
Containers are actually solving some real problems. Several of them had already been solved independently, but containers bring a more cohesive solution.
Containers are actually solving some real problems. Several of them had already been solved independently, but containers bring a more cohesive solution.
I am interested, could you elaborate?
The two main ones that I often mention in favor of containers (trying to stay concise):
Containers are a solution to some problems, but not the solution to everything. I just think that wishing they weren't there probably means the interlocutor doesn't understand their benefits.
I just think that wishing they weren't there probably means the interlocutor doesn't understand their benefits.
I’ve been using FreeBSD jails since 2000, and Solaris zones since Solaris 10, circa 2005. I’ve been writing alternative front-ends for containers in Linux. I think I understand containers and their benefits pretty well.
That doesn't mean I can't think Docker, Kubernetes, and all the "modern" stuff are a steaming pile, both the idea and especially the implementation.
There is nothing wrong with container technology, containers are great. But there is something fundamentally wrong with the way software is deployed today, using containers.
But there is something fundamentally wrong with the way software is deployed today, using containers.
Can you elaborate? Do you have resources to share on that? I feel a comment on Lobsters might be a bit light to explain such a statement.
You can actually set resource isolation at various levels: classic Unix quotas; priorities ("nice" in sh) and setrlimit() ("ulimit" in sh); Linux cgroups, etc. (which is what Docker uses, IIUC); and/or more specific solutions such as java -Xmx […].
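For instance, the per-process limits behind ulimit are one syscall away. A Linux-specific sketch in Go (the values are arbitrary examples):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Cap this process's open file descriptors, roughly what
	// `ulimit -n 256` does in a shell.
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE,
		&syscall.Rlimit{Cur: 256, Max: 256}); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	// Read the limit back to confirm it took effect.
	var rl syscall.Rlimit
	_ = syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl)
	fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)
}
```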
So you have to use X different tools and syntaxes to set the CPU/RAM/IO/… limits; and why use raw cgroups when you can have cgroups plus other features using containers? I mean, your answer is correct, but in reality it's deeply annoying to work with these at large scale.
Eh, I’m a pretty decent old-school sysadmin, and Docker isn’t what I’d consider stable. (Or supported on OpenBSD.) I think this is more of a choose-your-own-pain case.
I really feel this debate is exactly like debates about programming languages. It all depends on your use cases and experience with each technology!
I’ll second that. We use Docker for some internal stuff and it’s not very stable in my experience.
If you have <10 applications to run for decades, don't use Docker. If you have 100+ applications to launch and update regularly, or at scale, you often don't care if one or two containers die sometimes. You just restart them, and it's almost expected that you won't reach 100% stability.
I’m not sure I buy that.
Our testing infrastructure uses Docker containers. I don't think we're doing anything unusual, but we still run into problems once or twice a week that require somebody to "sudo killall docker" because it's completely hung and unresponsive.
At $job we run thousands of containers every day, and it's very uncommon to have containers crash because of Docker.
Easier local development is a big one - developers being able to quickly bring up a full stack of services on their machines. In a world of many services this can be really valuable - you don’t want to be mocking out interfaces if you can avoid it, and better still is calling out to the same code that’s going to be running in production. Another is the fact that the container that’s built by your build system after your tests pass is exactly what runs in production.
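In practice that workflow often boils down to one command against a compose file. A sketch of a helper a test suite might call (the compose file name and the services it declares are assumed):

```go
package main

import (
	"fmt"
	"os/exec"
)

// devStackUp starts every service (db, cache, queue, …) declared in a
// hypothetical docker-compose.dev.yml, so local code talks to the same
// images that production runs.
func devStackUp() error {
	cmd := exec.Command("docker", "compose", "-f", "docker-compose.dev.yml", "up", "-d")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("compose up: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := devStackUp(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("dev stack is up; point your service at localhost")
}
```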
(5 years) VR fails to revitalize the wounded video card market. Video card manufacturers are in permanent decline due to the pathology of selling to the cryptobutts folks at the expense of building a reliable customer base. Gamers have decided graphics are Good Enough and don't pay for new gear.
While I might accept that VR may fail, I don’t think video card companies are reliant on VR succeeding. They have autonomous cars and machine learning to look forward to.
(10 years) No significant changes in core count or clock speed will be practical, focus will be shifted instead to power consumption, heat dissipation, and DRM. Chipmakers slash R&D budgets in favor of legal team sizes, since that’s what actually ensures income.
This trend also supports a shift away from scripting languages towards Rust, Go, etc. A focus on hardware extensions (e.g. deep-learning hardware) goes with it.
(10 years) Containers will be stuck in Big Enterprise, and everybody else will realize they were a mistake made to compensate for unskilled developers.
One can dream!
Would you (or anyone) be able to help me understand this point please? My current job uses containers heavily, and previously I’ve used Solaris Zones and FreeBSD jails. What I see is that developers are able to very closely emulate the deployment environment in development, and don’t have to do “cross platform” tricks just to get a desktop that isn’t running their server OS. I see that particular “skill” as unnecessary unless the software being cross-platform is truly a business goal.
I think Jessie Frazelle answers this concern perfectly here: https://blog.jessfraz.com/post/containers-zones-jails-vms/
P.S.: I have the same question for people who are against containers…
(5 years) Mesh networks still don't matter. :(
(10 years) Mesh networks matter, but are a great way to get in trouble with the government.
Serious attempts at mesh networks have basically not existed since the 2000s, when everyone discovered it's way easier to deploy an overlay net on top of Comcast than to make mid-distance hops with RONJA etc.
It would be so cool to build a hybrid USPS/UPS/FedEx batch + local realtime link powered national-scale network capable of, say, 100 MB per user per day, with a ~3-day max latency. All attempts I've found are either very small scale, or just boil down to sending encrypted packets over Comcast.
Everyone's definition of mesh is different, but today there are many serious mesh networks, the main ones being Freifunk and Guifi.
(10 years) There will be at least two major unions for software engineers with proper collective bargaining.
What leads you to this conclusion? From what I hear, it’s rather the opposite trend, not only in the software industry…
(5 years) All schools will have some form of programming taught. Most will be garbage.
…especially if this is taken into account, I’d argue.
(10 years) Some schools will ban social media and communications devices to promote classroom focus.
Aren’t these already banned from schools? Or are you talking about general bans?
It's really easy to see what state a container is in, because you can read a 200-line text file and see that it's just Alpine Linux with X, Y, Z installed and this config changed. On a VM it's next to impossible to see what has been changed since it was installed.
It's really easy to see what state a container is in, because you can read a 200-line text file and see that it's just Alpine Linux with X, Y, Z installed and this config changed.
I just check the puppet manifest
It's still possible to change other things outside of that config. But since a container has almost no persistent state, anything you change outside of the Dockerfile will be blown away soon.
All schools will have some form of programming taught. Most will be garbage.
and will therefore be highly desirable hires to full stack shops.
I would add the bottom falling out of the PC market, making PCs more expensive, as gamers and enterprise (the entire reason it still maintains economies of scale) just don't buy new HW anymore.
(5 years) All schools will have some form of programming taught. Most will be garbage.
My prediction: whether the programming language is garbage or not, provided a reasonable amount of time is spent on these courses, we will see a general improvement in the logical thinking and deductive reasoning skills of those students.
(at least, I hope so)
CloudFormation, cloud-init, Puppet, Boto, and Fabric. Works like a charm, but none of these tools are perfect.
Working on it: https://getstream.io/blog/winds-2-0-its-time-to-revive-rss/ It's not so easy, though; it's a vicious cycle: fewer people use RSS, so fewer publishers support RSS, so RSS tools degrade in quality, and so on.
You wouldn't believe the number of if statements in the Winds codebase just to make RSS work(-ish). The standard isn't really much of a standard, with everyone having small variations. Here's an example: not all feeds implement the guid properly, so you end up with code like this: https://github.com/GetStream/Winds/blob/master/api/src/parsers/feed.js#L82
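The linked code is JavaScript, but the fallback idea translates to any language. A rough Go sketch of the same trick using the gofeed library (not the actual Winds logic, and the feed URL is hypothetical): prefer the feed's guid, and hash link+title when it's missing.

```go
package main

import (
	"crypto/sha1"
	"fmt"

	"github.com/mmcdole/gofeed"
)

// entryID returns a stable identifier for a feed item, falling back to
// a hash of link+title when the feed omits (or reuses) its guid.
func entryID(item *gofeed.Item) string {
	if item.GUID != "" {
		return item.GUID
	}
	sum := sha1.Sum([]byte(item.Link + item.Title))
	return fmt.Sprintf("%x", sum)
}

func main() {
	feed, err := gofeed.NewParser().ParseURL("https://example.com/feed.xml") // hypothetical feed
	if err != nil {
		panic(err)
	}
	for _, item := range feed.Items {
		fmt.Println(entryID(item), item.Title)
	}
}
```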
Now, that looks like an interesting project. I have updated my SaveRSS page to include a link to Winds in the RSS clients section. You might also consider linking to the SaveRSS page for arguments on why to use RSS/Atom as a publisher.
Personally, the project isn’t for me, though. I’m a happy user of elfeed, but I can absolutely see how your project can benefit the RSS/Atom community.
Dang, this bloatware is pushing 6k stars on GitHub already. Nothing like an RSS reader that combines Electron, Mongo, Algolia, Redis, machine learning (!), and SendGrid.
The goal is to build an RSS-powered experience that people will actually want to use. The tech stack is based around the idea of letting a large group of people contribute. (We use Go & RocksDB for most of our tech, so it was a very conscious move to use Node & Mongo for Winds, to foster wider community adoption.)
Makes sense. Thanks for the gracious reply, I feel bad about my grumpy comment.