1.  

    I’m severely disappointed that npm’s developers think that if I run something under sudo, I obviously want my file permissions set to the invoking user instead.

    It’s frustrating, really.

    When I invoke something with sudo, it should be assumed I have a damn good reason to do so. If your tool doesn’t like root, then abort when it runs as uid=0; otherwise, nobody should try being smart about sudo.

    Being smart about sudo is how you get stuff like this. Don’t try to outsmart sudo. Either it works with sudo as sudo is intended to work, or you print an error and abort. (Like my GUI text editor, which aborts with an error message saying I shouldn’t run it as root.)

    On top of this, the functions in question are barely tested (read: not at all), and the released version number did not properly indicate that this was a pre-release, leading to places installing it in production.

    Pip and NPM are the only package managers I’ve ever had problems with. Pip is bearable with virtualenv and various other hacks. I’ll go back to using apt and pacman and yum. At least they won’t chown me.
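
    For what it’s worth, the virtualenv dance that makes pip bearable is short; a sketch (paths are just examples, nothing here is prescriptive):

    ```shell
    # Keep pip away from system files entirely: no sudo, nothing chown'd.
    python3 -m venv ~/.venvs/demo      # create an isolated environment
    . ~/.venvs/demo/bin/activate       # use it for this shell session
    python -m pip --version            # pip now resolves inside the venv;
                                       # `pip install <pkg>` lands there, not in /usr
    deactivate                         # back to the system environment
    ```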

    1.  

      I agree that NPM is a joke, but this happens all throughout the stack: https://github.com/systemd/systemd/issues/2402#issuecomment-174565563

      1. 7

        It happens at every level of the stack, but it doesn’t happen to everything.

        1.  

          Wouldn’t even be my biggest complaint; it was more annoying that systemctl tries to check whether sudo is being invoked by a service (at least as far as I can tell) and prevents it from escalating to root.

          Understandable, but if I put a service’s user in /etc/sudoers to run some systemctl command as root, I think I have a reason for doing so.
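
          The sudoers entry for that is a one-liner; a sketch (user, unit, and path are made up, not taken from any real setup):

          ```
          # /etc/sudoers.d/myservice -- hypothetical user and unit names
          appuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service
          ```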

          The entire stack is fucked tbh, though the higher-level it is, the more likely it is that some normal office user runs into these problems. NPM sees far more usage than rm -rf /.

        2.  

          Pip and NPM are the only package managers I ever had problems with

          I take it you never used easy_install or zc.buildout then?

        1. 10

          Yeah… No. I don’t want AMP in my emails.

          Email already has a reduced HTML subset, with most email clients blocking a large set of HTML features by default. Most emails are also not big unless you spam pictures in there (which you can strip out and simply not download), so I don’t really see any tangible advantage of AMP over regular HTML email (or plain-text mail).

          1.  

            Dear god, give me plain text emails, please!

            1.  

              I block all pure-HTML e-mails. Occasionally I check out what was blocked. It’s all spam. I suspect AMP e-mail will be the same. Regular people will send three copies of their e-mails (in text, HTML, and AMP), which I will read in plain text, and spammers (also known as marketers) will send only HTML and/or AMP.

              That being said, this attack by Google on the open Internet is the last straw that made me ditch all Google programs and services.

          1. 4

            Back to work this week, so my plans are limited. However, I have some solid plans for implementing macros and maybe even variables for my configuration parser. I’ll also clean up a lot of code and remove dead features like trimming strings (might move that into a function). I feel like macros are something most configuration languages are missing; with them you could make deploying a PHP app with nginx or Apache much, much easier. I might also look into its viability as a template language for HTML pages; it should be possible, and maybe it’s fun (Go lacks some alternatives to html/template).

            1. 1

              I’m working on a design approach for macros in SECL; atm I’m favoring a template-based approach: a macro definition would contain specially formatted elements which would be replaced with the proper AST elements when the macro expands.

              The macro-defining function will likely break behaviour a bit, as I don’t want to run functions in the context of a macro definition yet, so I’ll have to figure that one out.

              I’m also thinking about including a special internal function to merge ASTs with a parent node; this would make macros more useful, as they would expand into the containing context instead of creating a distinct context below it.
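
              The template idea can be sketched generically: placeholder nodes in a macro body get swapped for the caller’s AST elements when the macro expands. A toy illustration with plain dicts (not SECL’s actual AST types, just an assumption for the sketch):

              ```python
              # Toy template-based macro expansion -- illustrative only,
              # not SECL's actual AST representation.
              def expand(node, bindings):
                  """Recursively replace placeholder nodes with bound AST fragments."""
                  if isinstance(node, dict):
                      if node.get("type") == "placeholder":
                          return bindings[node["name"]]  # splice in the caller's element
                      return {key: expand(val, bindings) for key, val in node.items()}
                  if isinstance(node, list):
                      return [expand(item, bindings) for item in node]
                  return node

              macro_body = {"type": "map", "items": [{"type": "placeholder", "name": "port"}]}
              print(expand(macro_body, {"port": {"type": "int", "value": 8080}}))
              # → {'type': 'map', 'items': [{'type': 'int', 'value': 8080}]}
              ```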

              1. 2

                I dropped a bit on it, good luck everyone <3

                1. 2

                  Yesterday my order for various bits and pieces for my new RGBW lighting system arrived. I soldered some basic stuff onto the PCB, and I might get around to finishing the Arduino code for it today. Also load testing: I have no idea how much power the LEDs pull in reality, only some quick napkin math. Might burn a wire or three.
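
                  That napkin math is quick to write down; a sketch with assumed figures (the strip length and per-LED draw are guesses for illustration, not measurements):

                  ```python
                  # Rough power estimate for an RGBW strip -- assumed numbers, not measured.
                  leds = 150           # hypothetical strip length
                  ma_per_led = 80      # rough full-on draw for one RGBW pixel, in mA
                  volts = 5.0          # typical strip supply voltage

                  amps = leds * ma_per_led / 1000
                  watts = amps * volts
                  print(f"{amps:.1f} A, {watts:.0f} W")  # → 12.0 A, 60 W
                  ```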

                  Otherwise, not much planned, I was tinkering with a small data protocol on a notepad, might be fun to reinvent most of TCP and write a basic userspace lib for it.

                  1. 2

                    bad performance on low-end devices (and I suspect higher battery consumption, but can’t really prove this one)

                    I’d actually argue the opposite here. With a traditional web app you’re sending HTML across, and you’re doing a lot of parsing each time a page loads. Parsing HTML is an expensive operation. With SPA style apps, you load the page once and pass JSON around containing just the data that needs to be loaded. So, after initial load, you should expect to get better resource utilization.

                    1. 6

                      I’m not sure that parsing HTML is as expensive as parsing (and compiling) Javascript though. Of course you’d pay a high price at each request of an e-commerce web app, but if you want to read an article on some blog, it is faster when you don’t have to load all of Medium’s JS app.

                      Browser vendors are trying really hard to speed up the startup time of their VMs, but the consensus is that to get to interactive fast, you should ship less JS, or at least less JS upfront.

                      Obligatory pointer to Addy Osmani’s research on the topic https://medium.com/@addyosmani

                      1. 1

                        Parsing XML is notoriously expensive. In fact, it’s one of the rationales behind Google’s protocol buffers. Furthermore, even if the cost of parsing XML and JSON was comparable, you’d still be sending a lot more XML if you’re sending a whole page. Then that XML has to be rendered in the DOM, which is another extremely expensive operation.

                        To sum up, only pulling the data you actually need, and being able to repaint just the elements that need repainting is much faster than sending the whole page over and repainting it on each update.

                        1. 3

                          The problem is that incremental rendering is often paired with CPU-intensive event listeners, digest loops, and other crud, causing massive amounts of JavaScript to run on every click and scroll.

                          1. 1

                            That’s not an inherent problem with SPAs though, that’s just a matter of having a good architecture. My team has been building complex apps using this approach for a few years now, and it results in a much smoother user experience than anything we’ve done with traditional server-side rendering.

                      2. 4

                        This seems like the exact kind of thing we can empirically verify. Do you know of any good comparisons?

                        1. 1

                          I haven’t seen any serious comparisons of the approaches. It does seem like you could come up with some tests to compare different operations like rendering large lists, etc.

                        2. 2

                          I’m not so sure; a modern HTML parser is fairly efficient. On top of that, a lot of stuff is cached in a modern browser.

                          My blog usually transfers in under 3 KB if you haven’t cached the page, and around 800 B otherwise (which includes 800 bytes from Isso). My website uses less than 100 KB, most of which is highly compressed pictures.

                          Most visitors only view one page and leave, so any SPA would have to match load performance with the 3 KB of HTML + CSS, or the 4 KB of HTML + CSS plus 100 KB of images…

                          A similar comparison would be required for any traditional server-side rendering application; if you want to do it as an SPA, it should first match (or at least come close to) the performance of the current server for the typical end user.

                          SPAs are probably worth thinking about if the user views more than a dozen pages on your website during a single visit, and even then it could be argued that proper caching and not bloating the pages would make up a lot of the performance gains.

                          Lastly, non-SPA websites have working hyperlink behaviour.

                          1. 1

                            I think that if your site primarily has static content, then a server-side approach makes the most sense. Serving documents is what it was designed for, after all. However, if you’re making an app, something like Slack or Gmail, then you have a lot of content that will be loaded dynamically in response to user actions. Reloading the whole page to accommodate that isn’t a practical approach in my opinion.

                            Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.

                            1. 1

                              Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.

                              That’s how it would work in theory; however, 9/10 SPAs I meet don’t do this. The URL of the page is always the same, reloading loses any progress, and I can’t open links in new tabs at all, or if I can, it just opens the app on whatever default page it has.

                              Even with user content being loaded dynamically, I would consider writing a server app unless there will be, as mentioned, a performance impact for the typical user.

                              1. 1

                                That’s a problem with the specific apps, and not with the SPA approach in general though. Moving this logic to the server doesn’t obviate the need for setting up sane routing.

                                1. 1

                                  Sadly, I’ve only rarely seen SPAs done correctly; it’s the exception rather than the rule in my experience.

                                  So I’m not convinced it would be worth it. Also, again, I’m merely suggesting that if you write an SPA, it should match a server-side app’s performance for typical use cases.

                                  1. 1

                                    I agree that SPAs need to be written properly, but that’s just as true for traditional apps. Perhaps what you’re seeing is that people have a lot more experience writing traditional apps, and thus better results are more common. However, there’s absolutely nothing inherent about SPAs that prevents them from being performant.

                                    I’ve certainly found that from development perspective it’s much easier to write and maintain complex UIs using the SPA style as opposed to server-side rendering. So, I definitely think it’s worth it in the long run.

                                    1. 1

                                      I’ve built enough apps both ways now to feel confident weighing in.

                                      If you build a SPA, your best case first impression suffers (parsing stutters etc), but complex client side interaction becomes easy (and you can make it look fast because you know which parts of the page might change).

                                      I no longer like that tradeoff much; I find too few sites really need the rich interactivity (simple interaction is better handled with jQuery snippets), and it’s easier to make your site fast when there are fewer moving parts.

                                      This might change as the tooling settles down; e.g. webpack is getting easier to configure right.

                                      1. 2

                                        The tooling for JS is absolutely crazy in my opinion. There are many different tools you need to juggle, and they’re continuously changing from under you. I work with ClojureScript, and it’s a breath of fresh air in that regard. You have a single tool for managing dependencies, building, testing, minifying, and packaging the app. You also get hot code loading out of the box, so any changes you make in code are reflected on the page without having to reload it. I ran a workshop on building a simple SPA-style app with ClojureScript; it illustrates the development process and the tooling.

                        1. 15

                          It’s only dead if you follow Apple blindly into the abyss. On other phones it’s not dead yet.

                          1. 13

                            Not yet… Remember when you could get a smartphone with a keyboard?

                            1. 10

                              Those are only dead if you’re not following Blackberry blindly into the abyss.

                            2. 11

                              I’ll agree there, I want my phone to have a 3.5mm jack. I can’t imagine how putting the DAC on the cheap end of the equation (the earbuds) can improve quality over a simple and sturdy analog cable with a magnet on one end.

                              1. 7

                                Or Google… I imagine it must be hard at a third party Android device manufacturer to avoid the temptation of following the lead of the two big players.

                                1. 9

                                  Google’s move with the Pixel was particularly shit because they made fun of Apple for getting rid of the jack, then got rid of it themselves.

                                  1. 2

                                    I thought you were going to say something about search … I miss Yahoo/Lycos/Hotbot/Dogpile and getting different results that lead to different places. Fuck the search monoculture.

                                1. 4

                                  I was always convinced that the headphone jack would be a great port for connecting IoT devices around the house: things like motors to open window blinds, or IR sensors to control the air conditioning. And the cables are readily available.

                                  But are the common cables isolated enough for long distance use in a reasonable manner? Or for sending over a real amount of power?

                                  1. 2

                                    What do you mean by long distance? I’ve seen silly long 3.5mm cables that are meant for audio. I’m not a PCM expert or anything, but wouldn’t you have a codec here that’s resilient to glitches? Should also be possible to amp the signal on your devices.

                                    1. 5

                                      You can wire I²C over a 3.5mm jack and cable; it wouldn’t be the first time this connector has been used for that…

                                      It’s actually pretty clever, considering a 3.5mm cable is available at essentially any store that stocks even a minimal selection of electronics.

                                  1. 25

                                      I think ads are the worst way to support any organization, even one I would rate as highly as Mozilla. People, however, are reluctant to contribute otherwise, so we get to suffer all the negative sides of ads.

                                    I just donated to Mozilla with https://donate.mozilla.org, please consider doing the same if you think ads/sponsored stories are the wrong path for Firefox.

                                    1. 14

                                      Mozilla has more than enough money to accomplish their core task. I think it’s the same problem as with Wikimedia; if you give them more money, they’re just going to find increasingly irrelevant things to spend it on. Both organizations could benefit tremendously from a huge reduction in bureaucracy, not just more money.

                                      1. 9

                                        I’ve definitely seen this with Wikimedia, as someone who was heavily involved with it in the early years (now I still edit, but have pulled back from meta/organizational involvement). The people running it are reasonably good and I can certainly imagine it having had worse stewardship. They have been careful not to break any of the core things that make it work. But they do, yeah, basically have more money than they know what to do with. Yet there is an organizational impulse to always get more money and launch more initiatives, just because they can (it’s a high-traffic “valuable” internet property).

                                        The annual fundraising campaign is even a bit dishonest, strongly implying that they’re raising this money to keep the lights on, when doing that is a small part of the total budget. I think the overall issue is that all these organizations are now run by the same NGO/nonprofit management types who are not that different from the people who work in the C-suites at corporations. Universities are going in this direction too, as faculty senates have been weakened in favor of the same kinds of professional administrators. You can get a better administration or a worse one, but barring some real outliers, like organizations still run by their idiosyncratic founders, you’re getting basically the same class of people in most cases.

                                      2. 21

                                          So Mozilla does something bad, and as a result I am supposed to give it money? Sorry, that doesn’t make any sense to me. If they need my money, they should convince me to donate willingly. What you are describing is a form of extortion.

                                        I donate every month to various organizations; EFF, ACLU, Wikipedia, OpenBSD, etc. So far Mozilla has never managed to convince me to give them my money. On the contrary, why would I give money to a dysfunctional, bureaucratic organization that doesn’t seem to have a clear and focused agenda?

                                        1. 9

                                          They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?

                                          If I really want to get to a destination, I will take a run-down bus if that is the only transport going there. And if you don’t care about the destination, then transport options don’t matter.

                                          1. 17

                                            They may be a dysfunctional bureaucratic organisation without a focused agenda (wouldn’t know as I don’t work for it) which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change and can you get that same thing elsewhere more cost-effectively?

                                              I am frequently in touch with Mozilla, and while I sometimes feel like I’m tilting at windmills, other parts of the org are very quick-moving and highly cost-effective. For example, they do a lot of very efficient training for community members, like the open leadership training and the Mozilla Tech Speakers. They run MDN, a prime resource for web development and documentation. Mozilla Research has a high reputation.

                                              Firefox itself is under constant rebuilding and active development. MozFest is one of the best conferences you can go to in this world if you want to talk tech and social subjects.

                                              I still find their developer relations very lacking, which is probably the most visible part to us, but hey, it’s only one aspect.

                                            1. 9

                                              The fact that Mozilla is going to spend money on community activities and conferences is why I don’t donate to them. The only activity I and 99% of people care about is Firefox. All I want is a good web browser. I don’t really care about the other stuff.

                                              Maybe if they focused on what they’re good at, their hundreds of millions of dollars of revenue would be sufficient and they wouldn’t have to start selling “sponsored stories”.

                                              1. 18

                                                The only activity I and 99% of people care about is Firefox.

                                                This is a very easy statement to throw around. It’s very hard to back up.

                                                  Also, what’s the point of having a FOSS organisation if they don’t share their learnings? This whole field is fresh and we have maintainers hurting left and right, but people complain when organisations do more than just code.

                                                1. 6

                                                    To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could get them off ad revenue more over time.

                                                    Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.

                                                  1. 8

                                                      To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could get them off ad revenue more over time.

                                                      In my opinion, the point of FOSS is sharing, and I’m pretty adamant that this includes approaches and practices. I agree that all you write is important; I don’t agree that it should be the sole focus. Also, Mozilla trainings are incredibly good; I have actually at some point suggested that they sell them :D.

                                                      Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.

                                                    BS is very much in the eye of the beholder. I also haven’t said that they couldn’t do what you describe.

                                                    Also, be aware that they often collaborate with other foundations and bring knowledge and connections into the deal, not everything is funded from the money MozCorp has or from donations.

                                                    1. 1

                                                        “Also, Mozilla trainings are incredibly good; I have actually at some point suggested that they sell them :D.”

                                                      Well, there’s a good idea! :)

                                                  2. 3

                                                    That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                                    It’s unfortunate, but advertisers have so thoroughly ruined their reputation that I simply will not use ad supported services any more.

                                                    I feel like Mozilla is so focused on making money for itself that it’s lost sight of what’s best for their users.

                                                    1. 2

                                                      That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                                        Ummm… sorry? The post you are replying to doesn’t speak about money at all, but about what people care about.

                                                        Yes, advertising and Mozilla is an interesting debate, and it’s also not like Mozilla is only doing advertising. But flat-out criticism of the kind “Mozilla is making X amount of money” or “Mozilla supports things I don’t like” is not it.

                                                    2. 3

                                                      This is a very easy statement to throw around. It’s very hard to back up.

                                                      Would you care to back up the opposite, that over 1% of mozilla’s userbase supports the random crap Mozilla does? That’s over a million people.

                                                      I think my statement is extremely likely a priori.

                                                      1. 1

                                                          I’d venture to guess most of them barely know what Firefox is beyond how they do stuff on the Internet. They want it to load up quickly, let them use their favorite sites, do that quickly, and not toast their computer with malware. On mobile or a tablet, maybe add not using too much battery. Those probably represent most people on Firefox, along with most of its revenue. Some chunk of them will also want specific plugins to stay on Firefox, but I don’t have data on their ratio.

                                                        If my “probably” is correct, then what you say is probably true too.

                                                    3. 5

                                                      This is a valid point of view, just shedding a bit of light on why Mozilla does all this “other stuff”.

                                                      Mozilla’s mission statement is to “fight for the health of the internet”, notably this is not quite the same mission statement as “make Firefox a kickass browser”. Happily, these two missions are extremely closely aligned (thus the substantial investment that went into making Quantum). Firefox provides revenue, buys Mozilla a seat at the standards table, allows Mozilla to weigh in on policy and legislation and has great brand recognition.

                                                      But while developing Firefox is hugely beneficial to the health of the web, it isn’t enough. Legislation, proprietary technologies, corporations and entities of all shapes and sizes are fighting to push the web in different directions, some more beneficial to users than others. So Mozilla needs to wield the influence granted to it by Firefox to try and steer the direction of the web to a better place for all of us. That means weighing in on policy, outreach, education, experimentation, and yes, developing technology.

                                                      So I get that a lot of people don’t care about Mozilla’s mission statement, and just want a kickass browser. There’s nothing wrong with that. But keep in mind that from Mozilla’s point of view, Firefox is a means to an end, not the end itself.

                                                      1. 1

                                                        I don’t think Mozilla does a good job at any of that other stuff. The only thing they really seem able to do well (until some clueless PR or marketing exec fucks it up) is browser tech. I donate to the EFF because they actually seem able to effect the goals you stated and don’t get distracted with random things they don’t know how to do.

                                                2. 3

                                                  What if, and bear with me here, what they did ISN’T bad? What if instead they are actually making a choice that will make Firefox more attractive to new users?

                                                3. 9

                                                  The upside is that at least Mozilla is trying to make privacy-respecting ads instead of simply opening up the flood gates.

                                                  1. 2

                                                    For now…

                                                1. 6

                                                  Today I finished the v0.9.5 milestone for SECL, which fixes some of the bugs someone reported to me via chat. The query language can now also access list indices, or filter out either the map or the list part of a value. Sadly, I did break compatibility in two cases, though I don’t have enough adoption for many people to care. I might work a bit on how to get macros working, and finalize v1.0 after some cleanups.

                                                  I also moved from Ansible to Fabric for my server configuration. It’s much nicer, since I can script out the various steps of the configuration instead of having to fit everything into Ansible templates.

                                                  Though some Ansible .yml files will hang around until I can translate them into Python.

                                                  For the next few days I’ll have to monitor the space usage of my backup scripts on the server to see how it develops.

                                                  Otherwise I might work on some more relaxed projects, i.e. that archival/curator thing I wanted to work on, which I’ve mostly put off so far. I’ve started some doodles on leftover printer paper for the architecture and design stuff.

                                                  1. 4

                                                    I keep my blog and website light.

                                                    The blog frontpage uses about 64 kilobytes of data, and most other pages are under 100 kilobytes too. My homepage takes about 90 kilobytes (170 without gzip), including my analytics script (which uses just under 1 kilobyte in total). Both sites are responsive and readable on both desktop and mobile (last I checked).

                                                    I prefer making and consuming such sites. For one, they are easier to preserve (a simple wget can preserve and archive the content effectively), and they’re also easier on my phone battery and on my local computer resources (Medium is actually noticeable).
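
                                                    The wget invocation for that kind of archival is short; a sketch of the flags I’d reach for (the URL is a placeholder):

                                                    ```shell
                                                    # Mirror a small static site for offline archival -- URL is a placeholder.
                                                    # --mirror: recurse with timestamping; --convert-links: rewrite links for
                                                    # local browsing; --page-requisites: also fetch CSS/images the pages need.
                                                    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/
                                                    ```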

                                                    Simple text websites with CSS should be enough for every blog out there.

                                                    1. 3

                                                      I’m currently preparing for another exam, but since it’s less stressful than the first, I have a bit more breathing room.

                                                      Since my last comment, I’ve finished data recovery; I lost a few days of work in total, but nothing too serious. Thankfully, I could recover some of the more sentimental data from an old hard drive.

                                                      With my data back, I’m working a bit on my toy kernel, global state management could be improved so maybe I’ll finish that. Since my task manager was partially nuked during recovery (I should have committed it), I’ll have to rewrite it.

                                                      I also started investigating web development with Rust; the best options currently seem to be a combination of Rocket and Diesel as frameworks. I’ll see how that works out, though I’m optimistic.

                                                      I’m also fixing some bugs for SECL; the next milestone is closing in. I managed to work in some much-needed improvements and fixes in just a few commits: there is now a proper way to do type conversion, and a few methods expose this functionality in the config file (so you can enforce that variables are a certain type).

                                                      1. 1

                                                        I’ve set up Postal on a server and it has been working well (enough) for side projects (my IP reputation is still not the best, but it’s getting better).

                                                        1. 4

                                                          Most importantly, data recovery. Windows Startup Repair was of the opinion that my LUKS drives needed to be formatted as GPT and nuked all backups of the LUKS headers along with that. So I’m recovering the most important stuff from backup at the moment (I did lose an ancient backup of mostly minor sentimental value and a small data collection, which is a bit annoying; I only really backed up /home…). I’m still torn on whether to nuke the disks completely and set up LUKS+RAID1 or try to recover anything.

                                                          Once I’ve got everything recovered, I’ll have to deal with a few bugs in the configuration language I’m writing; a few new features have been suggested to me, and I’ll implement them this month for the v0.9.5 milestone.

                                                          I’m also looking at rewriting some parts of my toy kernel to be more robust, especially the global kernel state (without the stdlib, having a globally mutable static variable is slightly annoying to do in Rust, even more so if you don’t have a heap when it needs to be set up).
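                                                          The heap-free global state mentioned above can be sketched roughly like this: a spinlock built from `core` atomics guarding a static, so no heap and no lazy initialization are needed. This is only an illustrative sketch, not code from the actual kernel; the `KernelState`/`ticks` names are made up, and it is shown with a std `main` purely for demonstration.

```rust
// Illustrative sketch: globally mutable kernel state without std.
// Only `core` APIs are used, so the same pattern works in a
// #![no_std] kernel; the std `main` below is just for demonstration.
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicBool, Ordering};

struct SpinLock<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

// Safety: all access to `value` is serialized through the `locked` flag.
unsafe impl<T: Send> Sync for SpinLock<T> {}

impl<T> SpinLock<T> {
    const fn new(value: T) -> Self {
        SpinLock {
            locked: AtomicBool::new(false),
            value: UnsafeCell::new(value),
        }
    }

    // Run `f` with exclusive access to the protected value.
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Busy-wait until we win the compare-exchange on the lock flag.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {}
        let result = f(unsafe { &mut *self.value.get() });
        self.locked.store(false, Ordering::Release);
        result
    }
}

// Hypothetical kernel state; the real kernel's fields will differ.
struct KernelState {
    ticks: u64,
}

// `const fn new` lets the static be initialized at compile time,
// so no heap and no runtime setup are required.
static STATE: SpinLock<KernelState> = SpinLock::new(KernelState { ticks: 0 });

fn main() {
    STATE.with(|s| s.ticks += 1);
    println!("ticks = {}", STATE.with(|s| s.ticks)); // prints "ticks = 1"
}
```

                                                          In a real kernel one would probably also disable interrupts while the lock is held to avoid deadlocking against an interrupt handler; this sketch skips that.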

                                                          I’m also thinking about starting a curator/archive project; I’ve got some design/architecture ideas for it, but nothing concrete yet. I might experiment a bit with what to write it in too, so I’m flipping through various web frameworks and languages.

                                                          1. 1

                                                            I also think about starting a curator/archive project

                                                            I’m interested in retro computing and have volunteered with / hope to volunteer again with some computing museums in the UK. Anything I can help with regarding requirements/UX?

                                                            1. 1

                                                              Probably not yet, it’s still in the early stages; as mentioned, I haven’t even decided on a language or framework yet.

                                                              It’s not really targeted at retro computing; it’s more in the direction of archiving and curating user content from websites. It could help me compensate for the data loss mentioned above (it’s all online, but I curated what I archived).

                                                          1. 4

                                                            I assume this is just a set of someone’s bookmarks. Why fork? Because upstream can force-push, and if upstream doesn’t do that, the fork is cheap. I think forks could even survive upstream repository deletion, but I am not sure.

                                                            1. 1

                                                              It depends on how it was deleted. I believe when the author deletes the repository, forks survive; if GitHub nukes the repo, all forks get nuked too.

                                                            1. 9

                                                              There’s an incredibly lengthy reply in this thread which is completely made up.

                                                              The thing is that to run those 1 & 0, it has to, technically, store them in a physical way so that it can be passed through to what’s next. As it’s 0 & 1, it’s not encrypted or protected. It’s pure raw data. The encryption and protection are usually done after the data has passed through the processor… by a task handled by the processor (ironically). Now, what they have “found” (which is false. it’s has been known since the 80’s) is that it’s possible to access this raw data by force feeding some 0 & 1 to the processor which can be hidden in anything and makes it start an hidden small software which, for example, could send a copy of the raw data through the web.

                                                              Fascinating.

                                                              1. 8

                                                                It’s not just completely made up, it’s gibberish.

                                                                1. 3

                                                                  This almost sounds like it was written by some AI…

                                                                  1. 3

                                                                    Looks more like a Markov chain to me.

                                                                2. 1

                                                                  I saw hints of the truth in there, which I thought was pretty funny. Like the bit about force-feeding 1s and 0s, which I assumed referred to specially crafted instructions to starve the CPU cache or trick the branch predictor or something. Hilarious.

                                                                  Permalink for those who want it: https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update?p=132713#post132713

                                                                1. 8

                                                                  Mastodon is indeed great; I set up my own private instance for me and a few friends a few days ago. (I still use my main Mastodon account on mastodon.social, though.)

                                                                  I hope Mastodon catches on, to prove that federation can beat centralization.

                                                                  1. 4

                                                                    Happy 2018, Lobsters!

                                                                    Heartbleed might have been a small hole in the ship but these two are a full broadside.

                                                                    1. 3

                                                                      My new year’s resolution is to have a better resolution ready for 2019.

                                                                      Also maybe losing some weight.