1. 8

    Like the list. On this one:

    “Vultr is my go-to solution for cheap hosting. As of this writing, the cheapest plan they offer is $2.50 a month. Although their cheapest plan definitely won’t be able to fuel your next high-powered computing project, it is a great solution when you need to host small scale tasks such as cron jobs.”

    I’ll throw in Prgmr. Their plans start at $5 a month, they’re no BS, good service, transparent about problems, and (extra motivation) host this site for free.

    1. 3

      I use both, and have good things to say about both of them:

      prgmr is great when you need something different from your standard VPS, as they will work with you to get whatever it is you want. For example, I wanted a bunch of extra storage space (ZFS); doing that on Vultr gets cost-prohibitive quickly, but on prgmr it’s not bad at all.

      For the pre-packaged things that vultr offers, vultr is pretty great, at a great price.

      1. 1

        pre-packaged things

        Could you give me some of your favorite examples?

        1. 2

          vultr provides a DNS service (with an API), a firewall service, private networks (with multiple interfaces per VM OK), etc.

          These are all things that come for free with a single VPS, no extra magic or asking needed. They also offer snapshots and backups (for a fee).

          Their DNS API is great for doing Let’s Encrypt wildcard certs, and the other things are good to use as well.

          To be clear, I’m not suggesting you rely on their FW or backup services as your only firewall and backup(s), as defence in depth is a thing. But they are nice to add/have.

          Also, @avitex has a good point about vultr offering multiple locations.

          Plus, for things you really care about, it’s not a bad plan to use 2 different providers, and host across both of them, so if one goes down for some reason, your stuff doesn’t also go down(which is what I do).

          1. 2

            Sounds like some nice value-adds. I’m with you on using multiple providers for redundancy. The neat thing is that all these cheap providers make high[er] availability cheaper than ever.

      2. 2

        The major difference between the two for me is that Prgmr’s only hosting location is in CA, US, from what I can find, while Vultr provides hosting globally.

        1. 2

          I’ll also add, for anyone trying to decide what to use in the endless sea of VPS providers, I recommend doing research on LowEndTalk.

          1. 2

            @zie and @avitex (esp. avitex… location) have good points about where they might not be a fit for potential customers. I like sites like the one you mention when looking for deals. What’s missing is something key to my recommendations: can you trust and rely on the people involved in the company?

            There are only a handful of companies where I can even attempt to establish that. Plenty of people here know one founder. We’ve read their articles and seen their transparency. These things help establish good character. That’s increasingly uncommon in tech companies. So I just give our host a mention in these conversations so long as they keep earning it. It increases the odds that readers will vote with their wallets for more ethical suppliers.

            1. 1

              Yeah, good point, trust is probably the biggest factor. The best I could find is to Google something like “buyvm site:lowendtalk.com”, but I won’t pretend it’s foolproof, and it doesn’t really compare to a recommendation like the one you made.

        1. 6

          I understand the points being made here, but I disagree completely.

          I’d rather have an open specification defined for applications than return to the days of Flash and Java applets because customers want x to work easily, and we all know the reputation Java and Flash applications have for how amazing and secure they are…

          A point that seems to keep coming up with webasm is obfuscation. I understand where the sentiment comes from, and yeah, it may not be as easy as just clicking view-source and scanning through some uglified JS, but vendors can and do obfuscate things with just JavaScript alone; this is not webasm specific. Actually, there are a few points I have when this argument comes up.

          • Who is looking at the source for each website they visit? General public? Hell no! I’m not going to vet every website’s page to check its source either.
          • Users want applications; hell, there is a reason the internet is so much more than just Wikipedia. Applications beyond basic CRUD need interactivity, something you’re not getting from basic forms and hyperlinks. And the alternative? Terrible browser extensions that are harder to examine and have holes galore? Or how about downloading my application packaged with adware!
          • Better than examining the source, examine the darn output. Network tab? Excellent! I can see where, and what, is being sent out. That’s much better than digging through the source. If you want to provide users with more control, provide more functionality around the output/network: WebRTC and WebSockets, for example. We have a great network tab for HTTP requests, but examining these other two is not trivial.
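
          One DIY way to get that visibility today is to wrap the page’s WebSocket constructor so every outgoing frame is logged before it leaves. This is a sketch of my own (the `withSendLogging` name is made up), not a built-in browser feature:

          ```javascript
          // Wrap any WebSocket-like class so every outgoing frame is handed
          // to a logger before being forwarded to the real send().
          function withSendLogging(SocketClass, log = console.log) {
            return class extends SocketClass {
              send(data) {
                log("WS out:", data);    // inspect the frame before it is sent
                return super.send(data); // then forward to the real socket
              }
            };
          }

          // In a browser, run this before the app's own scripts load:
          //   window.WebSocket = withSendLogging(window.WebSocket);
          ```

          Taking the class and logger as parameters keeps it testable against a stub socket, too.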

          I’d much rather run applications in a sandbox that I can control than let applications loose on my filesystem.

          Regarding scripts in the browser, if you don’t like ‘em, disable them. Browsers provide these tools, and you can enable them per-site. Sure, many sites are going to lose partial or even total functionality, but that’s the trade-off you make as a user AND a customer.

          There is too much inertia behind the current web, and it’s not going to change radically anytime soon. Sure, it’s easy to outline problems with it; nearly anyone can do this. The harder task is finding solutions that keep everyone happy. Articles like this personally tick me off, as they discount the many problems we HAVE solved. Are there problems? Yup! No doubt. But addressing them like this doesn’t really help at all.

          1. 3

            I’d rather have an open specification defined for applications than return to the days of Flash and Java applets because customers want x to work easily, and we all know the reputation Java and Flash applications have for how amazing and secure they are…

            I don’t think what Shamar is recommending is a return to flash & java applets. (In fact, one of the big complaints about web assembly is that it brings javascript closer to the status of flash and java applets.) Instead, he seems to be recommending that we return the web to the pre-javascript era & move applications entirely off the platform.

            You can have an open specification for a scripting language without embedding that scripting language into a browser & making it emit markup – and, in fact, if you move it out of the browser, then you free it from the never-stable ‘living standards’ that produce browser monopolies, since you end up with fully-versioned specifications. (Think of python, with its several implementations and specs.)

            Who is looking at the source for each website they visit? General public? Hell no! I’m not going to vet every website’s page to check its source either.

            As mentioned in the essay, while most people never look at the source, hackers sometimes look at the source, which is a barrier to malicious code that’s consistently served up. If malicious code is only served in response to some server-side process identifying a particular user, then the likelihood that anybody will catch it is much lower. If that code is also minified, then there’s greater friction, so even if it gets served up to a hacker, they are less likely to examine it closely enough to identify malicious code; this applies even more so to binaries.

            In other words, we have some base probability that malicious code will be eventually identified (say, 1%), and with each of these layers of obfuscation, the probability that it’ll be identified & somebody will make a stink about it sinks lower.
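
            To put rough numbers on that compounding effect, here is a toy model (the 1% base chance and the halving factor are assumptions for illustration, as is the `catchProbability` name): each independent reader has some small chance of spotting the malice, and every layer of obfuscation halves it:

            ```javascript
            // Each reader independently spots the malicious code with probability
            // baseChance * penalty^layers; the chance that at least one of
            // `readers` people catches it is 1 minus the chance that all miss.
            function catchProbability(baseChance, readers, layers, penalty = 0.5) {
              const perReader = baseChance * Math.pow(penalty, layers);
              return 1 - Math.pow(1 - perReader, readers);
            }

            console.log(catchProbability(0.01, 100, 0).toFixed(2)); // 0.63: plain source
            console.log(catchProbability(0.01, 100, 2).toFixed(2)); // 0.22: minified, then binary
            ```

            Even this crude model shows the catch rate dropping by roughly two-thirds after two layers of obfuscation.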

            Users want applications

            Users want applications, but that doesn’t mean that the web is the appropriate way to deliver them. Users want pizza too, but if you serve it to them in a toilet bowl, they need to be pretty hungry to accept it.

            The success of applications managed by real package managers on mobile is an indication that users don’t particularly care whether or not the applications they run are delivered inside a browser. The lack of success of real package managers on consumer desktop OSes is largely a function of the lack of maturity of these package managers. (Nobody goes out of their way to download untrusted binary installers on debian, the way they do on windows, because even though apt-get often produces crusty old versions, it’s more reliable than googling.)

            I’d much rather run applications in a sandbox that I can control than let applications loose on my filesystem.

            Sandboxes are a feature of almost all scripting languages. The browser is a poor sandbox with a poor API, and survives as an application platform purely through inertia left over from the mid-90s, when newbie developers falsely believed that javascript in a browser was the only cross-platform sandboxed scripting environment.

            There is too much inertia behind the current web, and it’s not going to change radically anytime soon.

            This is unfortunately very true. Luckily, users don’t have any loyalty to web standards – only web devs do, and many web devs have been around the block enough times to know that it’s worth jumping ship.

            The harder task is finding solutions that keep everyone happy.

            Keeping everyone happy is straightforward. It’s trivial to do all the tasks the browser does better than the browser does them, so long as you throw out all the web standards.

            The harder task is convincing people who are emotionally married to web technologies they dumped huge amounts of effort into understanding that they ought to switch to something that’s better for both them & users in the long term, even when it means slightly more work in the short term.

            The even harder task is to convince companies – for whom truly long-term planning is necessarily difficult – to force their own devs to make the switch. The most effective tool for this, even as much as we might hate it, is hype: suits won’t really understand technical arguments, but they will understand and respond to peer pressure, and likewise, they will understand and respond to influential bloggers who claim that doing something will save them money.

              1. 3

                I think Mozilla is too closely tied to existing web standards. (I felt more at home in Xanadu, but that doesn’t pay the bills!)

          1. 3

            The author claims they’re a programmer, but they still clicked 338 checkboxes manually? Sounds fishy :)

            Here’s what I’ve done on Tumblr, which also has something similar.

            jQuery("input:checked").prop("checked", false); // .prop(), not .removeAttr(), clears the live checked state

            1. 11

              The author is a programmer, a software architect, a hacker, and a curious person in general.

              I can conceive several ways to fool your smart jquery script. If you cannot think of them yourself, you shouldn’t code in Javascript, actually.

              But also I’m a UI and UX designer, at work.

              I was surprised to see a nice UI with such a stupid mistake.

              I hoped the developer on the other end was cool enough to surprise me.

              After the first ten clicks I realized she was not that smart.

              I hit F12. But then I thought “my users cannot hit F12: let’s walk their path and see how I feel”.

              I’m not stupid. I simply care.

              1. 2

                I can conceive several ways to fool your smart jquery script. If you cannot think of them yourself, you shouldn’t code in Javascript, actually.

                • I don’t think he was claiming his solution was a fit for all
                • So by your logic only people who know DOM JS should code in JS? ;)

                I know this was a reply to a slightly provocative comment in defense of the author, but this in particular seems a little silly.

                1. 5

                  I’m the author. And actually I’m sorry for the tone of the reply: I’m tired, and I didn’t take @janiczek’s post as a joke, but as an attempt to justify InfoWorld by calling me fishy.

                  I’m fishy, definitely! :-)

                  But I also care about users. And I’m a European fish…

                  So by your logic only people who know DOM JS should code in JS? ;)

                  Nobody should code in JS. Really. Nobody should.

                  But yes, if you don’t know how DOM JS has been interpreted in the last 10 years, I think you shouldn’t code in JavaScript professionally. You might think I’m exaggerating to get a point, but trust me: everything is still there, under the hood. Ready to break.

                  1. 2

                    Thanks for the kind reply. I wasn’t trying to be provocative myself, just pointing out something that seemed a bit off :) Professionally? Perhaps you’re right in a perfect world, but the fact remains there will always be code monkeys who build or maintain simple systems for a customer base that can’t pay for a seasoned developer. Regardless, I agree with the pain point of your article :)

                    1. 3

                      Mm, I kind of feel like as a profession we should try to have more respect for our own work. Software can cause significant harm, and we’ve all just collectively agreed that it’s okay to prop up companies that want to build broken things and not properly maintain them. Maybe companies that aren’t willing to spend the money to build safe software shouldn’t have the privilege of getting engineers to work for them.

                      I know that’s a tangent and not really what you were trying to address.

                      1. 3

                        I completely agree with your first statement, having respect for your own work is a great virtue.

                        The devil is in the details in regards to companies/individuals who provide shoddy services. Outside passionate and informed social circles, customers vote with their wallets (counting data as a form of currency here), whether that’s a trade for convenience or just a result of plain ignorance.

                        Unfortunately there aren’t any easy remedies to this problem. Shoddy companies/individuals will find ways to work around regulations, and customers will quite happily dig themselves into holes in pursuit of the cheapest or quickest solution. That doesn’t mean you don’t try; in fact, I personally think one of the best tactics we can use for problems such as these is informing the general public of the consequences (though that’s another problem in itself).

                        1. 2

                          Yes, I agree with all of that, and thank you for it.

                        2. 2

                          Maybe companies that aren’t willing to spend the money to build safe software shouldn’t have the privilege of getting engineers to work for them.

                          I see your point, but to me it’s like saying that companies that aren’t willing to spend the money to write proper English shouldn’t have the privilege of getting writers to work for them.

                          They can learn how to write by themselves.

                          I prefer a different approach: turn all people into hackers.

                          1. 1

                            Yeah, I see that point also. But, I mean, writers have historically been more willing to stand up to exploitative labor practices than hackers have… I think there’s a balance to be found, and getting to the right balance requires rethinking some things.

                            1. 3

                              We are just like scribes from Ancient Egypt.

                              Despite the groupthink, we are still at a very early stage of information technology.

                              Just like being a scribe back then, being a hacker today does not mean understanding the power and responsibility we have. But it’s just a matter of time.

                              We will forge the future.

                      2. 1

                        I’m sorry if my post came across as provocative! (Maybe my definition of “fishy” – as English is not my native language – is slightly off compared to yours.)

                        Yeah, “I know I could do X instead of clicking, but the common user can’t, so let’s walk in their shoes” is a fair motivation. Maybe I just expected the thought to be expressed in the post, given you’ve said you’re a programmer. But maybe that’s a silly expectation ¯\_(ツ)_/¯ Thanks for the clarifications in the comments here.

                1. 1

                  Link is 404’d. Do we have other sources as to what’s going on?

                  1. 3

                    https://mobile.twitter.com/samphippen/status/987843354011586560

                    Going public on RT stuff was the wrong way to handle my concerns. I’m sorry. Members of the board and I are talking privately to come to a good outcome.

                    1. -1

                      Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift in space travel and chemical rockets are a dead end.

                      1. 7

                        I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust required is very high compared to what you’re presumably alluding to (electric propulsion, lasers, sails), which only works once in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just with weak gravity). But getting to LEO is still among the hardest bits of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.

                        Beyond trying to reuse the first stage of a conventional rocket, as SpaceX are doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other option I know of that’s not scifi (e.g. space elevators) is nuclear rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion, but the appetite to build and fly such machines is understandably very low, because of the risk of explosions on ascent, or breakup on reentry, spreading a great deal of radioactive material in the high atmosphere over a very large area.

                        But in summary, I don’t really agree with your point, or, more charitably, don’t think I’ve understood it, and would be interested to hear what you actually meant.

                        1. 3

                          I remember being wowed by Project Orion as a kid.

                          Maybe Sagan had a thing for it? The idea in that case was to re-use fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of for military aggression.

                          1. 2

                            Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, so that’s the nearest star in about 40 years. If we can find a source of fissile material in the solar system (that doesn’t have to be launched from Earth) and refine it, interstellar travel could really happen.
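
                            The arithmetic behind the 40-year figure is easy to sanity-check (the 4.25 light-year distance to Proxima Centauri is my own lookup, not from the comment):

                            ```javascript
                            // d light-years at a constant fraction f of light speed takes d / f
                            // years, ignoring the (considerable) acceleration and deceleration time.
                            const distanceLy = 4.25; // roughly the distance to Proxima Centauri
                            const speed = 0.1;       // fraction of c claimed for atomic pulse propulsion
                            const years = distanceLy / speed;
                            console.log(years);      // about 42.5 years, one way
                            ```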

                            1. 1

                              The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html

                          2. 1

                            The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist because we are not investing in researching them.

                            1. 2

                              I don’t think it’s fair to say SpaceX won’t be risk takers, considering this is a company that has almost failed financially pursuing its visions, and has very ambitious goals for the next few years (which, I should mention, require tech development/innovation and are risky).

                              Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding. One or more people have had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that people who fail to bring their ideas to fruition can still pave the path for future development by others.

                              1. 1

                                I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be too cheap, because they do need to build and break full-scale vehicles.

                                I think that the alternative approaches for getting from inside the atmosphere into orbit will include new things developed without any plans to use them in space.

                            2. 2

                              What physical effects would be used?

                              I think that relying on some new physics, or on contiguous objects a few thousand kilometers in size held more than 1 km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what we currently have a disappointment.

                              The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».

                              I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.

                              As for accelerating through the atmosphere — and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets — you either need to accelerate the gas around you, or need to carry reaction mass.

                              In the first case, as you need to overcome the drag, you need some of the air you push back to fly backwards relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially for a rarefied atmosphere. I guess once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term), and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would have to be developed first for some kind of in-atmosphere transportation.

                              And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up. And you want to have high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying these, then burning them in-flight is actually pretty efficient. But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work, and maybe it could even be more efficient than running a reactor on the ground to split water, but nobody knows yet what scale is required to run energy-positive nuclear fusion.

                              All in all, I agree there are directions that could maybe become a better idea for starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be more efficient to reuse and continue.

                              1. 2

                                What do you mean by “chemical rockets are a dead end”? In order to escape planetary orbits, there really aren’t many options. However, for interstellar travel, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple use cases here depending on the option.

                                1. 1

                                  Yeah right after we upload our consciousness to a planetary fungal neural network.

                                1. 1

                                  I have mostly been ignoring the whole WASM thing as I feel it is merely going to make things worse…

                                  It’s the old, “Syntax doesn’t really matter, Syntax is just sugar. It’s the semantics of a language that make a language a language”.

                                  So it seems to me WASM is just JavaScript uglified by removing all syntactic sugar to lay bare the JavaScript semantics.

                                  I.e. no matter which language you choose to sugar it with… the semantics won’t be and can’t be changed.

                                  So you’re going to end up with pages like https://clojurescript.org/about/differences (and worse) for every language that compiles to WASM.

                                  Am I missing something?

                                  1. 2

                                    WASM isn’t JS, and JS doesn’t compile to WASM. It’s lower level and doesn’t even feature a GC. Think bytecode, rather than uglified JS.

                                    A quick scroll through the instruction set gives you a pretty good idea of what it provides. Compiling to WASM allows for some substantial performance benefits over its JS counterpart, in initial start-up time and in a number of runtime scenarios. Realistically, however, most web developers at this point in time will not have any use for it.

                                    There is still a bit of work needed to give WASM the ability to interact with the DOM directly rather than bridge through JS. How they are going to achieve this, I don’t know. It is the main feature I am looking forward to, as it will allow VDOM implementations to be optimized further and incorporated into languages like Rust. See asm-dom for an example.
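
                                    The “bytecode, not uglified JS” point can be seen directly: a WASM module is a raw binary handed to the WebAssembly API, with no JavaScript source anywhere in it. Here is a minimal hand-assembled module exporting an add function (the byte layout follows the WASM binary format):

                                    ```javascript
                                    // A complete WASM binary, byte by byte: one function of type
                                    // (i32, i32) -> i32, exported as "add", whose body is
                                    // local.get 0, local.get 1, i32.add, end.
                                    const bytes = new Uint8Array([
                                      0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
                                      0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section
                                      0x03, 0x02, 0x01, 0x00,                               // function section
                                      0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
                                      0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code section
                                    ]);

                                    const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
                                    console.log(exports.add(2, 3)); // 5
                                    ```

                                    What ships over the wire is exactly those bytes; there is no JS to view-source.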

                                    1. 1

                                      So it seems to me WASM is just JavaScript uglified by removing all syntactic sugar to lay bare the JavaScript semantics.

                                      That’s not the case at all. There is JS interop, but it’s unrelated to JS. It doesn’t even have a GC, for starters (as mentioned in the post).