1. 4

    Microsoft’s perceived lack of clarity in the roadmap (.NET Standard, .NET Core, .NET Framework, etc.) and history of killing off or deprecating frameworks (Silverlight, WinForms, “should I use WPF or UWP?”) are a couple more reasons why startups don’t turn to .NET. Add to that what others have mentioned: the closed source and history of high cost, the lack of ecosystem, and the long history of being actively against open source and copyleft licenses, and Microsoft just doesn’t look like a startup choice. Microsoft was also relatively late as a cloud computing choice. Maybe something will emerge from their BizSpark program and their open source efforts to change their perceived position.

    I didn’t include PHP because there were a lot of startups that had nothing but PHP and Apache Server. That’s partly why I looked at 100 startups and ended up with 23. Startups with just PHP are probably e-commerce websites, or not software companies at all.

    I wonder, is it reasonable to exclude PHP? I could see the point of excluding it because there’s a WordPress blog hanging off the domain or if, as the author states, an e-commerce startup kicked things off with Magento or the like. On the other hand, is PHP just being excluded because, well, PHP?

    1. 3

      I read it as the author saying that they couldn’t distinguish between shops using PHP for a webshop/CMS and doing new software development with it, so it was excluded from the analysis.

    1. 3

      This is great for debugging, but I’m not so keen to see it in production on user machines. From the gist it sounds like it would work on prod code.

      save user recordings when an exception is fired

      I’m not keen on surveillance of other users’ browsing sessions, especially without their consent. Building this kind of feature into a browser normalises it, especially if it’s not something that the user has to opt in to.

      1. 3

        No mention of app engine?

        1. 2

          Sorry, I should have mentioned that I only reviewed services I used. Due to App Engine’s load balancer upload limits I wasn’t able to use it as the application server, so I didn’t look into it too deeply. It definitely looks good though.

          1. 4

            If you can make your use case fit into App Engine’s constrained data and runtime model, then it is absolute nirvana. If you can’t, then you’re stuck using something else.

        1. 3

          It’s probably way out of the intended scope, but could Mitogen be used for basic or throwaway parallel programming or analytics? I’m imagining a scenario where a data scientist has a dataset that’s too big for their local machine to process in a reasonable time. They’re working in a Jupyter notebook, using Python already. They spin up some Amazon boxes, each of which pulls the data down from S3. Then, using Mitogen, they’re able to push out a Python function to all these boxes and gather the results back (or perhaps have them uploaded to S3 when the function finishes).
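          For concreteness, here’s the scatter/gather shape I have in mind, sketched locally with only the standard library. Mitogen would replace the executor with SSH-backed remote contexts running the function on each box; the function name and data below are made up for illustration.

```python
# Local stand-in for the scatter/gather pattern described above.
# With Mitogen, each "worker" would be an SSH-backed remote context
# running summarise() on its own box.
from concurrent.futures import ThreadPoolExecutor

def summarise(chunk):
    # Stand-in for per-box work: each box would pull its chunk from S3
    # and compute a partial result.
    return sum(chunk) / len(chunk)

chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # one "dataset" per box

with ThreadPoolExecutor(max_workers=3) as pool:
    # Push the function out to every worker, then gather results back.
    partials = list(pool.map(summarise, chunks))
```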

          1. 3

            It’s not /that/ far removed. Some current choices would make processing a little more restrictive than usual, and the library core can’t manage much more than 80MB/sec throughput just now, limiting its usefulness for data-heavy IO, such as large result aggregation.

            I imagine a tool like you’re describing with a nice interface could easily be built on top, or maybe as a higher level module as part of the library. But I suspect right now the internal APIs are just a little too hairy and/or restrictive to plug into something like Jupyter – for example, it would have to implement its own serialization for Numpy arrays, and for very large arrays, there is no primitive in the library (yet, but soon!) to allow easy streaming of serialized chunks – either write your own streaming code or double your RAM usage, etc.

            Interesting idea, and definitely not lost on me! The “infrastructure” label was primarily there to allow me to get the library up to a useful point – i.e. permits me to say “no” to myself a lot when I spot some itch I’d like to scratch :)

            1. 3

              This might work, though I think you’d be limited to pure Python code. From the initial post describing it:

              Mitogen’s goal is straightforward: make it child’s play to run Python code on remote machines, eventually regardless of connection method, without being forced to leave the rich and error-resistant joy that is a pure-Python environment.

              1. 1

                If it’s just simple functions you run, you could probably use pySpark in a straightforward way to go distributed (although Spark can handle much more complicated use cases as well).

                1. 2

                  That’s an interesting option, but it presumably requires you to have Spark set up first. I’m thinking of something a bit more ad-hoc and throwaway than that :)

                  1. 1

                    I was thinking that if you’re spinning up AWS instances automatically, you could probably also have a Spark cluster set up along with them, and with that you get the benefit that you neither have to worry much about memory management and function parallelization nor about recovery in case of instance failure. The performance aspect of pySpark (mainly Python object serialization/memory management) is also being actively worked on, indirectly through pandas/PyArrow.

                    1. 2

                      Yeah that’s a fair point. In fact there’s probably an AMI pre-built for this already, and a decent number of data-science people would probably be working with Spark to begin with.

              1. 7

                This version takes Clojars’ playbook runtimes from 16 minutes to 1 minute 30 seconds. It is my favourite piece of software in recent years. Highly recommended if you use Ansible.

                1. 4

                  Adding/testing support in Deps for Clojure’s tools.deps CLI. Theoretically there shouldn’t be much needed, if anything, but I need to document it for customers, and will probably write a guide on how to use it in CI.

                  I’m also accumulating instructions for enough different build tools that I need to add tabs or some other information hiding mechanism on the setup page.

                  1. 3

                    I like the Moderation Log for this post:

                    Story: Rails Asset Pipeline Directory Traversal Vulnerability (CVE-2018-3760)
                    Action: changed tags from “ruby” to “ruby security web”
                    Reason: Adding a couple tags… after checking the Lobsters production.rb.

                    1. 1

                      Unfortunately, while the headline is clever, it’s not true.

                      Palantir’s worst is done with code written in house, with the same open source codebase we all start with. So long as there are people willing to work there, bad things are going to be written into code and deployed.

                      1. 15

                        One note, the specific company wasn’t Palantir, but was in a similar space.

                        I agree that not serving this company has a very small effect on them, but it was better than the alternative. Additionally, if enough companies refuse to work with companies like Palantir, it would begin to hinder their efforts.

                        1. 8

                          not serving this company has a very small effect on them

                          It has a big effect, instead. On the system. On their employees. On your employees and your customers…

                          Capitalism fosters a funny belief through its propaganda (aka marketing): that humans’ goals are always individualistic, and yet social improvements always come from collective fights. This contradiction (deeply internalized, like many other dysfunctional ones) fools many people: why be righteous (whatever that means to me) if it doesn’t change anything for me?

                          It’s just a deception, designed to marginalize behaviours that could challenge consumerism.

                          But if you cannot say “No”, you are a slave. You proved you are not.

                          And freedom is always revolutionary, even without a network effect.

                          1. 1

                            Sounds like it was https://www.wired.com/story/palmer-luckey-anduril-border-wall/ ? Palantir at least has some positive clients, like the SEC and CDC.

                          2. 4

                            But… that wasn’t his moral question. He was being offered a chance to be a vendor of services to a Palantir-like surveillance outfit engaged in ethnic cleansing, not offered a job with a workstation. So yeah, the headline was absolutely true. It is up to individuals to refuse, and by publicly refusing to engage, not necessarily internally, they will inspire others to not profit by these horrors.

                            1. 0

                              It wasn’t. But the quip implies that we can act like a village, when the sad truth is that the low barrier to entry in software development means we can’t really act like a village, and stop people with our skillset from putting vile stuff into code.

                              1. 3

                                Yeah, I don’t really understand this from the original post. And for the record, the low barrier to entry is absolutely not what allows people to put vile stuff in code. Extremely talented, well educated, highly intelligent people do horrifying stuff every single day.

                                1. 1

                                  This is the best attitude one can desire from slaves. Don’t question the masters. It’s pointless.

                                  1. 1

                                    We can act like a village, we just can’t act like the entire population. Choosing not to work at completely unethical places when we can afford it does at the very least increase the cost and decrease the quality of the evil labor. Things could even reach a point where the only people willing to work there are saboteurs.

                              1. 6

                                The main comment themes I found were:

                                • Error messages: still a problem
                                • Spec: promising, but how to adopt fully unclear
                                • Docs: still bad, but getting a little better
                                • Startup time: always been a problem, becoming more pressing with serverless computing
                                • Marketing/adoption: Clojure still perceived as niche/unknown by non-technical folk
                                • Language: some nice ideas for improvement
                                • Language Development Process: not changing, still an issue
                                • Community: mostly good, but elitist attitudes are a turnoff, and there is a growing perception CLJ is shrinking
                                • Libraries: more guidance needed on how to put them together
                                • Other targets: a little interest in targeting non JS/JVM targets
                                • Typing: less than in previous years, perhaps people are finding spec meets their needs?
                                • ClojureScript: improving fast, tooling still tricky, NPM integration still tricky
                                • Tooling: still hard to put all the pieces together
                                • Compliments: “Best. Language. Ever.”

                                Lots of room for improvement here, but I still love working with Clojure and am thankful that I get to do so.

                                1. 3

                                  I’m running on Google Cloud Platform, but there are enough similarities to AWS that hopefully this is helpful.

                                  I use Packer to bake a golden VM image that includes monitoring, logging, etc., based on the most recent Ubuntu 16.04 update. I rebuild the golden image roughly monthly, unless there is a security issue to patch. Then when I release new versions of the app I build an app-specific image based on the latest golden image. It copies in an uberjar from Google Cloud Storage (built by Google Cloud Builder). All of the app images live in the same image family.

                                  I then run a rolling update to replace the current instances in the managed instance group with the new instances.

                                  The whole infrastructure is managed with Terraform, but I only need to touch Terraform if I’m changing cluster configuration or other resources. Day to day updates don’t need to go through Terraform at all, although now that the GCP Terraform provider supports rolling updates, I may look at doing it with Terraform.
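                                  With placeholder names for the image template, instance group, and zone, the bake-then-roll cycle above looks roughly like this (abbreviated; the Packer template itself isn’t shown):

```shell
# Bake an app-specific image on top of the latest golden image
packer build -var 'base_family=golden' app-image.json

# Roll the managed instance group onto an instance template that
# references the newly baked image
gcloud compute instance-groups managed rolling-action start-update app-mig \
    --version template=app-template-v42 \
    --zone us-central1-a
```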

                                  It’s just me for everything, so I’m responsible for it all.

                                  1. 3

                                    I just backed this project on Kickstarter. If it can be made to work like it promises, it would be a huge productivity boost for me on several projects. Currently with Deps, I bake an image with Packer and Ansible for every new deployment (based on a golden image). That has been getting a bit slow, so I was looking at other deployment options. Having super fast Ansible builds would be great, and make that not as necessary.

                                    1. 2

                                      Hi Daniel, I keep forgetting to reply here – thanks so much for your support! For every nice complimentary comment I’ve been receiving 5 complex questions elsewhere. I’ve just posted a short update, and although it is running a little behind, it looks like the campaign still has legs. I’m certainly here until the final hour. :) Thanks again!

                                    1. 3

                                      At work we’ve adopted ADRs (Architecture Decision Records). These are similar to RFCs but a little bit lighter weight. We generally use them for any architectural decision we make that is likely to affect more than one person, that took a while to understand, or that will be impactful over a long time.

                                      The great thing about them is that they’re structured so they can be written stream-of-consciousness, articulating the context (usually the most important thing), the decision, and its impact.

                                      If we don’t have a decision immediately we can leave it open as a PR for discussion before finishing it.
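                                      The skeleton we use is close to Michael Nygard’s original ADR template; the section names here are the generic ones, not necessarily our exact format:

```markdown
# ADR 7: Short noun-phrase title of the decision

## Status
Proposed | Accepted | Deprecated | Superseded by ADR-12

## Context
The forces at play: the problem, technical and team constraints,
and what makes a decision necessary. Usually the most important section.

## Decision
"We will ..." stated in full sentences, in the active voice.

## Consequences
What becomes easier or harder afterwards, including the trade-offs
we are knowingly accepting.
```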

                                      1. 5

                                        I’m really pleased with the quality of projects that were submitted to Clojurists Together, and my only regret is that we couldn’t pick more of them. A huge thanks to our awesome members, we couldn’t do it without y’all.

                                        1. 26

                                          https://hackerone.com/reports/293359#activity-2203160 via https://twitter.com/infosec_au/status/945048806290321408 seems to at least shed a bit more light on things. I don’t find this kind of behavior to be OK at all:

                                          ”Oh my God.

                                          Are you seriously the Program Manager for Uber’s Security Division, with a 2013 psych degree and zero relevant industry experience other than technical recruiting?

                                          LULZ”

                                          1. 6

                                            The real impact with this vulnerability is the lack of rate limiting and/or IP address blacklisting for multiple successive failed authentication attempts, both issues of which were not mentioned within your summary dismissal of the report. Further, without exhaustive entropy analysis of the PRNG that feeds your token generation process, hand waving about 128 bits is meaningless if there are any discernible patterns that can be picked up in the PRNG.

                                            Hrm. He really wants to be paid for this?

                                            1. 3

                                              I mean, it’s a lot better than, say, promising a minimum of 500 for unlisted vulnerabilities and then repeatedly not paying it. Also, that’s not an unfair critique–if you’re a program manager in a field, I’d expect some relevant experience. Or, maybe, we should be more careful about handing out titles like program manager, project manager, product manager, etc. (a common issue outside of security!).

                                              At the core of it, it seems like the fellow dutifully tried to get some low-hanging fruit and was rebuffed, multiple times. This was compounded when the issues were closed as duplicate or known or unimportant or whatever…it’s impossible to tell the difference from the outside between a good actor saying “okay this is not something we care about” and a bad actor just wanting to save 500 bucks/save face.

                                              Like, the correct thing to have done would have been to say “Hey, thanks for reporting that, we’re not sure that that’s a priority concern right now but here’s some amount of money/free t-shirt/uber credits, please keep at it–trying looking .”

                                              The fact that the company was happy to accept the work product but wouldn’t compensate the person for what sounded like hours and hours of work is a very bad showing.

                                              1. 9

                                                Also, that’s not an unfair critique–if you’re a program manager in a field, I’d expect some relevant experience.

                                                No-one deserves to be talked to in that way, in any context, but especially not in a professional one.

                                                Or, maybe, we should be more careful about handing out titles like program manager, project manager, product manager, etc. (a common issue outside of security!).

                                                There is no evidence that the title was “handed out”, especially since we don’t even know what the job description is.

                                                1. 3
                                                  1. open the hackerone thread
                                                  2. open her profile to find her name
                                                  3. look her up on linkedin

                                                  I don’t presume to know what her job entails or whether or not she’s qualified, but titles should reflect reality or they lose their value. She certainly has a lot of endorsements on LinkedIn, which often carry more value than formal education.

                                                  It’s “Program Manager, Security” btw.

                                                  1. 2

                                                    There is no evidence that the title was “handed out”, especially since we don’t even know what the job description is.

                                                    There’s no evidence that it wasn’t–the point I’m making is that, due to practices elsewhere in industry, that title doesn’t really mean anything concrete.

                                              1. 11

                                                Hey @loige, nice writeup! I’ve been aching to asks a few questions to someone ‘in the know’ for a while, so here goes:

                                                 How do serverless developers ensure their code performs to spec (local testing), handles anticipated load (stress testing), and degrades deterministically under adverse network conditions (Jepsen-style or chaos testing)? How do you implement backpressure? Load shedding? What about logging? Configuration? Continuous Integration?

                                                All instances of applications written in a serverless style that I’ve come across so far (admittedly not too many) seemed to offer a Faustian bargain: “hello world” is super easy, but when stuff breaks, your only recourse is $BIGCO support. Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                Can anyone with production experience chime in on the above issues?

                                                1. 8

                                                  Great questions!

                                                  How do serverless developers ensure their code performs to spec (local testing)

                                                  AWS, for example, provides a local implementation of Lambda for testing. Otherwise normal testing applies: abstract out business logic into testable units that don’t depend on the transport layer.
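                                                  A minimal sketch of that separation, with a made-up event shape and business rule (nothing here is a real AWS API beyond the `(event, context)` handler signature):

```python
import json

def apply_discount(order_total, discount_pct):
    """Pure business logic: unit-testable with no AWS machinery at all."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(order_total * (1 - discount_pct / 100), 2)

def handler(event, context):
    """Thin Lambda adapter: unpack the event, call the logic, pack the response."""
    body = json.loads(event["body"])
    total = apply_discount(body["order_total"], body["discount_pct"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

Tests then exercise `apply_discount` directly, and only a couple of integration tests need the handler or a local Lambda runtime.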

                                                  handles anticipated load (stress testing)

                                                  Staging environment.

                                                  and degrades deterministically under adverse network conditions (Jepsen-style or chaos- testing)?

                                                  Trust Amazon / Microsoft / Google. Exporting this problem to your provider is one of the major value adds of serverless architecture.

                                                  How do you implement backpressure? Load shedding?

                                                  Providers usually have features for this, like rate limiting for different events. But it’s not turtles all the way down, eventually your code will touch a real datastore that can overload, and you have to detect and propagate that condition same as any other architecture.
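                                                  For the “detect and propagate” part, a minimal token-bucket sketch (the class and the HTTP 429 suggestion are illustrative, not a provider feature):

```python
import time

class TokenBucket:
    """Refill `rate` tokens per second up to `capacity`; shed work when empty."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # overloaded: propagate upstream, e.g. as an HTTP 429

# Two requests fit in the bucket; the rest are shed until tokens refill.
bucket = TokenBucket(rate=1, capacity=2)
results = [bucket.allow() for _ in range(4)]
```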

                                                  What about logging?

                                                  Also a provider value add.

                                                  Configuration?

                                                  Providers have environment variables or something spiritually similar.

                                                  Continuous Integration?

                                                  Same as local testing, but automated?

                                                  but when stuff breaks, your only recourse is $BIGCO support

                                                  If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, when your electrical provider blacks out, when your fuel provider misses a delivery, when your fuel mines have an accident. The only difference is how big the provider is, and how much money its customers pay it not to break. Serverless is at the bottom of the money food chain; if you want fewer problems then you take on more responsibility and spend the money to do it better than the provider for your use case, or use more than one provider.

                                                  Additionally, your business is now non-trivially coupled to the $BIGCO and at the mercy of their decisions.

                                                  Double-edged sword. You’ve non-trivially coupled to $BIGCO because you want them to make a lot of architectural decisions for you. So again, do it yourself, or use more than one provider.

                                                  1. 4

                                                    And great answers, thank you ;)

                                                    Having skimmed the SAM Local doc, it looks like they took the same approach as they did with DynamoDB local. I think this alleviates a lot of the practical issues around integrated testing. DynamoDB Local is great, but it’s still impossible to toggle throttling errors and other adverse conditions to check how the system handles these, end-to-end.

                                                    The staging-env and CI solution seems to be a natural extension of server-full development, fair enough. For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases. This approach goes contrary to the opaque nature of the serverless substrate. You only get the metrics AWS/Google/etc. can provide you. I presume dtrace and friends are not welcome residents.

                                                    If their underlying infrastructure breaks, yep. But every architecture has this problem, it just depends on who your provider is. When your PaaS provider breaks, when your IaaS provider breaks, when your colo provider breaks, when your datacenter breaks, (…)

                                                    Well, there’s something to be said for being able to abstract away the service provider and just assume that there are simply nodes in a network. I want to know the ways in which a distributed system can fail – actually recreating the failing state is one way to find out and understand how the system behaves and what kind of countermeasures can be taken.

                                                    if you want fewer problems then you take on more responsibility

                                                    This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”. I think the learned word for this is the deskilling of the workforce.

                                                    [1] The lack of transparency on the part of the cloud providers around minor issues doesn’t help.

                                                    1. 3

                                                      For stress testing specifically, though, it’s great to have full access to the SUT, and to be able to diagnose which components break (and why) as the load increases.

                                                      It is great, and if you need it enough you’ll pay for it. If you won’t pay for it, you don’t need it, you just want it. If you can’t pay for it, and actually do need it, then that’s not a new problem either. Plenty of businesses fail because they don’t have enough money to pay for what they need.

                                                      This is something of a pet peeve of mine. Because people delegate so much trust to cloud providers, individual engineers building software on top of these clouds are held to a lower and lower standard. If there is a hiccup, they can always blame “AWS issues”[1]. Rank-and-file developers won’t get asked why their software was not designed to gracefully handle these elusive “issues”

                                                      I just meant to say you don’t have access to your provider’s infrastructure. But building more resilient systems takes more time, more skill, or both. In other words, money. Probably you’re right to a certain extent, but a lot of the time the money just isn’t there to build out that kind of resiliency. Businesses invest in however much resiliency will make them the most money for the cost.

                                                      So when you see that happening, ask yourself “would the engineering cost required to prevent this hiccup provide more business value than spending the same amount of money elsewhere?”

                                                  2. 4

                                                    @pzel You’ve hit the nail on the head here. See this post on AWS Lambda Reserved Concurrency for some of the issues you still face with Serverless style applications.

                                                    The Serverless architecture style makes a ton of sense for a lot of applications, however there are lots of missing pieces operationally. Things like the Serverless framework fill in the gaps for some of these, but not all of them. In 5 years time I’m sure a lot of these problems will have been solved, and questions of best practices will have some good answers, but right now it is very early.

                                                    1. 1

                                                      I agree with @danielcompton that serverless is still a pretty new practice in the market and we are still lacking an ecosystem able to support all the possible use cases. It will get better with time, but having spent the last 2 years building enterprise serverless applications, I have to say that the whole ecosystem is not so immature, and it can be used already today with some extra effort. I believe that in most cases the benefits (not having to worry too much about the underlying infrastructure, not paying for idle, higher focus on business logic, high availability and auto-scalability) far outweigh the extra effort needed to learn and use serverless today.

                                                    2. 3

                                                      Even though @peter already gave you some great answers, I will try to complement them with my personal experience/knowledge (I have used serverless on AWS for almost 2 years now building fairly complex enterprise apps).

                                                      How do serverless developers ensure their code performs to spec (local testing)

                                                      The way I do is a combination of the following practices:

                                                      • unit testing
                                                      • acceptance testing (with mocked services)
                                                      • local testing (manual, mostly using the serverless framework invoke local functionality, but pretty much equivalent to SAM). Not everything can be tested locally, depending on which services you use.
                                                      • remote testing environment (to test things that are hard to test locally)
                                                      • CI pipeline with multiple environments (run automated and manual tests in QA before deploying to production)
                                                      • smoke testing

                                                      What about logging?

                                                      In AWS you can use cloudwatch very easily. You can also integrate third parties like loggly. I am sure other cloud providers will have their own facilities around logging.

                                                      Configuration?

                                                      In AWS you can use Parameter Store to hold sensitive variables and propagate them to your Lambda functions as environment variables. In terms of infrastructure as code (which you can include under the broad definition of “configuration”) you can adopt tools like Terraform or CloudFormation (in AWS specifically, the predefined choice of the serverless framework).
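                                                      The environment-variable side of that is plain Python. A small sketch (the variable names are made up, and in a real deployment the values would be injected from Parameter Store by the tooling, not set in code):

```python
import os

# In a real Lambda these are injected by the deployment tooling; we set a
# value here only so the sketch is self-contained.
os.environ.setdefault("APP_DB_HOST", "db.internal.example")

DB_HOST = os.environ["APP_DB_HOST"]                  # required: KeyError fails fast
LOG_LEVEL = os.environ.get("APP_LOG_LEVEL", "INFO")  # optional, with a default
```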

                                                      Continuous Integration?

                                                      I have used serverless successfully with both Jenkins and CircleCI, but I guess almost any CI tool will do. You just need to configure your testing steps and your deployment strategy into a CI pipeline.

                                                      when stuff breaks, your only recourse is $BIGCO support

                                                      Sure. But the odds are that your hand-rolled solution will break more often than the one provided by any major cloud provider. Also, those cloud providers very often provide refunds if you have outages caused by the provider’s infrastructure (assuming you followed their best practices on high-availability setups).

                                                      your business is now non-trivially coupled to the $BIGCO

                                                      This is my favourite as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture that is so generic that it’s not affected by any kind of vendor lock-in. When you are using a cloud provider and a methodology like serverless it’s totally true you have a very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to pay the risk of vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching the cloud provider if needed? In my experience, I have found that it’s very rarely a good idea to over-abstract solutions only to reduce vendor lock-in.

                                                      I hope this can add another perspective to the discussion and enrich it a little bit. Feel free to ask more questions if you think my answer wasn’t sufficient here :)

                                                      1. 6

                                                        This is my favourite as I have a very opinionated view on this matter. I simply believe it’s not possible to avoid vendor lock-in. Of course vendor lock-in comes in many shapes and forms and at different layers, but my point is that it’s fairly impractical to come up with an architecture that is so generic that it’s not affected by any kind of vendor lock-in.

                                                        Really? I find it quite easy to avoid vendor lock-in - simply running open-source tools on a VPS or dedicated server almost completely eliminates it. Even if a tool you use is discontinued, you can still use it, and have the option of maintaining it yourself. That’s not at all the case with AWS Lambda/etc. Is there some form of vendor lock-in I should be worried about here, or do you simply consider this an impractical architecture?

                                                        When you are using a cloud provider and a methodology like serverless it’s totally true you have a very high vendor lock-in, as you will be using specific services (e.g. API Gateway, Lambda, DynamoDB, S3 in AWS) that are unique to that provider, and equivalent services will have very different interfaces with other providers. But I believe the question should be: is it more convenient/practical to pay the risk of vendor lock-in, rather than spending a decent amount of extra time and effort to come up with a more abstracted infrastructure/app that allows switching the cloud provider if needed? In my experience, I have found that it’s very rarely a good idea to over-abstract solutions only to reduce vendor lock-in.

                                                        The thing about vendor lock-in is that there’s a quite low probability that you will pay an extremely high price (for example, the API/service you’re using being shut down). Even if it’s been amazing in all the cases you’ve used it in, it’s still entirely possible for the expected value of using these services to be negative, due to the possibility of vendor lock-in issues. Thus, I don’t buy that it’s worth the risk - you’re free to do your own risk/benefit calculations though :)

                                                        1. 1

                                                          I probably have to clarify that for me “vendor lock-in” is a very high-level concept that includes every sort of “tech lock-in” (which would probably be a better buzzword!).

                                                          My view is that even if you use an open source tech and host it yourself, you end up making a lot of complex tech decisions from which it is going to be difficult (and expensive!) to move away.

                                                          Have you ever tried to migrate from Redis to Memcached (or vice versa)? Even though the two systems are quite similar and a migration might seem trivial, in a complex infrastructure moving from one system to the other is still a fairly complex operation with a lot of implications (code changes, language-driver changes, different interfaces, data migration, provisioning changes, etc.).
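
                                                          As a small illustration of the interface differences: redis-py expresses a TTL as set(key, value, ex=seconds) while python-memcached uses set(key, value, time=seconds). A thin adapter is one way to contain the difference; the in-memory client below is a testing stand-in, not a real server.

```python
class Cache:
    """Minimal cache interface an app might code against (illustrative)."""
    def set(self, key, value, ttl):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class RedisCache(Cache):
    # redis-py expresses TTLs as set(key, value, ex=seconds)
    def __init__(self, client):
        self.client = client
    def set(self, key, value, ttl):
        self.client.set(key, value, ex=ttl)
    def get(self, key):
        return self.client.get(key)

class MemcacheCache(Cache):
    # python-memcached expresses TTLs as set(key, value, time=seconds)
    def __init__(self, client):
        self.client = client
    def set(self, key, value, ttl):
        self.client.set(key, value, time=ttl)
    def get(self, key):
        return self.client.get(key)

class DictClient:
    """In-memory stand-in so the adapters can be exercised without a server."""
    def __init__(self):
        self.data = {}
    def set(self, key, value, **kwargs):  # ignores ex=/time= for the demo
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)
```

Even with such an adapter, data migration, provisioning, and operational differences remain, which is the parent comment’s point.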

                                                          Also, another thing I am very opinionated about is what’s valuable when developing a tech product (especially in a startup context). I believe delivering value to the customers/stakeholders is the most important thing while building a product. Whatever abstraction makes it easier for the team to focus on business value deserves my attention. In that respect I found serverless to be a very good abstraction, so I am happy to accept some tradeoffs: less “tech freedom” (I have to stick to the solutions offered by my cloud provider) and higher vendor lock-in.

                                                        2. 2

                                                          I simply believe it’s not possible to avoid vendor lock-in.

                                                          Well, there is vendor lock-in and vendor lock-in… Ever heard of Oracle Forms?

                                                      1. 2

                                                        You can configure CircleCI v1 through the web interface, and CircleCI v2 is configured via .circleci/config.yml.
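
                                                        For reference, a minimal CircleCI 2.0 config might look like the following; the Docker image and commands are placeholders, not taken from the thread.

```yaml
# .circleci/config.yml -- minimal CircleCI 2.0 sketch
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8   # placeholder image
    steps:
      - checkout
      - run: npm install
      - run: npm test
```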

                                                        1. 1

                                                          Thanks!

                                                          It seems a bit more complicated, though. CircleCI 1.0 configuration docs state at the very beginning that:

                                                          CircleCI automatically infers settings from your code, so it’s possible you won’t need to add any custom configuration. If you do need to tweak settings, you can create a circle.yml in your project’s root directory and CircleCI will read it each time it runs a build.

                                                          It doesn’t state anything about being able to provide such configuration without putting it in the repo. And there aren’t many details about this inference either.

                                                          1. 1

                                                            I don’t like that you cannot create a standalone account in CircleCI. You have to sign up via GitHub, BitBucket or Google. Even if you sign up via Google, you have to connect to GitHub or BitBucket if you want to start building anything.


                                                            GitHub

                                                            CircleCI by circleci wants to access your account

                                                            Personal user data
                                                            Email addresses (read-only)

                                                            This application will be able to read your private email addresses.

                                                            Repositories
                                                            Public and private

                                                            This application will be able to read and write all public and private repository data. This includes the following:

                                                            • Code
                                                            • Issues
                                                            • Pull requests
                                                            • Wikis
                                                            • Settings
                                                            • Webhooks and services
                                                            • Deploy keys
                                                            • Collaboration invites

                                                            BitBucket

                                                            CircleCI is requesting access to the following:

                                                            • Read your account information
                                                            • Read your team’s project settings and read repositories contained within your team’s projects
                                                            • Read your repositories and their pull requests
                                                            • Administer your repositories
                                                            • Read and modify your repositories
                                                            • Read your team membership information
                                                            • Read and modify your repositories’ webhooks
                                                          1. 4

                                                            Have you tried Ada? I never looked at it myself, but the article[1] posted today looks very interesting. And there seems to be a well-supported web server with WebSocket support[2].

                                                            [1] http://blog.adacore.com/theres-a-mini-rtos-in-my-language
                                                            [2] https://docs.adacore.com/aws-docs/aws/

                                                            1. 4

                                                              TBH I can’t believe Ada is still alive. I thought it was something we covered in our Theory of Programming Languages course, and assumed nothing other than obsolete systems used it. Would give it a shot for sure!

                                                              1. 4

                                                                This article trying to use it for audio applications will give you a nice taste of the language:

                                                                http://www.electronicdesign.com/embedded-revolution/assessing-ada-language-audio-applications

                                                                This Barnes book shows how it’s systematically designed for safety at every level:

                                                                https://www.adacore.com/books/safe-and-secure-software

                                                                Note: The AdaCore website has a section called Gems that gives tips on a lot of useful ways to apply Ada.

                                                                Finally, if you do Ada, you get the option of using Design-by-Contract (built into Ada 2012) and/or the SPARK language. One gives you clear specifications of program behavior that take you right to the source of errors when fuzzing or something. The other is a smaller variant of Ada that integrates with automated theorem provers to try to prove your code free of common errors in all cases, versus just the ones you think of with testing. Those errors include things like integer overflow or division by zero. Here are some resources on those:

                                                                http://www.eiffel.com/developers/design_by_contract_in_detail.html

                                                                https://en.wikipedia.org/wiki/SPARK_(programming_language)

                                                                https://www.amazon.com/Building-High-Integrity-Applications-SPARK/dp/1107040736

                                                                The book and even the language were designed for people without a background in formal methods. I’ve gotten positive feedback from a few people on it. Also, I encouraged some people to try SPARK for safer native methods in languages such as Go. It’s kludgier than things like Rust that were designed with that in mind, but it still works.
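
                                                                For readers unfamiliar with Design-by-Contract, here is a rough Python analogy to Ada 2012’s Pre/Post aspects: pre- and postconditions checked at call time. Purely illustrative; Ada checks these as part of the language, and SPARK can prove them statically rather than at runtime.

```python
# Rough Python analogy to Ada 2012 Pre/Post contract aspects:
# runtime-checked pre- and postconditions on a function.
def contract(pre=None, post=None):
    def wrap(fn):
        def inner(*args):
            if pre is not None:
                assert pre(*args), f"precondition violated in {fn.__name__}"
            result = fn(*args)
            if post is not None:
                # Postcondition sees the result plus the original arguments.
                assert post(result, *args), f"postcondition violated in {fn.__name__}"
            return result
        return inner
    return wrap

@contract(pre=lambda a, b: b != 0,
          post=lambda r, a, b: a == r * b + a % b)
def int_divide(a, b):
    # The postcondition above restates the Euclidean division identity.
    return a // b
```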

                                                                1. 2

                                                                  I’ve taken a look around Ada and got quite confused about the ecosystem and which versions of the language are available for free vs commercially. Are you able to give an overview of the different dialects/versions/recommended starting points?

                                                                  1. 4

                                                                    The main compiler vendor for Ada is AdaCore - that’s the commercial compiler. There is an open source version that AdaCore helps to develop, called GNAT, and it’s part of the GCC toolchain. It’s licensed under a special GMGPL license or GPLv3 with a runtime exception - meaning you can use both for closed source software development (as long as you don’t modify the compiler, that is).

                                                                    There is also GNAT AUX, which was developed by John Marino as part of a project I was part of in the past.

                                                                    1. 1

                                                                      Thanks for clearing up the unusual license.

                                                                    2. 2

                                                                      I hear there is or was some weird stuff involved in the licensing. I’m not sure exactly what’s going on there. I just know they have a GPL version of GNAT that seemed like it can be used with GPL’d programs:

                                                                      https://www.adacore.com/community

                                                                      Here’s more on that:

                                                                      https://en.wikipedia.org/wiki/GNAT

                                                              1. 18

                                                                This is something that the rest of the team and I have been working on for more than a year. I think open source sustainability is going to be one of the big issues the tech community needs to face in the next ten years, in all communities, but particularly in niche ones like Clojure. Ruby Together has a really good model, so we copied it and applied it to the Clojure community (with some tweaks). Happy to answer any questions people have.

                                                                1. 17

                                                                  Thank you for putting this together–all of you. I’m signing up Jepsen as a corporate sponsor right now.

                                                                1. 3

                                                  I’m currently using Alice and Bob as known users in my tests. I use Eve as a malicious user who probes security boundaries, instead of just submitting wrong data.
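
                                                  A minimal sketch of that convention, with a hypothetical authorization check: Alice and Bob exercise the happy paths, while Eve asserts that the security boundary actually denies access rather than merely erroring on bad input.

```python
# Hypothetical authorization check and user records, for illustration only.
def can_read(user, document):
    return (user["id"] == document["owner"]
            or user["id"] in document["shared_with"])

alice = {"id": "alice"}
bob = {"id": "bob"}
eve = {"id": "eve"}

doc = {"owner": "alice", "shared_with": ["bob"]}

def test_access_control():
    assert can_read(alice, doc)       # owner: known-good path
    assert can_read(bob, doc)         # explicitly shared: known-good path
    assert not can_read(eve, doc)     # Eve probes the boundary and is denied

test_access_control()
```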

                                                                  1. 2

                                                    This kind of attack isn’t possible (AFAIK) on Google Cloud, because you need to set a Metadata-Flavor header or the metadata server won’t respond with any data: https://cloud.google.com/compute/docs/storing-retrieving-metadata#querying. Obviously there are lots of other things that can go wrong with reflected XSS, but defense in depth is always good. I suspect it might well be too late for AWS to switch to a method like this, though.
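
                                                    A simplified stand-in for the check the comment describes (not Google’s actual code): the metadata server refuses requests lacking the custom Metadata-Flavor: Google header, and a browser-initiated cross-site request cannot add such a header, which is what blocks this class of attack.

```python
# Simplified model of the GCE metadata server's header check.
# Real requests must carry "Metadata-Flavor: Google"; browsers cannot
# attach this custom header on attacker-triggered requests.
def metadata_request_allowed(headers):
    return headers.get("Metadata-Flavor") == "Google"
```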