1. 5

    Where YAML gets most of its bad reputation from is actually not YAML itself, but the fact that some projects (to name a few: Ansible, Salt, Helm, …) shoehorn a programming language into YAML by adding a template language on top, and then pretend it’s declarative because it’s YAML. YAML + templating is as declarative as any language that has branches and loops, except that YAML was never designed to be a programming language and is quite poor at it.

    1. 2

      In the early days, Ant (Java build tool) made this mistake. And it keeps getting made. For simple configuration, YAML might be fine (though I don’t enjoy using it), but there comes a point where a programming language needs to be there. Support both: YAML (or TOML, or even JSON) and then a programming language (statically typed, please, don’t make the mistake that Gradle made in using Groovy – discovery is awful).

      1. 4

        I’m very intrigued by Dhall, though I’ve not actually used it. But it is, from the GitHub repo,

        a programmable configuration language that is not Turing-complete

        You can think of Dhall as: JSON + functions + types + imports

        It sounds neat.

        1. 1

          There is also UCL (Universal Config Language?), which is like nginx config + JSON/YAML emitters + macros + imports. It does some things that bother me, so I stick to TOML, but it seems to be gaining some traction in the FreeBSD world. One thing I do like about it is that there’s a CLI for getting/setting elements in a UCL file.

      2. 1

        Yes! This is one of the reasons I’m somewhat scared of people who like Ansible.

        1. 1

          Yep! People haven’t learned from mistakes. Here’s a list of XML based programming languages.

        1. 10

          I was curious as to how this bug could have occurred and found on the CompCert C page that

          This part of CompCert (transformation of C source text to CompCert C abstract syntax trees) is not formally verified.

          So it seems CompCert C’s guarantee is a bit weaker than

          the generated executable code behaves exactly as prescribed by the semantics of the source program

          as claimed on http://compcert.inria.fr/

          And it’s more like, “the generated executable behaves exactly as prescribed by the semantics of the AST we parse your C source program into”

          1. 4

            Right: the bug was actually in the parser, not the compiler proper. Maybe I should edit the title?

            Seemed like a big deal anyway.

            Edit: Noted parser in title.

            1. 6

              http://compcert.inria.fr/man/manual001.html#sec10 shows the project structure, and claims the parser is in fact verified.

              Edit: https://xavierleroy.org/publi/validated-parser.pdf has the full story on the verified parser. In brief, it’s verified with respect to an input grammar. Which, in this case, contained an error.

              1. 5

                [CompCert’s parser is] verified with respect to an input grammar

                If that’s so, why doesn’t the commit that fixed this bug contain a change to the grammar or the proof?

                1. 3

                  A look through the patch shows I was wrong about it being a grammar bug. Those do happen, but the actual cause of failure was a little more complex here. I should be more careful about speculating out loud!

                  All the changes were in the “typed elaborator” phase of the compiler, which turns the raw parsed AST into the type-annotated AST that sits at the top of the verified stack. It seems like a basic error in handling variable scopes, triggered by the shadowed variable names.

                  Formally reasoning about scope extrusion is difficult and tedious… but I bet verifying the typed elaborator just rose a little higher in the CompCert development priority queue.

                2. 4

                  Deleted my other comment since I didn’t know the parser had been verified. My bad. Relatedly, I will say the tally of bugs found in specs vs. implementation will remain interesting, since the weakest version of the proof says the implementation will be exactly as correct (or as flawed) as the specification. The Csmith bugs were spec errors, IIRC, that the implementation faithfully followed. ;)

                  Anyway, such results continue to remind folks formal methods need to be supplemented with other verification techniques.

                3. 2

                  I’m being pedantic, but the “runtime error” occurred in a program that was compiled by CompCert (and its parser), so the title still isn’t really accurate. (Just saying “error” would be enough…)

                  1. 1

                    Hey, no better venue for pedantry than a story like this. :-)

                    I’d edit the title again, but I can’t. I’d speculate that there’s some kind of time limit or score threshold on title edits… not going to go read the Lobsters source to find out exactly, though.

                4. -3

                  Formal methods is so intellectually dishonest.

                  Once I have a translator for language-X, I can claim to have created a formally verified compiler for that language, using the same arguments made by the authors of the Compcert papers.

                  Papers involving formal methods should have to pass the same truthfulness and accuracy requirements as soap powder adverts: http://shape-of-code.coding-guidelines.com/2013/03/10/verified-compilers-and-soap-powder-advertising/

                  1. 9

                    You can’t claim to have created a verified compiler until you have a formal semantics of source, of target, a spec of your transformations, and proof they do what they do. It’s so hard they started with mini-Pascals before working up to C subsets (eg C0->TALC) to the subset in CompCert. There’s also semantics by another team to catch undefined behavior.

                    So, you just made that crap up with no evidence. Further, what empirical evidence we have from data in the field shows verification works at reducing defects in various parts of the lifecycle. As far as CompCert goes, Csmith empirically confirmed the CompCert team’s claims of strong defect reduction when it threw tons of CPU time at trying to find defects. It found just a few spec errors in CompCert but piles of errors in the others. Regehr et al. were amazed by its quality.

                    Comments about worth of formal methods should have to use the scientific method or something when making claims. In this case:

                    1. Define correctness requirements for compilers. There may be several.

                    2. Assess how well various compilers met them, how often, and with what development methods.

                    3. Determine if formally-verified compilers achieved higher quality than those without it. Also, compare to other QA methods such as code review or unit testing.

                    CompCert passes across the board by such criteria. That’s not to say I wouldn’t still want, as in EAL7 requirements or DO-178C, to throw more exhaustive testing and review at it to check for any errors in the specs. I won’t rely on only one method, as empirical data says to use a diverse array for highest confidence. It’s already orders of magnitude better in defect rate than other compilers, though.

                    1. 2

                      Formal methods is so intellectually dishonest.

                      I think you mean the compcert people.

                      Once I have a translator for language-X, I can claim to have created a formally verified compiler for that language, using the same arguments made by the authors of the Compcert papers.

                      I think you can only claim, starting from the C code, that what your frontend produces will be converted to machine code as faithfully as is technically possible using the best technology available.

                      Edit:

                      To your article’s point about undefined behaviors, I’m pretty sure it is possible to prove the absence of those problems in C code using Frama-C and ACSL. The problems will always be pushed to the boundaries until you have done this for the entire system.

                      1. 0

                        I think you mean the compcert people.

                        No, I mean formal methods people in general. The compcert people have done some interesting work, but they are not advertising what they have actually done.

                        … undefined behaviors, I’m pretty sure it is possible to prove absence…

                        I’m not sure about proving, but it should be possible to get a high level of confidence.

                        Yes, there are tools to do this, but CompCert didn’t (still doesn’t?) use them, or do it itself, these days.

                        1. 3

                          Try using Frama-C with the -wp-rte plugin to see what I mean about runtime errors. CompCert compiles valid C; other tools help you write valid C.

                          1. 2

                            “The compcert people have done some interesting work, but they are not advertising what they have actually done.”

                            The papers I read note the specific goals, limitations, and techniques used in their methods. They did publish a lot of stuff. Maybe I missed something. What do you think they’re lying about, and in which papers about the CompCert tool?

                            “No, I mean formal methods people in general”

                            Much of the formal methods literature done by pros lists their specific methods plus their dependencies and which parts can fail. I rarely see people outside the formal methods community talking about reliability or security do that. Looking for “TCB” is actually one of the techniques I use in Google searches to find the papers I submit. It’s that rare for it to pop up in papers outside either formal methods or high-assurance systems, despite it being the foundation of their claims.

                            There’s certainly zealotry to call out. There are certainly problems with peer review or reproducible results. These are true in most if not all sub-fields of CompSci, and maybe for experimental science in general, since a lot of articles in recent years complain about non-reproducible or fake research. For some reason, though, you only bring it up on formal methods threads. I wonder why you find the other threads acceptable, with their imprecise and/or non-reproducible claims, but drop into a formal methods thread to throw under the bus researchers who note strengths, weaknesses, and dependencies, plus publish quite a bit of source code. Quite a double standard you have, Sir!

                    1. 7

                      While you’re in a Scheming frame of mind, I really like The Little Schemer and The Seasoned Schemer. They have an informal presentation style but before you know it you’ve internalized quite a lot.

                      In a totally different vein, I’ve been reading Type Theory and Formal Proof and I think this is a remarkable text because it makes the subject so accessible and sprinkles in interesting asides about the development of the theory and implementation of type theory along the way.

                      1. 3

                        Thank you, I was scheming to get into those!

                      1. 3

                        The author did a good job of demonstrating Haskell’s facility for working with a closed set of types, but EDN is actually extensible: users can define their own tags that aren’t in the base set. Beyond the richer set of base types in EDN, the ability to add your own is the big advantage over JSON, and the author doesn’t address that in this article.
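
                        As a sketch of what that extensibility could look like on the Haskell side (a hypothetical modeling, not the actual library’s API), a `Tagged` constructor carrying the user-defined tag alongside its payload lets a closed Haskell type represent an open set of EDN tags:

                        ```haskell
                        -- A cut-down EDN value type; all constructor names here are illustrative.
                        data EDN
                          = Nil
                          | EdnString String
                          | List [EDN]
                          | Tagged String EDN -- user extension, e.g. #inst "1985-04-12"
                          deriving (Show, Eq)

                        -- Consumers dispatch on the tag; unknown tags stay representable
                        -- instead of being a parse error.
                        describe :: EDN -> String
                        describe (Tagged tag _) = "tagged value: #" ++ tag
                        describe Nil            = "nil"
                        describe (EdnString s)  = "string: " ++ s
                        describe (List xs)      = "list of " ++ show (length xs) ++ " items"
                        ```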

                        1. 1

                          Is there some form of ‘extensible types’ in Haskell, potentially as part of a language extension?

                        1. 0

                          That was even rougher than the last time. That function fails in way more cases than it succeeds. So maybe, just maybe, Maybe isn’t that bad.

                          The unstated assumption seems to be that it’s bad for a program to error out when it receives invalid input, which is hardly uncontroversial. IMO the thing that’s rough here is that the type system mandates we write out twelve degenerate error cases just to compile.

                          1. 2

                            I think the point was to demonstrate that there are a lot of implicit failure cases. Haskell doesn’t require you to provide a match clause for every data constructor. The author could have written the following with a wild-card pattern to error for all unsupported types:

                            ```haskell
                            clmap :: (EDN -> EDN) -> EDN -> EDN
                            clmap f edn =
                              case edn of
                                Nil -> Nil
                                List xs -> List $ map f xs
                                EdnVector xs -> List . map f $ toList xs
                                EdnSet xs -> List . map f $ toList xs
                                _ -> error "unsupported operation"
                            ```

                            But a reason to avoid wild card patterns is that if you’re explicit about how you handle each pattern then the compiler can automatically check that you’ve covered all possible cases and let you know if you’ve forgotten one (like if you were to update the definition of EDN with a new constructor).
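
                            To make that trade-off concrete, here is a tiny, invented Haskell example (the `Shape` type is mine, not from the article) showing what explicit matches buy you:

                            ```haskell
                            {-# OPTIONS_GHC -Wincomplete-patterns #-}

                            -- A small sum type, invented for illustration.
                            data Shape = Circle Double | Square Double

                            -- Every constructor is matched explicitly, so GHC can verify coverage.
                            area :: Shape -> Double
                            area s = case s of
                              Circle r -> pi * r * r
                              Square w -> w * w

                            -- If Shape later gains a constructor (say `Rect Double Double`),
                            -- GHC warns that this match is non-exhaustive, pointing at the gap.
                            -- A wildcard `_ -> error "unsupported"` would compile silently and
                            -- only fail at runtime.
                            ```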

                          1. 7

                            I really wish that, in defense of spec and dynamic types, Rich Hickey et al. would use the same defense that Aria Haghighi used when he introduced Schema: sure, you can verify these same kinds of things in a dependently typed language, but those are currently much harder to use, and these dynamic checks get you most of the way there with less effort.

                            1. 8

                              There have been so many attempts to make the dream of the free software phone a reality that have failed or come up short. I really want this project to succeed.

                              1. 2

                                I wish the fairphone was a good free software phone. But it runs android and even the “open source” version ships binary blobs and the development process is sealed off from the public :(

                                And while purism has a much better approach to open source, the hardware supply chain is much less transparent than fairphone’s.

                                I support both efforts, but ideally I would like to buy a product that meets both of these goals.

                                1. 1

                                  Basically the Purism software should run on a Fairphone?

                                  1. 1

                                    Not sure; what about the binary blobs that are required on the FP? (see https://code.fairphone.com/projects/fp-osos/dev/fp2-blobs-download-page.html)

                              1. 1

                                Would this help with learning OCaml?

                                1. 3
                                  1. 1

                                    I found it relatively easy to pick up OCaml after learning Standard ML first (I learned SML at university), and the overall feel of the languages is pretty similar. But if your goal is really to program in OCaml in the near future, I think starting via an SML tutorial might end up frustrating in the short term, because there are just enough miscellaneous differences to get bogged down in things that don’t quite work the same way in OCaml.

                                  1. 3

                                    More OCaml : Algorithms, Methods & Diversions by John Whitington and Handbook of Practical Logic and Automated Reasoning by John Harrison. Both are excellent. The former has been a good introduction to OCaml for people familiar with functional programming and what I’ve got through of the latter has been a pleasant refresher on logic. I’m excited to start getting to the juicy parts of Handbook where you start getting to implementing an actual theorem prover.

                                    1. 1

                                      Every reason he lists is a reason to make packages with commonly used code. They are not reasons to have a package per function. The push for a package per function is unique to JavaScript and the browser: its lack of dead-code optimization intersects with the need to ship less stuff to the browser.

                                      1. 3

                                        As it turns out, Joe Armstrong of Erlang once wrote a post to the Erlang mailing list in which he said:

                                        I’m proposing a slightly different way of programming here. The basic idea is:

                                        • do away with modules
                                        • all functions have unique distinct names
                                        • all functions have (lots of) meta data
                                        • all functions go into a global (searchable) Key-value database
                                        • we need letrec
                                        • contribution to open source can be as simple as contributing a single function
                                        • there are no “open source projects” - only “the open source Key-Value database of all functions”
                                        • Content is peer reviewed

                                        i dunno. maybe it’s not so crazy? i haven’t decided.

                                        1. 2

                                          I’m getting close to a module per function in my Scala projects. I want to avoid having circular dependencies between classes (with a few specific exceptions). Having them in separate modules with explicit dependencies between them enforces that.

                                          1. 1

                                            If you’re running it through the Closure compiler, you can indeed get dead-code removal (see docs). Of course, that’s not common practice.

                                            1. 2

                                              Rollup.js also does this

                                              1. 2

                                                As does Webpack 2 (currently in beta). However, both Rollup and Webpack 2 require the use of ES6 modules in order to take advantage of their tree-shaking features.

                                              2. 1

                                                Yes, I was a little abbreviated there, but Closure does give you this, provided all the rest of the JavaScript is also in Closure. For most of the JavaScript ecosystem that isn’t true, so it’s not really a solution either.

                                            1. 4

                                              This article is misleading: there are far more illegal numbers than it implies. Millions (likely billions) of numbers are illegal to possess without paying for them. ;)

                                              1. 1

                                                Makes me wonder if there’s a file sharing community that exchanges numbers representing files rather than the files themselves.

                                              1. 2

                                                I understand Unikernels in the context of VMs, but what are they in the context of a container? Isn’t a container just running a process in some more secure context? What do Unikernels offer containers?

                                                1. 11

                                                  Full circle. Containers are like VMs without kernels. Now we are putting kernels into containers.

                                                  VM - kernel + kernel = bold new vision for the future.

                                                  1. 1

                                                    A unikernel isn’t like the kernel in a VM - it’s not separate from the process.

                                                  2. 1

                                                    Much smaller interface (and therefore attack surface) between the inside and the outside, but without the overhead of that interface being the x86 interface. I think the distinction between VM and container largely goes away (e.g. if you’re running Xen then you’re still running a full OS in dom0, so it’s largely equivalent to containerization), but that’s fine.

                                                    1. 1

                                                      If I understand this post correctly (http://unikernel.org/blog/2015/unikernels-meet-docker/), it sounds like you can now use Docker to build applications composed of containers running a conventional OS, Rumprun unikernels, or language-specific unikernels like Mirage.

                                                    1. 11

                                                      I would be interested to see a comparison of dynamically typed languages with statically typed languages like Haskell and OCaml rather than C#/Java. As someone who programs in Ruby for work and recently completed the OCaml MOOC, I suspect the productivity differences would be much smaller.

                                                      1. 3

                                                        How were your experiences with the OCaml MOOC? Do you have any insights that you’d share with others?

                                                        1. 3

                                                          I enjoyed the course a great deal but I recommend it with reservation. I’d written some elementary Haskell before implementing H99 exercises and read John Whitington’s OCaml From The Very Beginning before taking this course so I was already familiar with writing tiny, beginner-level, typed functional programs. For me it was a fun collection of exercises that familiarized me with writing small programs in OCaml. I particularly enjoyed writing the arithmetic interpreter and the toy database.

                                                          But sometimes I struggled to understand the instructions for the exercises and without the benefit of the community forum I would have stayed stuck. From reading the frustrated posts of some students it seemed like those without prior exposure to functional programming were not having a good time. The pacing also felt uneven, some weeks I finished the exercises in hardly any time at all while others took a great deal of time.

                                                          So I think it was a good course for motivated students with some prior experience with functional programming basics like map and fold. But someone with no previous functional programming experience might want to look at other resources first. My favorite is The Little Schemer, which isn’t about typed functional programming but does a tremendous job getting you comfortable with recursion and higher order functions. I also enjoyed OCaml From The Very Beginning and think that would also be a good introduction.

                                                          I saw that the organizers of the course are planning a second edition. Hopefully they can even out some of the rough patches and make it a more enjoyable experience to programmers brand new to functional programming.

                                                      1. 1

                                                        Why is there a Hacker News bookmarklet at the end? Were they expecting to end up there when they posted?

                                                        1. 2

                                                          I think so. That’s where I first found this author’s writing and when I read HN I would regularly see his writing there.

                                                        1. 6

                                                          Can’t we all just use nix?

                                                          1. 5

                                                            I don’t think those firmly in the source-based workflow camp will be persuaded to switch to Nix.

                                                            1. 4

                                                              What’s a “source based workflow”? (Asking because Nix itself is source-based and can optionally use binary caches)

                                                          1. 3

                                                            I just finished the first book of the Southern Reach trilogy. It was completely enthralling from the first page. Reminded me at times of The Thing and also of Andrei Tarkovsky’s Stalker.

                                                            As soon as it arrives, I’ll be reading the latest in the Little books, The Little Prover. https://mitpress.mit.edu/books/little-prover.

                                                            1. 3

                                                              Keep going with the Southern Reach trilogy, it gets even better.

                                                              I’d also recommend checking out Roadside Picnic (the book that Stalker was based on) and Solaris by Stanisław Lem.

                                                              1. 2

                                                                I’ve hardly been able to put the Southern Reach books down. Just started the final book, Acceptance. And thanks for the recommendations, I never knew those films were based on novels!

                                                            1. 6

                                                              I’ve been regularly attending the talks in San Francisco and they’ve been consistently excellent. I highly recommend checking out your local chapter if you haven’t already.

                                                              1. 2

                                                                Yah, definitely looking forward to Ben’s discussion of Google’s “Spanner”.

                                                              1. 2

                                                                What a surprise! This talk is filled with humor, history and insight. Not at all what I was expecting from the title. Among the things I found most fascinating was that the paper most frequently cited to perpetuate the waterfall model of software engineering actually argued that that model was, in its own words, doomed.

                                                                Well worth the time to watch.

                                                                1. 3

                                                                  Um. Does it mean Unix-like systems are illegal now? It’s all Bell Labs' APIs, after all.

                                                                  1. 7

                                                                    No - a lot of that was decided as part of previous cases; https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc._v._Berkeley_Software_Design,_Inc. was the original lawsuit in 1992, which settled that BSDs are not derived from Bell Labs Unix. Despite several other lawsuits, not much has materially changed since then, and https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_controversies#UNIX_copyrights_ownership has a good summary of where things have stood since 2008.

                                                                    The particularly upsetting thing about this latest holding is that it affirms the novel element of claiming that copyright applies to symbol names (if one believes the rumors on Twitter, it applies even to type signatures with names omitted, but no technically-savvy lawyers seem to have written anything about that just yet). What that means is that any independently-written code which links against somebody else’s copyrighted code through its published API is, apparently, suddenly a derivative work. This is regardless of static or dynamic, since it’s the names that are the infringing element. This also applies to clean-room re-implementations of an API.

                                                                    This assertion was not made in the earlier cases, and it’s dubious that they could be re-tried in light of the changing law, but I’m not a lawyer.

                                                                    It’s extremely doubtful that the court had any understanding that if anyone takes its ruling seriously, it is now impossible to write software that interoperates with other software until a licensing framework that has never before existed is put into place.

                                                                    I should probably refrain from speculating on what happens next.

                                                                    Edit: The Lobsters Markdown code apparently doesn’t include a trailing period as part of a link; this one needed it. It’s now in an explicit […](…) construct.

                                                                    1. 4

                                                                      According to the EFF, “Today’s decision doesn’t mean that Oracle has won the lawsuit. The case will now return to the district court for a trial on Google’s fair use defense”. Fingers crossed for a ruling in favor of fair use.

                                                                      https://www.eff.org/deeplinks/2015/06/bad-news-supreme-court-refuses-review-oracle-v-google-api-copyright-decision

                                                                      1. 1

                                                                        Yeah. Which is good, certainly, but the holding that was affirmed is almost certainly more important than who pays penalties to whom, and stands even if the parties settle. That said, the court that issued it doesn’t necessarily set precedents for other circuits, and this case being appealed to it was only possible due to some unique history.

                                                                      2. 3

                                                                        Those lawsuits were about source code though, which is not what I understand as APIs. I was under the impression that creating compatible software (thus, compatible API reimplementations you mention) was specifically legal, and Unix-alikes are full of specifically that (though it’s a POSIX standard, which may or may not matter, IANAL).

                                                                        Also, what about GNU and other java implementations? Are they any different from Android in this respect?

                                                                        1. 5

                                                                          This time around, the case was specifically about that distinction, since the alleged infringement was, in fact, a clean reimplementation. I’m sure if there were any existing case law about that, it would have come up in the filings - and perhaps it did; I have yet to see commentary which goes into any real depth, and the filings are hundreds of pages long, and excuses excuses. :)

                                                                          So I can’t answer as to whether reimplementation was explicitly legal until now, vs. legal by default because nobody argued against it before. The obvious software I can think of that it would have been tested on is Wine, which of course reimplemented the Windows APIs at a time when there was very little awareness of what open-source was at all. But from a quick search, it doesn’t appear they’ve ever been sued (difficult to tell for sure, since there have been a lot of lawsuits about, you know, the alcoholic beverage). At the risk of being speculative, I believe Microsoft would have sued Wine if they thought they could prevail. I know that the project has always been extremely careful about not using information that one has to sign an agreement with Microsoft to get, and I’m sure that helps.

                                                                          1. 3

                                                                            nobody argued against it before

                                                                            I think there was something about file formats (MS Word?) and/or protocols (ICQ?) being legally reimplementable, because competition free capitalism market king. But not APIs? That’s peculiar. But, as a non-USAian non-lawyer, I don’t know what I’m talking about.

                                                                            Wine, which of course reimplemented the Windows APIs at a time when there was very little awareness of what open-source was at all

                                                                            Initial release 1993? Please don’t say “open-source” about free software projects that started five years before the term existed :)

                                                                            Around that time Stallman was advocating for boycotting Apple, who previously sued Microsoft for daring to have a window system, a concept Apple stole from Xerox PARC. But X and OS/2 were alright, for some reason. That suit, however, was about the copyright on a GUI’s look and feel rather than on an API, so it probably doesn’t apply directly.

                                                                            1. 3

                                                                              The area around protocols seems to have gotten murkier, too. In the ‘90s, AOL was not happy about gaim, but they didn’t think they had a case regarding the protocol itself, so they limited themselves to trademark complaints about the name (one reason it was later renamed to pidgin), and to mucking with the protocol periodically to break third-party clients. But fast-forward to today, and you have companies like Snapchat and WhatsApp aggressively using the DMCA to take down third-party clients. Probably AOL would’ve done something similar today.

                                                                              1. 2

                                                                                Sorry. Point taken! And yes, I remember the GNU-Apple boycott. It was the reason I couldn’t start to learn about GNU software for roughly the next eight years. :)

                                                                            1. 1

                                                                              Your paste has the same problem - the trailing period isn’t included as part of the link. I fixed it above with […](…), but that’s not ideal. Obviously, matching URLs in text is a hard problem, and I don’t see a quick fix to the parser in this case.

                                                                              1. 2

                                                                                You forgot to add the period . to the (…) URL section of your link; you only added it to the displayed text in […]. Your link is still broken. Here’s a working one.

                                                                                In general, you can avoid duplicating the URL by using <…> instead of […](…):

                                                                                <https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc._v._Berkeley_Software_Design,_Inc.>
                                                                                

                                                                                This also prevents the _ underscores in the displayed text from italicizing the text between them, which happened in your link.

                                                                                However, when rendered on Lobsters, the above <…> syntax does not handle the ending period correctly. I have reported this bug in Lobsters.

                                                                                1. 1

                                                                                  If you include trailing full stops you’ll get bad links in other cases. Honestly I think people should expect to have to use explicit markdown in cases where the URL is funny like that. Arguably Wikipedia should avoid having their URLs end in full stops - even though they’re technically allowed, they’re problematic for users.

                                                                                  1. 1

                                                                                    A nice example of how technical issues are usually also social issues. :)

                                                                                    Anyway, I don’t have a strong opinion as long as there’s a way I can make it work. I wish the failure had been immediately obvious, though.

                                                                                  2. 1

                                                                                    Argh!

                                                                                    1. 1

                                                                                      Indeed :(

                                                                            1. 5

                                                                              I actually just updated my key to 4096 bits and am surprised at how small a percentage of GitHub users have done the same. I suppose 2048 bits is the default when using ssh-keygen.
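
                                                                              For reference, a minimal sketch of doing the same (the path and comment below are illustrative; `-b 4096` overrides ssh-keygen’s 2048-bit RSA default):

                                                                              ```shell
                                                                              # Generate a 4096-bit RSA key; -b overrides the 2048-bit RSA default.
                                                                              # The path and comment are illustrative, and -N "" sets an empty
                                                                              # passphrase so the command runs non-interactively.
                                                                              ssh-keygen -t rsa -b 4096 -N "" -C "you@example.com" -f /tmp/demo_rsa_4096 -q

                                                                              # The first field printed by -l is the key's length in bits.
                                                                              ssh-keygen -l -f /tmp/demo_rsa_4096.pub
                                                                              ```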

                                                                              1. 4

                                                                                Me too, especially since their tutorial on key generation uses 4096-bit keys.

                                                                                1. 3

                                                                                  It’s not possible to generate ECDSA keys with the Apple-shipped OpenSSH on OS X (confirmed on 10.7 through 10.9), which is annoying:

                                                                                      % ssh-keygen -t ecdsa
                                                                                      unknown key type ecdsa
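
                                                                                  As an aside, on builds new enough to have the flag (OpenSSH 6.3 and later, so not these Apple-shipped versions), `ssh -Q key` lists the key types the installed binaries actually support, which is a quick way to check before hitting that error:

                                                                                  ```shell
                                                                                  # List the key types this OpenSSH build supports. Requires OpenSSH
                                                                                  # 6.3+; the older Apple-shipped builds discussed above lack -Q.
                                                                                  ssh -Q key
                                                                                  ```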