Sounds similar to what thrift does (I believe thrift has a JSON transport option?) - what are the advantages of this over a thrift implementation that generates the same kind of type information?
What is it? The README is not very explanatory, and the website (http://www.typed-wire.org) is empty.
I think this is meant to be language-agnostic serialization codegen, but for arbitrary algebraic datatypes (at least that’s what I took away from the README). Think protocol buffers without the emphasis on binary serialization.
That’s exciting. Protocol buffers really did bring something new to the self-describing-data table, with the ability to generate code targeting a variety of languages. I have a strong personal interest in the problem area, and I’m happy to see the codegen part getting played with more broadly.
Agreed. And I think there is plenty of room for designing (de)serialization codegen that targets ADTs specifically. As much as I like protocol buffers, their schemas are inherently coupled to the types expressible in Java/C/Python, which can make working with them in other languages (Scala, in my experience) rather inorganic.
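To make the ADT angle concrete, here’s a minimal Haskell sketch of what such codegen is aiming at. Everything here is my own invention for illustration — the `Shape` type, the `tag`/`radius`/`width`/`height` field names, and the encoding scheme are assumptions, not typed-wire’s actual output:

```haskell
-- Hypothetical sketch (not typed-wire's actual output): the kind of
-- algebraic datatype such a tool might take as schema input, and the
-- kind of constructor-tagged JSON encoder it might generate for it.

data Shape
  = Circle Double
  | Rect Double Double
  deriving (Show, Eq)

-- A generated encoder typically tags each constructor so that decoders
-- in other target languages can dispatch on the tag field.
encodeShape :: Shape -> String
encodeShape (Circle r) =
  "{\"tag\":\"Circle\",\"radius\":" ++ show r ++ "}"
encodeShape (Rect w h) =
  "{\"tag\":\"Rect\",\"width\":" ++ show w ++ ",\"height\":" ++ show h ++ "}"

main :: IO ()
main = putStrLn (encodeShape (Rect 1.0 3.0))
```

Sum types are exactly where protobuf-style schemas get awkward: a `oneof` approximates this, but the constructor-tag dispatch has to be rebuilt by hand in each target language.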
Honest question, as I’m new to Haskell. I remember reading several comparisons of parser combinator libraries in Haskell, and the general consensus has been: use attoparsec when performance matters, and trifecta when error messages matter. I presume data (de)serialization is one of those areas where performance does matter?
It’s a bit weird to me that it uses parsec. I would expect/want attoparsec here.
I think it uses aeson for parsing json. It looks like it uses parsec for parsing the IDL.
Oh! I see. It doesn’t have aeson listed as its own dependency, but users would need it.
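For what it’s worth, the IDL-parsing side mentioned above is a reasonable fit for parsec: schema files are small and parsed once, so error quality matters more than throughput. A minimal sketch, assuming an invented record syntax (not typed-wire’s actual IDL):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Run a parser, then skip any trailing whitespace.
lexeme :: Parser a -> Parser a
lexeme p = p <* spaces

ident :: Parser String
ident = lexeme ((:) <$> letter <*> many alphaNum)

symbol :: String -> Parser String
symbol = lexeme . string

-- Parses declarations like: data Point { x: Int; y: Int }
-- (syntax invented here for illustration only)
record :: Parser (String, [(String, String)])
record = do
  _      <- symbol "data"
  name   <- ident
  _      <- symbol "{"
  fields <- field `sepBy` symbol ";"
  _      <- symbol "}"
  pure (name, fields)
  where
    field = (,) <$> (ident <* symbol ":") <*> ident

main :: IO ()
main = print (parse record "" "data Point { x: Int; y: Int }")
```

`parse` returns `Either ParseError`, and parsec’s errors carry source positions — useful for a schema compiler, and the kind of thing attoparsec trades away for speed on bulk input.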