I’m seriously impressed by the way they abstracted away the significant incidental complexities of data storage, concurrency, ordering, and composition. I hope to see a lot more of this quality in programming.
Yes, I think these particular technical feats need some kind of non-strict evaluation (Haskell) or very powerful macros (Lisp). Some way for the program to introspect at a very fundamental level.
I’m not suggesting these languages should be used for device drivers or anything, though. When I say “quality”, I mean that they took a very thorough look at their problem and solved it thoroughly by making very smart tradeoffs. I want to see more business goals achieved with very thoughtful technical decisions.
It seems like this is only possible in a few languages (maybe only Haskell, even). Do you think there is a reason to believe that is not true?
To get the same level of safety and composability, you’d be fighting uphill with anything that isn’t non-strict by default or doesn’t have explicit effects. Another piece is HKTs, which ML modules can replace, but it can be harder to see the relationship between the types in the latter case. It also seems like it would be pretty noisy to mix and discharge all the constraints in a module-based system.
You could fake it, maybe, in another language. But with the same niceness, composability, and safety? I’m not sure how close you could get.
I don’t see how strictness comes into it. This seems clearly applicable to OCaml, with no compromises I can see. I’ve talked a bit with Simon about this, and nothing Haskell-specific came up.

I’ve seen similar libraries in OCaml, and they work well from a UX perspective. Honestly, I don’t know what you’re talking about.
I don’t think it’s Haskell-specific. Purity is not a binary so much as a question of what effects you consider equivalent. Haskell is famous for explicitly sequencing I/O effects, but how often do you actually care about explicitly ordering all I/O? If you don’t care about that (which is not to say you don’t care about sequencing e.g. your HTTP API calls - just that for random log lines or local file reads you don’t care), Scala can do everything Haskell can and more (and I find the way Haskell hides the distinction between a value and a thunk for a value to be a much more important inequivalence). And OCaml or F# can do the same if you’re willing to use modules to replace HKTs.
I am slowly trying to put together a blog post with the thesis that functions are the biggest problem with modern functional programming, because you can’t compare functions for equality. We see an aspect of this in the presentation with their “applicative where possible” hackery. I think we may see a move towards free objects that make equational reasoning possible. If you can form an actual value - not a monadic value that might be hiding Turing-complete computation - to express your interpreter command, then it’s much easier to implement a reduction to canonical form (eliminating e.g. redundant fetches of the same data) before interpreting.
I look forward to that blog post.

Thanks - don’t hold your breath, mind; it could be months or years yet. At this stage it’s more an idea I’m turning around and trying to gather examples of. I think that free objects idea is one of the most important pieces though.
I am slowly trying to put together a blog post with the thesis that functions are the biggest problem with modern functional programming, because you can’t compare functions for equality…
I’m having trouble understanding what you’re saying here; maybe you can unpack it a little bit? I’m not sure why being able to test equivalence is important here.

I’m not sure what intensionality or extensionality would have to do with your question either.
I see the modern functional culture as being about values, declarative code, denotational rather than operational semantics. And a commonly cited advantage of functional programming is being able to use equational reasoning in place of testing.
Functions obstruct these goals. A function is inherently operational (the only thing you can do with it is call it), and while you can still reason equationally in the presence of functions, you can’t actually test by comparing directly for equality. Concretely I’m thinking about monads, and thinking about examples like this one. The idea of something like this is that you have some piece of complex business logic that involves redis calls, and you want to test the logic separately from testing the actual access to redis, so the business logic evaluates to some value and there’s a separate interpreter that runs it. But the testing style there is still very operational - the way you test that your value is correct is by running it in a test interpreter, which is really no different from swapping out a different implementation of an interface. Whereas what we really want to do is to canonicalize the value by some kind of (pure) “normalization by evaluation” and then compare the value for equality against what we expect the complex business logic to evaluate to in this case.
No doubt there are things I’m missing - at the very least I want to try putting this into practice before drawing sweeping conclusions. But I think the idea has promise.
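To make the testing-by-equality idea concrete, here is a minimal sketch (all names hypothetical - this is not the API from the linked example): the business logic builds a plain command value containing no embedded functions, a pure normalize pass eliminates redundant fetches, and the test compares canonical forms directly instead of running an interpreter.

```scala
// Hypothetical command ADT: a straight-line list of fetches ending in a result.
// No functions inside, so structural equality is meaningful.
sealed trait Command
case class Fetch(key: String, rest: Command) extends Command
case class Done(keys: List[String]) extends Command

// Pure normalization: drop fetches for keys we've already issued
// (redundant-fetch elimination, done before any interpreter runs).
def normalize(c: Command, seen: Set[String] = Set.empty): Command = c match {
  case Fetch(k, rest) if seen(k) => normalize(rest, seen)
  case Fetch(k, rest)            => Fetch(k, normalize(rest, seen + k))
  case Done(ks)                  => Done(ks)
}

// "Business logic" that accidentally fetches the same key twice:
val program  = Fetch("user:1", Fetch("user:1", Fetch("user:2", Done(List("user:1", "user:2")))))
val expected = Fetch("user:1", Fetch("user:2", Done(List("user:1", "user:2"))))

// Test by equality of canonical forms - no test interpreter needed.
assert(normalize(program) == normalize(expected))
```

The point of the sketch is only that once the value is function-free, “run it and observe” can be replaced by “normalize and compare”.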
I still don’t understand your claim about functions obstructing reasoning or testing, nor the relationship with monads. A monad is a data type, and something that has a monadic type is a value. You can compare those values for equality. There are functions that operate on monads, but I don’t see how they play a role in that.

The IO monad is not special in that either. You can compare two values that have an IO type. The only time when side effects, and the ordering of them, become relevant is when you run the program that the monad represents.

At which point are functions interfering, and what is the alternative that doesn’t?

And what does it have to do with the question of whether Haskell (or similar languages) supports the style described in the post better than more widely used languages?
Apologies, I missed a word: I was thinking specifically of free monads, which can’t be compared for equality without executing, because they contain functions. A lot of other monads have the same issue, but of course there are some monads like Writer that you can just compare directly for equality. I’m thinking the “free monad + interpreter pattern” for Haxl-like operations would be better replaced with some other (free?) structure that could be compared for equality.
It affects language choice because Haskell has a strong tradition of anonymous functions where other languages might prefer callback objects, and of using function composition fairly freely. In a traditional-OO “command pattern” approach to the same problem you would be unlikely to allow arbitrary functions in the “command”. One of the things I want to try in my native Scala (particularly with the forthcoming SAM support) is using only identified objects or classes that implement function interfaces.
As a simple example, too, consider Free Identity. It’s morally equivalent to Identity, which has clear equality, and even structurally equivalent to just (Int, _), which is also clearly equatable.
Free monads do not (necessarily) contain functions. And you can define equality. See here: https://hackage.haskell.org/package/free-4.12.4/docs/src/Control-Monad-Free.html#line-118
Hmm. In the implementation I’m used to, 2.point map {_ + 2} is Suspend(2, {x => Immediate(x + 2)}) which wouldn’t necessarily even compare equal to itself, never mind to 4.point i.e. Immediate(4). I guess Haskell being lazy obviates the need for trampolines and so allows you to “see through” suspend steps, but as soon as you have a “command” like Get (in Reader even) you can’t compare for equality “past” that (unless the topology of what you’re reading allows you to compare functions from that).
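The root of the problem is visible without any free-monad machinery: in Scala (as in most languages), functions only get reference equality, so two builds of the “same” suspended step never compare equal. A small illustration, with a hypothetical Suspend mirroring the shape quoted above:

```scala
// Two textually identical closures are distinct objects:
val f = (x: Int) => x + 2
val g = (x: Int) => x + 2
assert(f != g)          // functions compare by reference only
assert(f(40) == g(40))  // extensionally they agree, but equality can't see that

// So any node that stores a continuation, like Suspend(2, {x => Immediate(x + 2)}),
// inherits the problem: two constructions of the "same" computation differ.
case class Suspend[A](value: Int, k: Int => A)
assert(Suspend(2, f) != Suspend(2, g))
```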
Sure, trampolines break you as do functors with functions. Neither is really critical to a free monad, though.
I think you’re fighting a losing battle avoiding functions everywhere. Having computationally reasonable equality is nice (functions have computationally unreasonable equality, of course), but avoiding functions means (a) representing binding syntactically and (b) having to quotient your syntactic/structural equalities down to the semantics of whatever language you’re building. Those steps have their own significant expense.
Note that equality in OO commands is not equivalent to equality of the functions they represent; it is just equality of the objects, which in turn means comparing pointers or whatever the equals method does. It is possible to have two commands doing exactly the same thing but comparing as different (how depends on the concrete implementation of equals). Whether this improves reasoning about programs is not so clear to me, as we are deviating from mathematical concepts into implementation-specific details, which in my view is a more difficult arena to move in.

If comparing functions were something you really needed, the same trick could be applied in any other language. For Haskell you could just have some indexed structure as a function registry and pass functions indirectly as positions in the registry, thus emulating OO style. What that solves is still out of my reach, sorry :)

Note also that, mathematically speaking, equality of functions is undecidable, so any way of comparing functions that any language implements is necessarily an approximation to it. The point being, again, commands are a “bastardised” version of functions, with some apparent properties that can be misleading.
Note that equality in OO commands is not equivalent to equality of the functions they represent; it is just equality of the objects, which in turn means comparing pointers or whatever the equals method does. It is possible to have two commands doing exactly the same thing but comparing as different (how depends on the concrete implementation of equals)
With a correct implementation of equals that’s not true. If you can normalize the command then you can compare it for true semantic equality. E.g. if your command is an Int => Int expressed as a tree of + and * operators and constant values, then you can put that into canonical form as a polynomial and then compare for actual equality by comparing the coefficients.
I’m not advocating using a table of pointers. I’m advocating eschewing functions in favour of data structures that can be evaluated as functions but which have additional structure on top of that.
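As a sketch of that normalization (hypothetical ADT, assuming single-variable integer polynomials): the Int => Int is a tree of +, * and constants, and its canonical form is the map from degree to coefficient, so semantically equal trees compare equal.

```scala
sealed trait Expr
case object X extends Expr                    // the function's argument
case class Const(n: Int) extends Expr
case class Add(a: Expr, b: Expr) extends Expr
case class Mul(a: Expr, b: Expr) extends Expr

// Canonical form: c0 + c1*x + c2*x^2 + ... as a Map from degree to coefficient.
type Poly = Map[Int, Int]
def trim(p: Poly): Poly = p.filter(_._2 != 0)
def plus(a: Poly, b: Poly): Poly =
  trim((a.keySet ++ b.keySet).map(d => d -> (a.getOrElse(d, 0) + b.getOrElse(d, 0))).toMap)
def times(a: Poly, b: Poly): Poly =
  trim((for ((da, ca) <- a.toSeq; (db, cb) <- b.toSeq) yield (da + db, ca * cb))
    .groupBy(_._1).map { case (d, ts) => d -> ts.map(_._2).sum })

def canon(e: Expr): Poly = e match {
  case X         => Map(1 -> 1)
  case Const(n)  => trim(Map(0 -> n))
  case Add(a, b) => plus(canon(a), canon(b))
  case Mul(a, b) => times(canon(a), canon(b))
}

// (x + 1) * (x + 1) vs x*x + 2*x + 1: different trees, equal canonical forms.
val lhs = Mul(Add(X, Const(1)), Add(X, Const(1)))
val rhs = Add(Mul(X, X), Add(Mul(Const(2), X), Const(1)))
assert(canon(lhs) == canon(rhs))
```

This only works because the operation set is constrained enough to have a normal form - which is exactly the objection raised in the reply below about Turing-complete representations.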
With a correct implementation of equals that’s not true. If you can normalize the command then you can compare it for true semantic equality. E.g. if your command is an Int => Int expressed as a tree of + and * operators and constant values, then you can put that into canonical form as a polynomial and then compare for actual equality by comparing the coefficients.
That is only true for a constrained set of possible operations. And it is also true for any language; there is nothing in, e.g., Haskell that prevents you from representing expressions with + and * as a tree. If you want to have a Turing-complete representation of commands (or functions), my claim still holds (in all languages).

Sorry if I sound stubborn; I am trying to understand the essence of the problem. I still don’t see what property objects have that solves the problem of function equality.
And it is also true for any language; there is nothing in, e.g., Haskell that prevents you from representing expressions with + and * as a tree
Sure - all languages are equivalent. But at the same time, syntax and culture guide you down a particular path. I think there’s a place for a language that has Haskell-style typing but guides you towards representing more as data structures rather than the liberal use of anonymous functions that Haskell encourages.
If you want to have a Turing-complete representation of commands (or functions), my claim still holds (in all languages).
Sure - I’m thinking about cases that don’t need to be Turing-complete (which I think is most programs, really).

Ok. I misunderstood the scope of your claim then.

I am still not convinced about your idea of favouring constrained, comparable operations, but at this point I think it is a matter of opinion.

I’m not convinced either! It’s an idea I want to investigate, not something I’ve got working yet.
If I have data structures f1 and f2 where f1 represents looping over an array and calling x on each element then looping over the array again and calling y on each element, and f2 represents the same thing except y is applied first, then x, and x and y are commutative, what is the canonical form that lets f1 and f2 compare as equal?
A data structure containing a set {x, y}. (Ideally in a language that won’t let you iterate over a set unless you can prove that the operation you’re doing is commutative).
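One hedged way to realize that canonical form (hypothetical representation, assuming the named operations are known to pairwise commute): a pipeline is a sequence of whole-array passes, and its canonical form forgets the order among commuting passes by counting them into a multiset.

```scala
// Hypothetical sketch: each pass applies one named operation to every element.
// We assume (by construction) that all the named ops pairwise commute.
sealed trait Op
case object AddOne extends Op   // _ + 1
case object AddTen extends Op   // _ + 10   (additions commute with each other)

type Pipeline = List[Op]

// Canonical form: a multiset of passes, here a Map from op to occurrence count.
def canon(p: Pipeline): Map[Op, Int] =
  p.groupBy(identity).map { case (op, xs) => op -> xs.size }

val f1 = List(AddOne, AddTen)   // loop applying AddOne, then loop applying AddTen
val f2 = List(AddTen, AddOne)   // the same passes in the opposite order

assert(canon(f1) == canon(f2))  // equal once order among commuting ops is forgotten
```

The hard part, of course, is the part in the parenthetical: getting the language to prove commutativity before admitting an op into the set.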
Scala can do everything Haskell can and more

That’s a strong claim, and not one believed by anybody I’ve known who knows Scala and Haskell very well.
(and I find the way Haskell hides the distinction between a value and a thunk for a value to be a much more important inequivalence)
“Hide” is an interesting way to put it, since you can reason about when something will be a thunk or a value pretty easily in GHC Haskell.
And OCaml or F# can do the same if you’re willing to use modules to replace HKTs.
Replacing HKTs isn’t the main problem with using ML Modules to do something like what Haxl is doing.
I am slowly trying to put together a blog post with the thesis that functions are the biggest problem with modern functional programming, because you can’t compare functions for equality.
What relevance do intensionality or extensionality have to @apy’s question?
I think we may see a move towards free objects that make equational reasoning possible.
Equational reasoning is possible with Haskell code without resort to free. Free objects, for the purposes of a working programmer, are a DSL pattern, not a remedy to all of the ills of PL design.
If you can form an actual value - not a monadic value that might be hiding Turing-complete computation - to express your interpreter command, then it’s much easier to implement a reduction to canonical form (eliminating e.g. redundant fetches of the same data) before interpreting.
This is the code-as-data point Hickey made years ago. It’s a good tactic to have in your bag, but not at all appropriate for what Haxl does nor does it seem relevant to @apy’s question.
It seems like this is only possible in a few languages (maybe only Haskell, even).
It’s hard to do in other languages, and it’s rarer. You absolutely can write reliable code in dynamic languages (see: Erlang, Clojure) but you need agreement among the developers that it’s important. You also need to invest a lot more effort into testing for dynamic languages, and coming up with good tests is hard, especially when you consider that a compiler can find the bulk of the random/stupid bugs (which is what most bugs, and even most costly bugs, are) in seconds.
I’m going to go out on a limb and say that Haskell is part of the success here, but it’s also that the developers have the autonomy and resources necessary to do things right. Those factors are intercorrelated, of course. If developers have that autonomy, they might choose to use different tools (e.g. OCaml, verified C) for application-specific reasons, but they probably won’t be slapping together enterprise-style Java classes.
I was thinking more about in which languages it’s actually possible to abstract away so much incidental complexity. Doing that in Java quickly becomes really leaky, IME. Even OCaml has trouble here (it’s hard to wrangle mutability).
abstracted away the significant incidental complexities of data storage, concurrency, ordering, and composition
I may be naive, but I am not sure how plain old SQL couldn’t be said to abstract these things away. A SQL select is merely a request for data satisfying a particular logical property (SQL itself being a logic language, though naturally problematic compared to an ideal logic language).

It does, but it’s also not Turing-complete, which Sigma is.

Why is that a problem? SQL has a variety of extensions that can solve many/most practical problems.
It’s a problem because it’s not the context of what’s being discussed with Sigma, which is a Turing-complete solution that still manages to hide away a lot of incidental complexity.
Given N functions running independently that may want to access data sources { D1, D2, … } concurrently, where access to D happens at an arbitrary place in N and can take appreciable time, the standard way of handling it is to abstract access to D behind a cache with either function, function-group, or preset-time lifetime. The only thing that I figure you get from reordering the cache fills to the front of a function or function-group is that none of the functions start until the cache is filled, which is irrelevant in the case of pure functions. Every language can employ a cache in this way. Some are less great at what happens when many functions are simultaneously waiting for a cache to fill, because sometimes running a function is relatively heavyweight; sometimes it’s not entirely clear when you want to invalidate the cache; and keeping an explicit cache might make you mad operationally. But those are orthogonal to whether or not a language has the possibility, IMO.
This is not about N functions running independently, though, from how I understand Haxl (which could be wrong). It’s actually taking a piece of code that is expressed in a sequential-looking way and reorganizing it such that data access can be batched (not just cached) for latency. And it looks like natural Haskell code because all of the “magic” is outside of the block one is writing. Doing the same in Java, for example, will produce very non-Java-looking code. So that’s really what my question is about: not whether I could brute-force something into working, but whether I can do it with the same elegance, hiding all of the complexity of what is happening underneath from the user.
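A very rough sketch of that reorganization (hypothetical types - Haxl’s real machinery is far more sophisticated): an applicative product merges the key sets of independent fetches, so a runner can satisfy both with a single batched round trip even though each fetch was written independently.

```scala
// Hypothetical mini-Fetch: a computation is either finished, or blocked on a
// set of keys with a continuation from the fetched results.
sealed trait Fetch[A]
case class Done[A](a: A) extends Fetch[A]
case class Blocked[A](keys: Set[String], k: Map[String, String] => Fetch[A]) extends Fetch[A]

def get(key: String): Fetch[String] = Blocked(Set(key), env => Done(env(key)))

// Applicative product: merges the key sets of two independent fetches,
// so both can be satisfied by one batched request.
def zip[A, B](fa: Fetch[A], fb: Fetch[B]): Fetch[(A, B)] = (fa, fb) match {
  case (Done(a), Done(b))                   => Done((a, b))
  case (Blocked(ka, fka), Blocked(kb, fkb)) => Blocked(ka ++ kb, env => zip(fka(env), fkb(env)))
  case (Done(a), Blocked(kb, fkb))          => Blocked(kb, env => zip(Done(a), fkb(env)))
  case (Blocked(ka, fka), Done(b))          => Blocked(ka, env => zip(fka(env), Done(b)))
}

// Runner: one batch per round, counting round trips to the data source.
def run[A](f: Fetch[A], source: String => String): (A, Int) = f match {
  case Done(a)        => (a, 0)
  case Blocked(ks, k) =>
    val env = ks.map(k0 => k0 -> source(k0)).toMap  // ONE batched request
    val (a, n) = run(k(env), source)
    (a, n + 1)
}

val prog = zip(get("user:1"), get("user:2"))  // written as independent fetches...
val (res, rounds) = run(prog, key => s"value-of-$key")
assert(rounds == 1)                           // ...but satisfied in a single batch
```

All the batching lives in zip and run, outside the block the user writes - which is the elegance being asked about above.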
There are a lot of talks about this project as well. Looking on YouTube for something like “haskell production facebook” should get you some of them.

This slide deck starts off talking about fighting spam with FXL. If anyone is interested in the original (2011) paper where FB introduces FXL, that is here. It’s quite an interesting read and expands on a lot of the details about their infrastructure that these slides only barely touch.