• 0 Posts
  • 65 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • It’s about making APIs more flexible, more permissive, and harder for clients to misuse. It’s a user-centric approach to API design, not something done to make life easier on the backend. If anything, it takes extra effort from backend developers.

    But you’d clearly prefer vitriol to civil discourse and have no interest in actually learning anything, so I think my time would be better spent elsewhere.



  • The semantics of the API contract are distinct from its implementation details (lazy loading).

    Treating null and undefined as distinct is never a requirement for general-purpose API design. That is, there is always an alternative design that doesn’t rely on that misfeature.

    As for patches: while it’s true that JSON Merge Patch assigns different semantics to null and undefined values, JSON Merge Patch is a worse version of JSON Patch, which doesn’t have that problem because, as I originally described, the semantics are explicit in the data structure itself. That’s a transformation you can always apply, and the sketch below shows what it can look like.
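    As a rough sketch of what “explicit in the data structure itself” can look like (the type and function names here are hypothetical, not from any real API), a Haskell sum type can name the three cases directly instead of overloading null vs. an absent field:

    ```haskell
    -- A minimal, hypothetical sketch: patch semantics made explicit in the
    -- data structure itself rather than encoded as null vs. absent field.
    data FieldPatch a
      = Keep     -- leave the field as it is (what an absent/undefined field usually encodes)
      | Clear    -- remove or null out the field (what an explicit null usually encodes)
      | SetTo a  -- overwrite the field with a new value
      deriving (Show)

    -- Applying a patch to the current (optional) value is then unambiguous.
    applyFieldPatch :: FieldPatch a -> Maybe a -> Maybe a
    applyFieldPatch Keep      current = current
    applyFieldPatch Clear     _       = Nothing
    applyFieldPatch (SetTo x) _       = Just x

    main :: IO ()
    main = do
      print (applyFieldPatch (SetTo "new name") (Just "old name"))  -- Just "new name"
      print (applyFieldPatch Clear              (Just "old name"))  -- Nothing
      print (applyFieldPatch Keep               (Just "old name"))  -- Just "old name"
    ```

    This mirrors what JSON Patch does with its explicit “op” field (e.g. “replace” vs. “remove”), whereas JSON Merge Patch has to overload null to mean deletion.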






  • No, you divide work so that the majority of it can be done in isolation and in parallel. Testing components together, if necessary, happens on integration branches as needed (which you don’t rebase, of course). Branches and MRs should be small and short-lived, with merges into the main branch happening frequently. Collaboration largely happens through developers branching off a shared main branch that is continuously updated.

    Trunk-based development is the industry-standard practice at this point, and for good reason. It’s friendlier for CI/CD and devops, allows changes to be tested in isolation before merging, and so on.



  • I’m not familiar with any special LLVM instructions for Haskell. Regardless, LLVM is not actually a commonly used backend for Haskell (even though it’s available), since it’s not great at optimizing the kind of code Haskell produces. Generally, GHC compiles Haskell straight to native code with its own native code generator.

    Haskell has a completely different execution model from imperative languages. In Haskell, almost everything is heap-allocated, though there may be some limited use of stack allocation as an optimization where it’s safe. GHC has a number of aggressive optimizations it can apply (that is, optimizations that are safe in Haskell thanks to purity but unsafe in other languages) to make this quite efficient in practice. In particular, GHC can inline a lot more code than compilers for imperative languages can, which very often eliminates the indirection of function calls entirely. The GHC commentary at https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/generated-code goes into much more depth on the execution model if you’re interested.
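    To give a flavour of that (a hedged sketch, not a claim about exact GHC output): with -O, inlining plus the list-fusion rules in GHC’s base library will typically collapse a pipeline like the one below into a single loop with no intermediate lists; the exact result depends on the GHC version and flags.

    ```haskell
    -- A pipeline of ordinary Prelude functions. With -O, GHC's inlining and
    -- list fusion typically turn this into one loop over [1 .. n], with no
    -- intermediate lists and no residual higher-order calls.
    -- (Exact behaviour depends on GHC version and optimization flags.)
    sumSquaresOfEvens :: Int -> Int
    sumSquaresOfEvens n = sum (map (^ 2) (filter even [1 .. n]))

    main :: IO ()
    main = print (sumSquaresOfEvens 1000000)
    ```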

    As for languages without such an execution model (especially imperative ones), it’s true that the overhead you describe can exist, which is why the vast majority of them use iterators to achieve the same effect without that overhead. Rust (where mapping, filtering, etc. are a pervasive part of the ecosystem) does this, for example, even though it’s a systems programming language with a strong focus on performance.
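    In Haskell terms (keeping this sketch in one language rather than dropping in Rust), the single-pass loop that a fused pipeline like sumSquaresOfEvens above, or an iterator chain in a language like Rust, conceptually reduces to looks something like this, with no intermediate collection ever built:

    ```haskell
    {-# LANGUAGE BangPatterns #-}

    -- A hand-written illustration of the single accumulator loop that a fused
    -- pipeline (or an iterator chain) conceptually reduces to. This is not
    -- generated code, just the shape of it: one pass, no intermediate list.
    sumSquaresOfEvensLoop :: Int -> Int
    sumSquaresOfEvensLoop n = go 1 0
      where
        go !i !acc
          | i > n     = acc
          | even i    = go (i + 1) (acc + i * i)
          | otherwise = go (i + 1) acc

    main :: IO ()
    main = print (sumSquaresOfEvensLoop 1000000)
    ```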

    As for the advantage, it’s really about expressiveness and clarity of code, in addition to eliminating the bugs that so often result from mutation.