@daftaupe @screwtape and succeeding!!
You guys have been doing awesome work these last few years, always improving on your previous work!
Always pushing #dsynth to its limits, stressing the kernel/scheduler/fs/etc. to new heights!
I just wish some pkgs didn't hit so heavy ( #ghc #rust ), and that maybe some of the bigger projects ( #Firefox ?) could someday lend some support, so you could have time to focus on other areas that are lacking 🖒
#dsynth #ghc #rust #firefox #runbsd #dragonflybsd
@julesh Haskell as defined by the report doesn't have unsafeCoerce. That's #GHC you are thinking of.
You can implement unsafeCoerce via the standard 2010 FFI, but you'd have to implement it in C (or something C ABI compatible).
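A minimal sketch of that route, with hypothetical names (hs_identity, coerce.c); the trick is that the foreign import is free to lie about the StablePtr's type parameter:

{- C side (coerce.c):  void *hs_identity(void *p) { return p; }  -}

import Foreign.Marshal (unsafeLocalState)
import Foreign.StablePtr (StablePtr, deRefStablePtr, newStablePtr)

-- Imported at a type that forgets what the pointer points to.
foreign import ccall "hs_identity"
  hsIdentity :: StablePtr a -> IO (StablePtr b)

-- unsafeCoerce in Haskell 2010 + FFI only (leaks the StablePtr; it's a sketch).
unsafeCoerce :: a -> b
unsafeCoerce x = unsafeLocalState (newStablePtr x >>= hsIdentity >>= deRefStablePtr)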
Agda and Idris both have "believe me" escape hatches from their type system.
I just really don't get the point of discarding soundness though. That's the whole purpose of a type system! Well typed programs don't go wrong *because* the type system is sound.
@emc2 IIRC actually using a class that way is a bit of an anti-pattern, as the (universally overlapping) instance is banned by the Haskell report, so everyone has to write instances by hand or go via a newtype. Plus, the no-function class never gets introduced during type inference/checking, only propagated.
In #GHC you would use a constraint synonym or constraint family instead.
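For the curious, a tiny sketch of both options in GHC (names made up):

{-# LANGUAGE ConstraintKinds, TypeFamilies #-}

import Data.Kind (Constraint)

-- A constraint synonym: one name for a conjunction of constraints.
type Serial a = (Show a, Read a)

-- A constraint family: the required constraints, computed per type.
type family Needs t :: Constraint where
  Needs Int = ()          -- nothing extra needed
  Needs [a] = Serial a    -- lists require Serial elements

roundTrip :: Serial a => a -> a
roundTrip = read . show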
@hongminhee IME, the tooling with #PureScript is actually worse than the tooling with #GHC #Haskell, which really seemed to be the main problem raised in the article.
I haven't played around with the latest and greatest web backends for GHC, but I did try GHCJS and PureScript at the same time, and felt PureScript was already ahead of the game even then. But what other people tend to call "tooling" (I find the concept nebulous) has never seemed much of a priority to me.
@sean I wish I had learned #Haskell as an undergrad (around '98 just after that report was published). I think it could have been enough of an inspiration to significantly affect the course of my career.
It might also have caused me to be in a better place to help with publishing new Haskell reports. I find it demotivating that Haskell is now effectively implementation-defined (by #GHC).
#GHC can infer where to apply the MapMotive function if the definition of the step is inlined, leading to a more compact definition.
Let's look at the constraints GHC reconstructs, to be able to verify for ourselves why this works. The first problem we had was that the base case passed in had a different type than the result (Vec 0 a ~ Vec n a). With the MapMotive d motive, those two types resolve to:
p @@ 0 ~ (MapMotive d) @@ 0 ~ Vec 0 d
p @@ n ~ (MapMotive d) @@ n ~ Vec n d
This problem does not occur for vmap because we no longer need to unify the base and end case; they are distinct types in the dependent fold. What about the second problem, where the step function changes the type? For that conundrum, consider the constraints:
p @@ l ~ (MapMotive d) @@ l ~ Vec l d
p @@ (l + 1) ~ (MapMotive d) @@ (l + 1) ~ Vec (l + 1) d
SNat l -> a -> p @@ l -> p @@ (l + 1) ~ SNat l -> c -> Vec l d -> Vec (l + 1) d (from unifying the step function for dfoldr and the step function defined in vmap)
We can see that the last constraint is consistent between the definition of dfoldr and the definition of vmap.
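To make the excerpt concrete, here is a self-contained sketch of a dependent fold and a vmap built on it. It is simplified relative to Clash's dfold: the motive is an ordinary type constructor instead of a defunctionalized symbol applied with @@, and the SNat argument of the step is dropped.

{-# LANGUAGE DataKinds, GADTs, KindSignatures, RankNTypes, TypeOperators #-}

import Data.Kind (Type)
import GHC.TypeNats (Nat, type (+))

data Vec (n :: Nat) (a :: Type) where
  Nil  :: Vec 0 a
  (:>) :: a -> Vec n a -> Vec (n + 1) a
infixr 5 :>

-- A dependent right fold: the motive p may depend on the length, so the
-- base case (p 0) and the result (p n) are distinct types.
dfoldr :: forall p a n
        . (forall l. a -> p l -> p (l + 1))  -- step, moving the index along
       -> p 0                                -- base case
       -> Vec n a
       -> p n
dfoldr _ z Nil       = z
dfoldr f z (x :> xs) = f x (dfoldr f z xs)

-- vmap's motive: a Vec of the result element type, indexed by length
-- (a newtype, because the motive must be a function of the index alone).
newtype MapMotive b l = MapMotive { unMapMotive :: Vec l b }

vmap :: (a -> b) -> Vec n a -> Vec n b
vmap f = unMapMotive
       . dfoldr (\x (MapMotive ys) -> MapMotive (f x :> ys)) (MapMotive Nil)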
Since dependently typed folds subsume the standard folds, it would be possible to replace all folds with their dependent counterparts.
While we can likely derive the definition of the standard fold automatically from the dependent fold, a developer implementing an instance of the Foldable typeclass would then need to use dependent types, which may be a rough barrier, especially if the developer has no intention of the fold being used in a dependent context. (See the sketch below.)
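Reusing the sketch above, the subsumption is just instantiation at a constant motive:

-- A constant motive: it ignores the length index entirely.
newtype Always b (l :: Nat) = Always { getAlways :: b }

-- The standard right fold is the dependent fold at that constant motive.
vfoldr :: (a -> b -> b) -> b -> Vec n a -> b
vfoldr f z = getAlways . dfoldr (\x (Always acc) -> Always (f x acc)) (Always z)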
It seems to work when I add
{-# OPTIONS_GHC -fplugin GHC.TypeLits.Normalise -fplugin GHC.TypeLits.KnownNat.Solver #-}
{-# OPTIONS_GHC -fno-warn-incomplete-patterns -fno-warn-redundant-constraints #-}
and change the function to
toNetInp :: forall m g st stRep layers shapes net iShape iShapeK nr len x .
  (Network layers shapes ~ net, iShape ~ S iShapeK
  , len ~ Size iShapeK, HeadShape iShapeK nr ~ (Head shapes)
  , KnownNat (nr+1), KnownNat nr, (nr + 1) ~ (1 + nr)
  , S x ~ iShape) =>
  QLearner m g st stRep iShape nr net -> S x -> S (Head shapes)
toNetInp ql (S1D inpV) =
  S2D $ SV.dfold (Proxy :: Proxy (Append len)) stepDim1 base (fmap fromS lasts)
  where
    -- base case of the dependent fold: the current input as a one-column matrix
    base :: Append len @@ 0
    base = SA.col inpV
    fromS :: S ('D1 len) -> SA.L len 1
    fromS (S1D x) = SA.col x
    -- previous (normalised) inputs, appended one column at a time
    lasts = fmap (normInput ql) (ql ^. lastInputs)
    -- fold step: append one more column to the matrix built so far
    stepDim1 :: SV.SNat l -> SA.L len 1 -> Append len @@ l -> Append len @@ (l+1)
    stepDim1 SV.SNat y x = x SA.||| y
Note the signature of stepDim1: I removed the forall quantification completely. However, this only works with the GHC plugins enabled.
https://hackage.haskell.org/package/ghc-typelits-extra
https://hackage.haskell.org/package/ghc-typelits-natnormalise
https://hackage.haskell.org/package/ghc-typelits-knownnat
https://www.reddit.com/r/haskell/comments/7mhafv/dfold_clash_dependent_types_and_knownnat_l/
#haskell #ghc #clash
What's the #GHC #haskell equivalent of a dependent pair as a record field?
In #Idris, I'd define `TypeFn : Int -> Type` and then have the field be of type `(Int ** TypeFn)`. Yes, I need access to both projections of the dependent pair.
ISTR having to use singletons.
I do need to cover the case where the body of `TypeFn` has a wildcard, if that complicates things.
I'd like the system to be closed, and not have dangling (type class) constraints at the interface.
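For reference, the usual #GHC encoding is an existential packing a KnownNat witness together with the dependently typed field; here is a sketch, assuming the index lives at kind Nat rather than Int (all names hypothetical):

{-# LANGUAGE DataKinds, GADTs, RankNTypes, TypeFamilies #-}

import Data.Kind (Type)
import Data.Proxy (Proxy (..))
import GHC.TypeNats (KnownNat, Nat, natVal)
import Numeric.Natural (Natural)

-- The Idris-side TypeFn : Int -> Type, with a wildcard final equation.
type family TypeFn (n :: Nat) :: Type where
  TypeFn 0 = Bool
  TypeFn n = Integer    -- the wildcard case

-- (Int ** TypeFn): n is hidden, and the KnownNat witness is packed
-- inside, so no constraint dangles at the interface.
data SomeTypeFn where
  MkSomeTypeFn :: KnownNat n => Proxy n -> TypeFn n -> SomeTypeFn

-- First projection: recover the index from the witness.
projIndex :: SomeTypeFn -> Natural
projIndex (MkSomeTypeFn p _) = natVal p

-- Second projection: only accessible by matching, which brings n back
-- into scope for the continuation.
withValue :: SomeTypeFn -> (forall n. KnownNat n => Proxy n -> TypeFn n -> r) -> r
withValue (MkSomeTypeFn p x) k = k p x

The wildcard equation is indeed the awkward part: while n stays abstract, TypeFn n won't reduce, so you need evidence such as sameNat to pin n to a literal before the wildcard can fire; that is where the singletons machinery you remember tends to come in.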
The #Haskell rules allow the use of this tool to analyse the coverage of haskell_library targets by haskell_test rules.
Bazel supports a special persistent worker mode: instead of calling the compiler from scratch to build every target separately, it spawns a resident process for this purpose and sends all compilation requests to it in client-server fashion. This worker strategy may improve compilation times. We implemented such a worker for GHC using the #GHC API.
To activate the persistent worker mode in rules_haskell, the user adds a couple of lines to the WORKSPACE file to load the worker's dependencies:
load("//tools:repositories.bzl", "rules_haskell_worker_dependencies")
rules_haskell_worker_dependencies()
module-level build parallelism (-j) in #GHC vs package-level build parallelism (-j) in #Cabal
"passing -j4 —ghc-option=-j4 to cabal can lead to 16 modules being compiled at the same time"
🔗
https://discourse.haskell.org/t/ghcs-j-n-flag-useful-enough-to-be-a-default/6333/3?u=danidiaz
#Haskell
The STG machine is an essential part of #GHC, the world's leading Haskell compiler. It defines how the Haskell evaluation model should be efficiently implemented on standard hardware. Despite this key role, it is generally poorly understood amongst GHC users. This document aims to provide an overview of the STG machine in its modern eval/apply-based, pointer-tagged incarnation, via a series of simple examples showing how Haskell source code is compiled.
https://stackoverflow.com/questions/11921683/understanding-stg
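To see it for yourself, GHC can dump the STG it generates; a tiny example (flag spelling varies by version: -ddump-stg on older GHCs, -ddump-stg-final on newer ones):

-- Example.hs: compile with
--   ghc -O -ddump-stg-final Example.hs
-- and look for the let-bound closures and eval/apply call shapes
-- in the dumped STG.
module Example where

addAll :: [Int] -> Int
addAll = foldr (+) 0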