I believe that many open source ecosystems have projects like Amazonka: projects which are large, important to their ecosystem, and stuck. These are my notes about how to unstick such a project, using Amazonka as a case study. It’s a fair amount of work, but a surprising amount of help can come out of the woodwork once someone makes the first move. That person could be you, and if it’s not you, then who’s it gonna be?

I started in almost the worst possible way, by barging onto the Hackage Trustees’ issue tracker and asking to do a non-maintainer upload of the whole `amazonka-*` package family. While this was a pretty rude thing to do, it at least got me and Brendan talking.

After that, I instead tried doing the actual work: looking for issues to close or PR, and classifying open PRs as “needs fixes”, “should merge”, or “should close”. When the project still has a somewhat responsive maintainer, helping to tame the bugtracker is a great way to move from being an occasional PR author and instead become an actual member of the project team. The biggest difficulty with resurrecting a large stuck project like Amazonka is working out what actually needs to be done: issues can become irrelevant with age, some things should be deferred until after the big cleanup, and the whole thing becomes a big unappealing tangle which the maintainer never quite gets to. Fixing this means making the maintainer’s job as easy as possible; my approach was to write more rather than less, and bring enough detail together that Brendan should be able to say “yep, closed” or “yep, merged”.

My experience with Amazonka and other Haskell libraries is that maintainers tend to care deeply about their code, feel very responsible for it, but are usually time-poor. Nearly every time I’ve shown up willing to do the work, people have bent over backwards to accommodate it. The best thing you can do as a contributor, then, is make the maintainer’s job as easy as possible by putting effort into commit messages, `CHANGELOG.md` entries, PR descriptions, and so on. A time-poor maintainer will have a much easier time approving things if you provide all the necessary context.

But sometimes maintainers just aren’t able to get the work done, for any number of reasons. Burnout is a real problem, they could be working in other languages, in remote locations, or dealing with a major life event. In those instances you will need to take on more responsibility. My recommendation: take the minimum additional responsibility you need to get the job done, because that ruffles the fewest feathers. A handover or co-maintainership is better than a fork with maintainer blessing, which is much better than a hostile fork. I started opening PRs and posting recommendations to issues in early 2021. In September 2021 there was a flare-up on the issue tracker of the “just fork it, it’s never going to get done” type, so I suggested that Brendan lay out a road map and appoint additional maintainers. He offered me collaborator access, I accepted, and then the real work began.

The issue tracker is the map to the next release, and a map is no good if it doesn’t match the territory. After being made collaborator, I tagged every open issue and pull request (PR) with a new `needs triage` label, and read/triaged/split/closed all of them. This sounds intimidating, and it is, but it’s quite doable. Having the label made it easy to get a list of all untriaged items, and I would go through 5–10 issues over breakfast each morning instead of reading junk online. This proved to be extremely useful, and I recommend it to anyone picking up a dormant project. It gave me a handle on how the library worked, what was on Brendan’s road-map, the pain points affecting real users, and so on.

For this reason, I consider stalebots harmful and recommend against summarily closing old issues until you’ve thought about them properly and understood exactly what’s been reported. Each comment, issue, and PR against a dormant project exists because somebody cared enough to write it up, and that’s worth taking seriously.

After reading and triaging the issues and PRs, I had some sense of what things people needed, was able to cluster them into rough groupings, make guesses at what looked easier or harder, and start working through them in batches. This makes learning a large project much less intimidating, because you can go and learn how (say) service overrides work, clear off a bunch of those issues, and then move onto something else like request signing. A full mental model of the project then develops over time. (In hindsight, it may be possible to acquire this mental model more rapidly by using something like reflexion models.)

Cleaning up the issue tracker had a surprising side-effect: a few people noticed the increased activity and came out of the woodwork to submit features they’d developed for their own use. A few major features came via PR, such as support for AWS ~~SSO~~ Identity Center and the `sts:AssumeRoleFromWebIdentity` API call (used to assume IAM roles via OpenID Connect, as well as from other identity providers).

These PRs often worked well, but sometimes lacked understanding of Amazonka’s architectural direction because that context was never written down. In these cases, I had to politely request a total rewrite, which is feedback that must be delivered carefully. I therefore put extra effort into these reviews to spell out the library’s architecture and make the contributors’ job as easy as possible. In almost every instance, the authors happily rewrote and resubmitted their PRs, even though the rewrites were a decent amount of work. I’m very grateful for their contributions, and for their flexibility.

To confidently make a final release, we needed more people using it. Many people were already using relatively recent versions of Amazonka from git, but we wanted people to move from their private forks to our version. The first release candidate was announced at the end of November 2021, to give people (especially industrial users) a chance to test it with real workloads.

As with maintainers, I think it’s really important to be respectful of your users’ time, and we needed to make it easy for them to try the release candidate. We therefore provided instructions for how to import Amazonka from git, for both Cabal and Stack. (For similar reasons, we also made sure to provide a migration guide from the 1.6.1 to the 2.0 final release.)
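For illustration only (this is not the exact text we published, and the tag is a placeholder rather than a real release-candidate commit), pointing Cabal at a git version of a package looks roughly like this `cabal.project` stanza:

```
-- Hypothetical cabal.project fragment for testing a git release candidate.
-- Substitute the actual release-candidate commit or tag for the placeholder.
packages: .

source-repository-package
  type: git
  location: https://github.com/brendanhay/amazonka.git
  tag: <release-candidate-commit>
  subdir: lib/amazonka
```

Stack users get the analogous effect with an `extra-deps` git entry in `stack.yaml`.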

This had the desired effect — it brought a lot of reports out of the woodwork. Most said “yes, this is working great for us”, but it also caused a welcome flurry of bug reports. The proposed four-week stabilisation period turned out to be wildly optimistic, and it wasn’t until July 2023 that we were able to announce a second release candidate.

I would’ve preferred a smaller gap between RC1 and RC2, but it turned out that there was a lot more work required, and some of it required long stretches of focused work. One example: to bring `amazonka`’s authentication support in line with official AWS SDKs, we needed to add support for unsigned requests to AWS and support several new authentication methods. Doing this properly and in an extensible way required a thorough rework of the authentication subsystem.

This release would never have happened if not for the support and contributions of a great many people. Here is a partial list:

- Brendan Hay, for Amazonka itself and its releases, handling much of the build system work, shipping a much more ergonomic baseline interface, many code reviews, and for trusting me with the commit bit.
- Alex Mason, for many long conversations about API design, developer ergonomics, and “I intend to change `Amazonka.Foo`. Will it break `amazonka-s3-streaming`?”
- Bellroy, my employer, for giving me several weeks of work time to work on the bigger and more complicated changes. If not for their sponsorship, Amazonka 2.0 probably would not have arrived before 2024.
- The “regulars” on the Amazonka issue tracker, including @pbrisbin, @K0te, @mbj, @ysangkok, and @Fuuzetsu.
- Anyone else who raised PRs or issues once the project started to recover momentum.

I never expected to be the one to do this: I got into cloud relatively recently, and Amazonka initially looked too intimidating to tackle. But work started leaning into more AWS-specific offerings, and the need for a better SDK became more pressing. I also completed my AWS Certified Solutions Architect — Associate certificate, and became the one on the team with strong AWS and Haskell knowledge. On top of that, some close friends got in my ear, saying things like, “you care a lot about Haskell and about cloud. Your community needs this, and you have the skills to do it. If it’s not you, then who’s it gonna be?”

And that’s my challenge to you. Find a stuck project that’s important to your part of the ecosystem, and see if you can unstick it. Because if it’s not you, then who’s it gonna be?

As Half-Life modding matured, some really interesting inventions appeared. MetaMod was a C++ framework that interposed itself between the server binary and the actual mod DLL, allowing you to inject custom behaviour into an existing mod. I didn’t understand enough C++ to write MetaMod plugins, but that didn’t matter: AMX Mod and later AMX Mod X let you write custom plugins using a simpler C-style language called Pawn (known back then as “Small”). This enabled an explosion of ways for operators to tweak their game servers: quality-of-life improvements for players, reserved player slots for members, and delightfully bonkers gameplay changes. I remember having my mind blown the first time I stumbled upon a game of CS with a class-based perks system, inspired by Warcraft 3, and that was just one instance of the creativity that came from the AMX(X) modding scenes.

And with the Half-Life-specific background covered, we are now ready to talk about NS: Combat and my gloriously dumb contribution to the AMXX world.

The original release of NS was hard to enjoy at low player counts. It was balanced for 6v6, so confining one marine to the command chair hurt the marine team a lot. This was also before the era of server-side match-making, so if nobody was around you’d join your local (often ISP-provided) game server and hang out, hoping enough people would come online to get a good game going.

To address these problems, the NS team added a simpler alternative mode called “combat” as part of the mod’s 2.0 release. Combat maps were much smaller and removed the resource-gathering and RTS elements in favour of a much simpler goal: the marines had to destroy the alien hive, and the aliens had to destroy the (unoccupied) command chair. With the resource system removed, players instead earned XP and levels for kills and assists, and could spend those levels on upgrades, advanced morphs (aliens), or weapons and equipment (marines).

Combat was perhaps too successful: it was designed as a lightweight substitute for the real game, for when you didn’t have a lot of players. But it quickly overtook classic NS in popularity and stayed that way for the rest of the mod’s lifespan. Of course, AMXX modders extended the combat mode in all kinds of broken ways; the main one raised the level cap beyond 10 and added additional upgrades to spend those levels on. It was colloquially known as “xmenu”, because it added a `/xmenu` player command, opening a menu of new upgrades to spend those additional levels on.

But I liked NS for the buildings! To me, that was what made NS special. Since I could code well enough to write AMXX plugins, I added them to the combat game mode. The Combat Buildings plugin gave players a new `/buildmenu` command that let them spend their levels to place structures.

The release was surprisingly controversial. Some people rather liked it, but the people who hated it *really hated* it. One of the ModNS forum moderators (in a long-deleted post, sadly) called it “the most ridiculous concept I have ever seen on these fora”. And here is the maddest my code has ever made anyone:

> But as absolutely terrible as /xmenu is, /buildmenu is the god damned devil. Buildmenu is an abomination upon the lord that is causing the universe to unravel and all heretics who follow the terribleness that is buildmenu shall perish in hell. I’d like to give a big thanks to whoever created /buildmenu for making THE WORST COMBAT PLUGIN EVER.

You’re welcome.

I was very taken aback when I first saw this comment, but these days I cherish it. It reminds me of one of the first times my code had a big impact on a community. Enough people liked it that I made the final versions of Combat Buildings integrate with other plugins, allowing servers where the aliens could build on walls and ceilings, or allowing players to build in the custom marine vs. marine and alien vs. alien game modes. I loved the feeling of making a game play by my rules, of building on others’ work, of being part of a community and swapping knowledge, and of making cool (dumb) stuff happen just because I willed it. Those feelings don’t ever get old, and are a big reason why I still love hacking on things.

- The fact and most of the phrasing comes from Mac Lane’s Categories for the Working Mathematician, but
- “What’s the problem?” is a cheeky addition from a funny 2009 blog post: A Brief, Incomplete, and Mostly Wrong History of Programming Languages

The meme words have become an annoying blot on the fringes of the Haskell universe. Learning resources don’t mention them, the core Haskell community doesn’t like them because they add little and spook newcomers, and understanding them is completely unnecessary if you just want to write Haskell code. But they are interesting, and they pop up in enough cross-language programming communities that there’s still a lot of curiosity about them. I wrote an explanation on Reddit recently, it became my highest-voted comment overnight, and someone said that it deserved its own blog post. This is that post.

**This is not a monad tutorial.** You do not need to
read this, especially if you’re new to Haskell. Do something more useful
with your time. But if you will not be satisfied until you understand
the meme words, let’s proceed. I’ll assume knowledge of categories,
functors, and natural transformations.

“A monad is a monoid in the category of endofunctors” is not specific enough. Let’s fill in the details and specialise it to Haskell monads, so that we build towards a familiar typeclass:

“Haskell monads are **monoid objects** in the **monoidal category** of endofunctors on **Hask**, with **functor composition as the tensor**.”

Let’s first practice looking for monoid objects in a monoidal
category that’s very familiar to Haskell programmers: **Hask**, the “category” where
the objects are Haskell types and the morphisms are functions between
the types. (I use scare quotes because we quietly ignore ⊥).

We will first explore the following simpler claim about mon**oids**, and come back to mon**ads**:

“Haskell **monoids** are monoid objects in the monoidal category **Hask**, with `(,)` as the tensor.”

We will need the categorical definition of bifunctors to define monoidal categories, and we’ll need product categories to define bifunctors:

**Definition 1:** The product of two categories is
called a **product category**. If **C** and **D** are categories, their
product is written **C** × **D** and
is a category where:

- The objects of **C** × **D** are pairs (*c*, *d*), where *c* is an object from **C** and *d* is an object from **D**; and
- The morphisms of **C** × **D** are pairs (*f*, *g*), where *f* is a morphism from **C** and *g* is a morphism from **D**.

**Definition 2:** A **bifunctor** is a
functor whose domain is a product category.

In Haskell, we tend to only think about bifunctors **Hask** × **Hask** → **Hask**, as represented by `class Bifunctor`:

```
class (forall a. Functor (p a)) => Bifunctor p where
  bimap :: (a -> b) -> (c -> d) -> p a c -> p b d
  -- other methods omitted

-- Uncurrying bimap and adding parens for clarity:
bimap' :: Bifunctor p => (a -> b, c -> d) -> (p a c -> p b d)
bimap' (f, g) p = bimap f g p
```

`bimap` and `bimap'` are equivalent, and you can see how `bimap'` maps a morphism from **Hask** × **Hask** to a morphism in **Hask**. We use `bimap` because it is more ergonomic to program with.

**Aside 3:** `Iceland_Jack` has an unofficial plan to unify the various functor typeclasses using a general categorical interface, which has the potential to subsume a lot of ad-hoc typeclasses. If done in a backwards-compatible way, it would be extremely cool.

**Exercise 4:** Show that `Either` is a bifunctor **Hask** × **Hask** → **Hask**, by giving it a `Bifunctor` instance.

```
{-# LANGUAGE InstanceSigs #-}

instance Functor (Either x) where
  fmap :: (a -> b) -> Either x a -> Either x b
  fmap _ (Left x) = Left x
  fmap f (Right a) = Right (f a)

instance Bifunctor Either where
  bimap :: (a -> b) -> (c -> d) -> Either a c -> Either b d
  bimap f _ (Left a) = Left (f a)
  bimap _ g (Right b) = Right (g b)
```

**Exercise 5:** Show that `(,)` is a bifunctor **Hask** × **Hask** → **Hask**, by giving it a `Bifunctor` instance.

```
{-# LANGUAGE InstanceSigs #-}

instance Functor ((,) x) where
  fmap :: (a -> b) -> (x, a) -> (x, b)
  fmap f (x, a) = (x, f a)

instance Bifunctor (,) where
  bimap :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
  bimap f g (a, c) = (f a, g c)
```

The definition of monoidal category also relies on the definition of natural isomorphism, so let’s define and discuss them.

**Definition 6:** If *F* and *G* are functors from **C** to **D**, a **natural
isomorphism** is a natural transformation *η* : *F* ⇒ *G* where
*η* is an isomorphism for every
object *c* in **C**.

If you are used to the Haskell definition of “natural transformation”, you might be wondering what this “for every object” business is about:

```
{-# LANGUAGE RankNTypes, TypeOperators #-}
type f ~> g = forall a. f a -> g a
```

In Haskell, we use parametrically-polymorphic functions as natural transformations between endofunctors on **Hask**. This is a stronger condition than the categorical definition requires, where a natural transformation is a *collection* of morphisms in the target category, *indexed by objects in the source category*. The equivalent in Haskell would be like being able to choose one function for `f Int -> g Int` and another for `f Bool -> g Bool` (subject to conditions).

I have been told that internalising the Haskell version of natural transformations may leave you unable to prove certain results in category theory, but I don’t know which ones. I know that it’s because you may find yourself trying to construct a parametrically-polymorphic function instead of just a natural transformation.

For today’s purposes, we can say that `nt :: f a -> g a` is a natural isomorphism if it has an inverse `unnt :: g a -> f a`.

**Counter-Example 7:** `listToMaybe :: [a] -> Maybe a` is a natural transformation but not a natural isomorphism, because it is not invertible.
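To make the counter-example concrete (a quick spot-check of my own, not part of the original text), here is one instance of the naturality square for `listToMaybe`, plus the reason no inverse can exist:

```haskell
import Data.Maybe (listToMaybe)

-- Naturality says: fmap f . listToMaybe == listToMaybe . fmap f.
-- One spot-check (parametricity guarantees the general case):
lhs, rhs :: Maybe Int
lhs = fmap (+ 1) (listToMaybe [10, 20, 30])
rhs = listToMaybe (fmap (+ 1) [10, 20, 30])

-- No inverse exists: listToMaybe discards everything after the head,
-- so [0] and [0, 99] map to the same Just 0 and cannot be told apart.
collision :: Bool
collision = listToMaybe [0 :: Int] == listToMaybe [0, 99]
```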

We are now ready to define monoidal categories.

**Definition 8:** A **monoidal category** is a triple (**C**, ⊗, *I*) where:

- **C** is a category;
- ⊗ is a bifunctor **C** × **C** → **C** called the **tensor product**;
- *I* is an object of **C** called the **identity object**; and
- There are natural isomorphisms (satisfying coherence conditions) showing that ⊗ is associative and that *I* is its left and right identity:
  - *α* : − ⊗ (−⊗−) ⇒ (−⊗−) ⊗ −, standing for *α*ssociator, with components *α*_{A, B, C} : *A* ⊗ (*B* ⊗ *C*) ≅ (*A* ⊗ *B*) ⊗ *C*; (− ⊗ (−⊗−) means the functor **C** × (**C** × **C**) → **C**.)
  - *λ* : 1_{C} ⇒ (*I* ⊗ −), standing for *λ*eft unitor, with components *λ*_{A} : *A* ≅ *I* ⊗ *A*; (1_{C} is the identity functor on **C**.)
  - *ρ* : 1_{C} ⇒ (− ⊗ *I*), standing for *ρ*ight unitor, with components *ρ*_{A} : *A* ≅ *A* ⊗ *I*.

The coherence conditions have nice diagrams at the Wikipedia definition.

We can now say that (**Hask**, `(,)`, `()`) is a monoidal category:

- **Hask** is a “category”;
- `(,)` has a `Bifunctor` instance, so it’s a bifunctor **Hask** × **Hask** → **Hask**;
- `()` is a type, so it is an object in **Hask**; and
- We can write parametric functions to demonstrate the natural isomorphisms (the coherence conditions come for free, from parametricity):

```
assoc :: (a, (b, c)) -> ((a, b), c)
unassoc :: ((a, b), c) -> (a, (b, c))
left :: a -> ((), a)
unleft :: ((), a) -> a
right :: a -> (a, ())
unright :: (a, ()) -> a
```

**Exercise 9:** Implement these natural
isomorphisms.

```
assoc :: (a, (b, c)) -> ((a, b), c)
assoc (a, (b, c)) = ((a, b), c)

unassoc :: ((a, b), c) -> (a, (b, c))
unassoc ((a, b), c) = (a, (b, c))

left :: a -> ((), a)
left a = ((), a)

unleft :: ((), a) -> a
unleft ((), a) = a

right :: a -> (a, ())
right a = (a, ())

unright :: (a, ()) -> a
unright (a, ()) = a
```

**Exercise 10:** Show that (**Hask**, `Either`, `Void`) is a monoidal category.

- **Hask** is a “category”;
- `Either` has a `Bifunctor` instance, so it’s a bifunctor **Hask** × **Hask** → **Hask**;
- `Void` is a type, so it is an object in **Hask**; and
- We can write parametric functions to demonstrate the natural isomorphisms (the coherence conditions come for free, from parametricity):

```
import Data.Void

assoc :: Either a (Either b c) -> Either (Either a b) c
assoc (Left a) = Left (Left a)
assoc (Right (Left b)) = Left (Right b)
assoc (Right (Right c)) = Right c

unassoc :: Either (Either a b) c -> Either a (Either b c)
unassoc (Left (Left a)) = Left a
unassoc (Left (Right b)) = Right (Left b)
unassoc (Right c) = Right (Right c)

left :: a -> Either Void a
left = Right -- It puts the identity (Void) on the left

unleft :: Either Void a -> a
unleft (Left v) = absurd v
unleft (Right a) = a

right :: a -> Either a Void
right = Left

unright :: Either a Void -> a
unright (Left a) = a
unright (Right v) = absurd v
```

**Remark:** The `assoc` package defines `class Bifunctor p => Assoc p`, with `assoc`/`unassoc` methods.

Now that we have some monoidal categories, we can go looking for monoid objects. Let’s define them:

**Definition 11:** A **monoid object** in a monoidal category (**C**, ⊗, *I*) is a triple (*M*, *μ*, *η*) where:

- *M* is an object from **C**;
- *μ* : *M* ⊗ *M* → *M* is a morphism in **C**;
- *η* : *I* → *M* is a morphism in **C**; and
- Coherence conditions hold, as per the diagrams in the Wikipedia definition.

What are the monoid objects in the monoidal category (**Hask**, `(,)`, `()`)? To associate morphisms (functions) with an object (type), we use a typeclass; the type variable `m` identifies *M*, and the rest is substitution:

```
class MonoidObject m where
  mu :: (m, m) -> m
  eta :: () -> m
```

If you squint, you might be able to see why this is `class Monoid` in disguise: `mu` is uncurried `(<>)`, and `eta` is `mempty` (laziness makes `m` equivalent to the function `() -> m`).
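To see the disguise lift on a familiar example (my own instance, assuming `FlexibleInstances`; the class is restated so the snippet stands alone), here is the list monoid written against `MonoidObject`:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- The class from the text, restated so this compiles standalone:
class MonoidObject m where
  mu  :: (m, m) -> m  -- uncurried (<>)
  eta :: () -> m      -- mempty

-- Lists form a monoid object: mu is concatenation, eta is the empty list.
instance MonoidObject [a] where
  mu (xs, ys) = xs ++ ys
  eta ()      = []
```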

**Exercise 12:** What are the monoid objects in the monoidal category (**Hask**, `Either`, `Void`)?

**EDIT: 2023-02-10** The previous solution here was wrong, and has been replaced. Thanks to James Cranch for the correction.

In any cocartesian monoidal category (i.e., a category using the
coproduct as the tensor), every object is a monoid object in a boring
way. To see this in **Hask**, write out the class
and instance definitions:

```
class MonoidObjectE m where
  mu :: Either m m -> m
  eta :: Void -> m

instance MonoidObjectE m where
  mu = either id id
  eta = absurd
```


Now we will do it all again, starting with the category of
endofunctors on **Hask**.
This category is sometimes written **Hask**^{Hask},
because of the connection between functions *a* → *b* and exponentials
*b*^{a}. Since
we don’t have to set up all the definitions, we can move faster. We
describe a category by identifying its objects and its morphisms, so for
**Hask**^{Hask}:

- Objects are functors from **Hask** to **Hask**; and
- Morphisms are natural transformations between these functors.

To turn **Hask**^{Hask}
into a monoidal category, we need to consider bifunctors from **Hask**^{Hask} × **Hask**^{Hask}
to **Hask**^{Hask},
and to do that, we need to consider what a functor from **Hask**^{Hask}
to **Hask**^{Hask}
would look like.

A functor sends objects to objects and morphisms to morphisms, and for the sake of analogy let’s look back on functors from **Hask** to **Hask**. As Haskell programmers, we represent them with type constructors of kind `Type -> Type` to fit our chosen domain and codomain, and we use a typeclass to map morphisms (functions):

```
-- The one from `base`, plus a kind annotation:
class Functor (f :: Type -> Type) where
  -- Parens added for clarity
  fmap :: (a -> b) -> (f a -> f b)
```

So for endofunctors on **Hask**^{Hask}, we need a type constructor that turns an argument of kind `(Type -> Type)` into `(Type -> Type)`. This means we need an alternate version of `class Functor`:

```
class Functor2 (t :: (Type -> Type) -> (Type -> Type)) where
  fmap2 :: (forall x. f x -> g x) -> (t f a -> t g a)
```

**Remark:** This is very close to `class MFunctor` from package `mmorph`, but `MFunctor` identifies functors on the category of Haskell monads, which is a stricter condition.

Similarly, we will need to identify bifunctors from **Hask**^{Hask} × **Hask**^{Hask} to **Hask**^{Hask}, with an alternate version of `class Bifunctor`:

```
class (forall f. Functor2 (t f)) =>
    Bifunctor2 (t :: (Type -> Type) -> (Type -> Type) -> (Type -> Type)) where
  bimap2 ::
    (forall x. p x -> q x) ->
    (forall x. r x -> s x) ->
    (t p r a -> t q s a)
```

So we need to find monoid objects in a monoidal category of endofunctors. That means we need to identify a bifunctor and identity object for our monoidal category. We will use functor composition as our tensor and the identity functor as our identity object:

```
-- From Data.Functor.Compose in base
newtype Compose f g a = Compose { getCompose :: f (g a) }
-- From Data.Functor.Identity in base
newtype Identity a = Identity { runIdentity :: a }
```

**Exercise 13:** Show that the composition of two functors is a functor, by writing `instance (Functor f, Functor g) => Functor (Compose f g)`:

```
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f = Compose . (fmap . fmap) f . getCompose
```

**Exercise 14:** Show that `Compose` is a bifunctor from **Hask**^{Hask} to itself by writing `Functor2` and `Bifunctor2` instances.

```
instance Functor x => Functor2 (Compose x) where
  fmap2 fg = Compose . fmap fg . getCompose

instance (forall x. Functor2 (Compose x)) => Bifunctor2 Compose where
  bimap2 pq rs = Compose . pq . getCompose . fmap2 rs
```

**Exercise 15:** Write out and implement the natural isomorphisms, showing that (**Hask**^{Hask}, `Compose`, `Identity`) is a monoidal category.

```
assoc :: Functor f => Compose f (Compose g h) a -> Compose (Compose f g) h a
assoc = Compose . Compose . fmap getCompose . getCompose

unassoc :: Functor f => Compose (Compose f g) h a -> Compose f (Compose g h) a
unassoc = Compose . fmap Compose . getCompose . getCompose

left :: f a -> Compose Identity f a
left = Compose . Identity

unleft :: Compose Identity f a -> f a
unleft = runIdentity . getCompose

right :: Functor f => f a -> Compose f Identity a
right = Compose . fmap Identity

unright :: Functor f => Compose f Identity a -> f a
unright = fmap runIdentity . getCompose
```

We are now ready to answer the question posed by the meme words: what are the monoid objects in the monoidal category (**Hask**^{Hask}, `Compose`, `Identity`)?

The monoid objects are objects in **Hask**^{Hask}, so they are functors; we will write our typeclass with a `Functor` superclass constraint. `mu` is a natural transformation from `Compose m m` to `m`, and `eta` is a natural transformation from `Identity` to `m`:

```
class Functor m => MonoidInTheCategoryOfEndofunctors m where
  mu :: Compose m m a -> m a
  eta :: Identity a -> m a
```

If we unwrap the newtypes, we see that `eta` is effectively `eta' :: MonoidInTheCategoryOfEndofunctors m => a -> m a`, which is `pure` from `class Applicative` as well as the old `return` from `class Monad`. Similarly, `mu` is effectively `mu' :: MonoidInTheCategoryOfEndofunctors m => m (m a) -> m a`, better known as `join :: Monad m => m (m a) -> m a`.
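As a concrete check (my own example, not one of the text's exercises), `Maybe` inhabits this class with `mu` acting as `join` and `eta` as `pure`; the class is restated so the snippet stands alone:

```haskell
import Data.Functor.Compose (Compose (..))
import Data.Functor.Identity (Identity (..))

-- The class from the text, restated so this compiles standalone:
class Functor m => MonoidInTheCategoryOfEndofunctors m where
  mu  :: Compose m m a -> m a
  eta :: Identity a -> m a

-- Maybe as a monoid object: mu collapses nesting (join), eta injects (pure).
instance MonoidInTheCategoryOfEndofunctors Maybe where
  mu  = maybe Nothing id . getCompose
  eta = Just . runIdentity
```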

And there we have it: Haskell’s monads are the monoid objects in the monoidal category of endofunctors on **Hask**, with `Compose` as the tensor. Haskell uses `(>>=)` in `class Monad` for historical reasons, and because having `join` in `class Monad` breaks `-XGeneralizedNewtypeDeriving`.

**Exercise 16:** Show that `join` and `(>>=)` are equivalent, by implementing them in terms of each other.

```
join :: Monad m => m (m a) -> m a
join = (>>= id)

(>>=) :: Monad m => m a -> (a -> m b) -> m b
m >>= f = join $ f <$> m
```

Now that we’ve looked at the meme words properly, we see that the selection of tensor is extremely important. What happens if we choose a different one?

**Exercise 17:** Consider the tensor `data Product f g a = Product (f a) (g a)` (essentially `Data.Functor.Product` from `base`, which names the constructor `Pair`). What is the identity object *I* that makes (**Hask**^{Hask}, `Product`, *I*) a monoidal category? Write out the types and implement the natural isomorphisms `assoc`, `left`, and `right`, and describe the monoid objects in this category.

The identity object is `Proxy`, defined in `base`:

`data Proxy a = Proxy`

`Proxy` plays a similar role to `()` — we don’t want to add or remove any information when we write out the unitors, and you can think of `Proxy` as a functor containing zero “`a`”s.

```
instance Functor2 (Product x) where
  fmap2 fg (Product x f) = Product x (fg f)

instance (forall x. Functor2 (Product x)) => Bifunctor2 Product where
  bimap2 pq rs (Product p r) = Product (pq p) (rs r)

assoc :: Product f (Product g h) a -> Product (Product f g) h a
assoc (Product f (Product g h)) = Product (Product f g) h

unassoc :: Product (Product f g) h a -> Product f (Product g h) a
unassoc (Product (Product f g) h) = Product f (Product g h)

left :: f a -> Product Proxy f a
left f = Product Proxy f

unleft :: Product Proxy f a -> f a
unleft (Product _ f) = f

right :: f a -> Product f Proxy a
right f = Product f Proxy

unright :: Product f Proxy a -> f a
unright (Product f _) = f
```

As before, the requirements on monoid objects lead us to write a typeclass:

```
class Functor m => MonoidObject (m :: Type -> Type) where
  eta :: Proxy a -> m a
  mu :: Product m m a -> m a
```

The `Proxy` argument to `eta` contains no information, so it’s equivalent to `zero :: m a`. By unpacking `Product m m a` and currying `mu`, we find `(<!>) :: m a -> m a -> m a`. We have rediscovered `class Plus` from `semigroupoids`. (It is not `class Alternative` from `base`, because that has an `Applicative` superclass.)
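For instance (a sketch of my own, with the text's `Product` tensor and class restated so the snippet stands alone, and `MonoidObjectP` as my hypothetical class name), lists form such a monoid object, with concatenation as the “choice” operation:

```haskell
import Data.Proxy (Proxy (..))

-- The text's tensor, restated:
data Product f g a = Product (f a) (g a)

-- The text's class for this tensor, restated:
class Functor m => MonoidObjectP m where
  eta :: Proxy a -> m a
  mu  :: Product m m a -> m a

-- Lists: eta is the empty list ("zero"), mu is concatenation ("(<!>)").
instance MonoidObjectP [] where
  eta _              = []
  mu (Product xs ys) = xs ++ ys
```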
**Exercise 18:** Repeat Exercise 17 for covariant Day convolution, given by the tensor `data Day f g a = forall b c. Day (f b) (g c) (b -> c -> a)` from `Data.Functor.Day` in package `kan-extensions`.

```
instance Functor2 (Day x) where
  fmap2 fg (Day x f bca) = Day x (fg f) bca

instance (forall x. Functor2 (Day x)) => Bifunctor2 Day where
  bimap2 pq rs (Day p r bca) = Day (pq p) (rs r) bca

assoc :: Day f (Day g h) a -> Day (Day f g) h a
assoc (Day f (Day g h dec) bca) = Day (Day f g (,)) h $
  \(b, d) e -> bca b (dec d e)

unassoc :: Day (Day f g) h a -> Day f (Day g h) a
unassoc (Day (Day f g bce) h eda) = Day f (Day g h (,)) $
  \b (c, d) -> eda (bce b c) d

left :: f a -> Day Identity f a
left f = Day (Identity ()) f (flip const)

unleft :: Functor f => Day Identity f a -> f a
unleft (Day b f bca) = bca (runIdentity b) <$> f

right :: f a -> Day f Identity a
right f = Day f (Identity ()) const

unright :: Functor f => Day f Identity a -> f a
unright (Day f c bca) = flip bca (runIdentity c) <$> f

class Functor m => MonoidObject (m :: Type -> Type) where
  mu :: Day m m a -> m a
  eta :: Identity a -> m a
```

To turn `Day mb mc f` into an `m a`, we need to apply `f` across `mb` and `mc`:

```
mu (Day mb mc f) = f <$> mb <*> mc
```

`mu` is `liftA2`ing `f`, and applicative functors are monoid objects in the monoidal category (`Day`, `Identity`). `eta` is `pure`, like it was for monads.

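As a sanity check, here is a self-contained sketch of the `Maybe` instance under this Day-based class, restating the tensor so it compiles on its own: `eta` is `pure` and `mu` is `liftA2`.

```haskell
{-# LANGUAGE ExistentialQuantification, KindSignatures #-}

import Data.Functor.Identity (Identity (..))
import Data.Kind (Type)

-- Restating the tensor from the exercise:
data Day f g a = forall b c. Day (f b) (g c) (b -> c -> a)

class Functor m => MonoidObject (m :: Type -> Type) where
  mu :: Day m m a -> m a
  eta :: Identity a -> m a

-- Maybe is applicative, so it is a monoid object under Day:
instance MonoidObject Maybe where
  eta = Just . runIdentity            -- eta is pure
  mu (Day mb mc f) = f <$> mb <*> mc  -- mu is liftA2
```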
What’s the point of working through all these definitions? Even
though I said “you do not need to read this”, I think there’s still a
payoff. What we have here is a method for generating abstractions: start
with a monoidal category that’s related to **Hask** in some way, turn the
handle, and a typeclass comes out. If the typeclass has interesting
instances, rewrite it into an ergonomic interface.

We can also start reversing arrows and seeing what else falls out. There is a contravariant form of Day convolution, and if you follow that line of thought far enough, you get contravariant forms of `Applicative` and `Alternative`. I once tried abstracting over the covariant and contravariant versions of these classes to make an abstraction unifying parsers and pretty-printers, but did not get far. Ed Kmett used `Divisible` (contravariant `Applicative`) and `Decidable` (contravariant `Alternative`) to build `discrimination`, a library of fast generic sorting/joining functions.

We can also look for the same patterns in different categories. Benjamin ~~Pizza~~ Hodgson has a great article about functors from `(k -> Type)` to `Type`, describing a pattern that appears in the `hkd`, `rank2classes`, `Conkin`, and `barbies` packages.

Sometimes there is no payoff, or the payoff is not immediately obvious. We found no interesting monoid objects in (**Hask**, `Either`, `Void`), and trying to write out a class for comonoids doesn’t look fruitful, because we can trivially write an instance for any type:

```
class Comonoid m where
  comempty :: m -> ()
  comappend :: m -> (m, m)

instance Comonoid a where
  comempty _ = ()
  comappend m = (m, m)
```

But comonoids suddenly become a lot more interesting when you have linear arrows — `class Dupable` is the typeclass for comonoids in linear Haskell.

And all that makes me think the meme words have some use after all,
but not as a way to understand deep secrets of the Haskell universe. I
think instead that they are *one* way to learn *one* tool
in *one* part of the category-theoretic toolbox.

E. Rivas and M. Jaskelioff, “Notions of Computation as Monoids”

]]>Writing recursive functions requires a lot of tacit knowledge in selecting the recursion pattern to use, which variables to recurse over, etc. Recursion was not immediately obvious to industry professionals, either: I remember an errata card that came with TI Extended Basic for the Texas Instruments TI 99/4A which mentioned that later versions of the cartridge removed the ability for subprograms to call themselves, because they thought it was not useful and mostly done by accident.

I want to share a recipe that helped my students write their first recursive functions. There are three steps in this recipe:

- Write out several related examples
- Rewrite the examples in terms of each other
- Introduce variables to generalise across all the examples.

Worked examples and some teaching advice after the jump.

`product :: [Int] -> Int`

Suppose we are asked to write a function `product :: [Int] -> Int` that multiplies a list of numbers together. Begin by writing out several examples of what the function should actually do:

```
product [2, 3, 4, 5] = 2 * 3 * 4 * 5 = 120
product [3, 4, 5] = 3 * 4 * 5 = 60
product [4, 5] = 4 * 5 = 20
product [5] = 5
```

There is a bit of an art to selecting the initial examples, so here are a few tips:

- The shape of the `data` definition heavily influences the shape of the recursion. Because this function must recurse over cons lists, we choose example inputs with similar tails.
- It’s not usually necessary to choose big examples: three or four elements are usually enough.
- Choose distinct values in each part of the data structure, so it’s clear which sub-parts need to align.
- Avoid elements that behave strangely with respect to the function you’re writing. It’s tempting to use the list `[1, 2, 3, 4]`, but the fact that `1 * x == x` means that we could confuse ourselves a bit. I chose the list `[2, 3, 4, 5]` instead.

Next, rewrite the examples in terms of each other:

```
product (2:3:4:5:[]) = 2 * 3 * 4 * 5 = 2 * product (3:4:5:[])
product (3: 4:5:[]) = 3 * 4 * 5 = 3 * product ( 4:5:[])
product (4: 5:[]) = 4 * 5 = 4 * product ( 5:[])
product (5: []) = 5 = 5
```

Notes:

- If lists are involved, desugar them to make the correspondence more obvious.
- Aligning similar elements vertically often helps, but it’s more helpful to align them by their position in the data structure instead of by their value. In this example, I’ve put all the list heads into the same column. This makes it easier to see where to introduce variables.

Finally, generalise over the first and last columns by introducing variables:

```
x ,--xs--. x ,--xs--.
| | | | | |
product (2:3:4:5:[]) = 2 * 3 * 4 * 5 = 2 * product (3:4:5:[])
product (3: 4:5:[]) = 3 * 4 * 5 = 3 * product ( 4:5:[])
product (4: 5:[]) = 4 * 5 = 4 * product ( 5:[])
product (5: []) = 5 = 5 * product []
product (x: xs) = = x * product xs
```

Notes:

- Rewriting the `5:[]` example into `5 * product []` makes it fit the pattern of all of our other examples, which allows us to generalise over all our examples with a single equation.
- Knowing that `5 * product [] = 5` tells us that `product []` must be `1`.

We now have enough information to write out a function definition:

```
product :: [Int] -> Int
product [] = 1
product (x:xs) = x * product xs
```

`treeSum :: Tree Int -> Int`

Suppose we are asked to write `treeSum :: Tree Int -> Int`, given the following definition of binary trees:

`data Tree a = Nil | Node a (Tree a) (Tree a)`

As before, write out an example, and generate equations from its sub-parts:

```
-- For the tree:
--
--         4
--        / \
--       3   x
--      / \
--     /   \
--    1     2
--   / \   / \
--  x   x x   x
--
treeSum (Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil) = 4 + 3 + 1 + 2
treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))              = 3 + 1 + 2
treeSum (Node 1 Nil Nil)                                        = 1
treeSum (Node 2 Nil Nil)                                        = 2
```

Then, line up the examples and rewrite them in terms of each other:

```
treeSum (Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil) = 4 + 3 + 1 + 2
treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))              = 3 + 1 + 2
treeSum (Node 1 Nil Nil)                                        = 1
treeSum (Node 2 Nil Nil)                                        = 2

-- Rewritten:
treeSum (Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil) = 4 + treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))
treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))              = 3 + treeSum (Node 1 Nil Nil) + treeSum (Node 2 Nil Nil)
treeSum (Node 1 Nil Nil)                                        = 1
treeSum (Node 2 Nil Nil)                                        = 2
```

Nothing seems to line up! The problem is that the example isn’t complicated enough to give us a complete picture, so we could try drawing another tree with more nodes and working through that. But the equation for `treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))` contains a big hint, as it’s adding three things together: the node value, the `treeSum` of the left subtree, and the `treeSum` of the right subtree. We can force the other equations for `Node`s into the right shape by adding `+ 0` a few times, and that gives a pretty big hint that `treeSum Nil` should be equal to `0`:

```
treeSum (Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil) = 4 + treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) + 0
treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))              = 3 + treeSum (Node 1 Nil Nil) + treeSum (Node 2 Nil Nil)
treeSum (Node 1 Nil Nil)                                        = 1 + 0 + 0
treeSum (Node 2 Nil Nil)                                        = 2 + 0 + 0
treeSum Nil                                                     = 0

-- Rewritten:
treeSum (Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil) = 4 + treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) + treeSum Nil
treeSum (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil))              = 3 + treeSum (Node 1 Nil Nil) + treeSum (Node 2 Nil Nil)
treeSum (Node 1 Nil Nil)                                        = 1 + treeSum Nil + treeSum Nil
treeSum (Node 2 Nil Nil)                                        = 2 + treeSum Nil + treeSum Nil
treeSum Nil                                                     = 0
```

Complete the process by generalising over all of the examples with variables:

```
treeSum :: Tree Int -> Int
treeSum Nil = 0
treeSum (Node n left right) = n + treeSum left + treeSum right
```
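Putting the pieces together with the `Tree` definition from above, the finished function agrees with our worked example:

```haskell
data Tree a = Nil | Node a (Tree a) (Tree a)

treeSum :: Tree Int -> Int
treeSum Nil = 0
treeSum (Node n left right) = n + treeSum left + treeSum right

-- The tree from the worked example: 4 + 3 + 1 + 2 = 10.
example :: Tree Int
example = Node 4 (Node 3 (Node 1 Nil Nil) (Node 2 Nil Nil)) Nil
```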

`map :: (a -> b) -> [a] -> [b]`

Suppose we are asked to write the classic `map` function over lists. Since the input function and the element type are not known, use placeholders when generating examples:

```
map f [a, b, c] = [f a, f b, f c]
map f [b, c] = [f b, f c]
map f [c] = [f c]
map f [] = []
```

Desugar, align, and rewrite the equations in terms of each other, and finish by introducing variables:

```
map f (a:b:c:[]) = f a : f b : f c : [] = f a : map f (b:c:[])
map f (b: c:[])  = f b : f c : []       = f b : map f ( c:[])
map f (c:  [])   = f c : []             = f c : map f (  [])
       |  |   |                                        |  |  |
       x `-xs-'                                        x `-xs-'

map f (x : xs) = f x : map f xs
map _  []      = []
```

This technique is useful but limited; larger data structures quickly become too unwieldy for it to work. But it seems to really help new Haskell programmers “get” recursion and bootstrap their skills and confidence. While it’s fine to show an example or two for students to crib from (at first), something about asking students to physically handle a pen and write it all out seems to make it sink in a lot better.

]]>`uniplate` operation and optics. This will be old news to advanced `lens` users, but I think it’s worth pointing out. The `uniplate` package’s original `uniplate :: Uniplate a => a -> ([a], [a] -> a)` is an early attempt at a “traversal” optic, properly expressed in `lens` by `plate :: Plated a => Traversal' a a`. The `uniplate` library provides low-boilerplate ways to query and rewrite self-similar data structures; the `uniplate` function from `class Uniplate a` is its fundamental operation. Let’s look at the original definition, from the 2007 paper Uniform Boilerplate and List Processing:

```
class Uniplate a where
  uniplate :: a -> ([a], [a] -> a)

-- An example data type and instance
data Expr = Lit Int | Negate Expr | Add Expr Expr

instance Uniplate Expr where
  uniplate (Lit i)     = ([], \[] -> Lit i)
  uniplate (Negate e)  = ([e], \[e'] -> Negate e')
  uniplate (Add e1 e2) = ([e1, e2], \[e1', e2'] -> Add e1' e2')
```

`uniplate` extracts from a value of type `T` any immediate children of type `T`, and provides a function to reassemble the original structure with new children. From this, we can define operations like `transform`, which applies a function everywhere it can be applied, in a bottom-up way:

```
transform :: Uniplate a => (a -> a) -> a -> a
transform f a = f . rebuild $ map (transform f) as
  where (as, rebuild) = uniplate a
```
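To see `transform`’s bottom-up behaviour concretely, here is a self-contained sketch that restates the class and instance above and applies a hypothetical constant-folding rule; because children are rewritten first, nested additions collapse completely.

```haskell
class Uniplate a where
  uniplate :: a -> ([a], [a] -> a)

data Expr = Lit Int | Negate Expr | Add Expr Expr
  deriving (Show, Eq)

instance Uniplate Expr where
  uniplate (Lit i)     = ([], \[] -> Lit i)
  uniplate (Negate e)  = ([e], \[e'] -> Negate e')
  uniplate (Add e1 e2) = ([e1, e2], \[e1', e2'] -> Add e1' e2')

-- Rewrite children first, then apply f at the current node.
transform :: Uniplate a => (a -> a) -> a -> a
transform f a = f . rebuild $ map (transform f) as
  where (as, rebuild) = uniplate a

-- A hypothetical rewrite rule: fold additions of two literals.
constFold :: Expr -> Expr
constFold (Add (Lit a) (Lit b)) = Lit (a + b)
constFold e                     = e
```

`transform constFold (Add (Lit 1) (Add (Lit 2) (Lit 3)))` collapses to `Lit 6`: the inner `Add` folds to `Lit 5` before the outer one is examined.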

Look closely at the type of the `uniplate` operation: it extracts `[a]` from a structure, and provides a function to assign a new `[a]` into a structure. This is exactly what a **get/set lens** does:
```
-- As a record:
data GetSetLens s a = GetSetLens
  { get :: s -> a
  , set :: s -> a -> s
  }

-- As a type alias for a tuple:
type GetSetLens s a = (s -> a, s -> a -> s)

-- Factor out the common 's' parameter:
type GetSetLens s a = s -> (a, a -> s)

class Uniplate a where
  uniplate :: GetSetLens a [a]
```

The example `Uniplate` instance shows us that this lens requires careful use: we must return a list of exactly the same length as the one we are given. Now that we’ve noticed a connection between `Uniplate` and lenses, is there a better optic we could use? Yes — traversals are optics that focus zero or more targets, so we could rebuild the `uniplate` library on top of an operation that provides a `Traversal' a a`. This is what `lens` does with `Control.Lens.Plated`:

```
class Plated a where
plate :: Traversal' a a
```
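A hand-written sketch (not from the `lens` docs) may make this concrete; here is a Plated-style traversal for the earlier `Expr` type, with `Traversal'` expanded to its van Laarhoven form so no library is needed:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

type Traversal' s a = forall f. Applicative f => (a -> f a) -> s -> f s

data Expr = Lit Int | Negate Expr | Add Expr Expr
  deriving (Show, Eq)

-- Visit each immediate Expr child exactly once; compare with the
-- list-based Uniplate instance, which had to track list lengths.
plateExpr :: Traversal' Expr Expr
plateExpr _ (Lit i)     = pure (Lit i)
plateExpr f (Negate e)  = Negate <$> f e
plateExpr f (Add e1 e2) = Add <$> f e1 <*> f e2
```

Running it through `Identity` rewrites the immediate children: `runIdentity (plateExpr (Identity . rewrite) e)`.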

If you are unable to define a `Plated` instance on a type (e.g., you do not want to introduce an orphan instance on a type you do not own), `lens` also provides a helper, `uniplate :: Data a => Traversal' a a`. Interestingly, `lens` also provides a `partsOf` combinator which collects the foci of an optic into a list:

```
-- Usable as:
partsOf :: Iso' s a -> Lens' s [a]
partsOf :: Lens' s a -> Lens' s [a]
partsOf :: Traversal' s a -> Lens' s [a]
partsOf :: Fold s a -> Getter s [a]
partsOf :: Getter s a -> Getter s [a]
-- The real type signature:
partsOf :: Functor f => Traversing (->) f s t a a -> LensLike f s t [a] [a]
```

Its haddock even says that it “resembles an early version of the `uniplate` (or `biplate`) type” and that “you really should try to maintain the invariant of the number of children in the list”.

And that brings us full circle; we can get a van Laarhoven version of our original `uniplate` lens using `Data.Data.Lens.uniplate`:

```
Data.Data.Lens.uniplate         :: Data a => Traversal' a a
partsOf Data.Data.Lens.uniplate :: Data a => Lens' a [a]
```

This is one of my favourite things about programming in Haskell: seeing that library authors have carefully refined concepts like “view the self-similar children of a structure” into ever more powerful and composable forms, and being able to notice the different stages in that evolution.

]]>`<conio.h>`, and he spent the rest of the term building and tweaking a small text-mode dungeon crawler.

Many new Haskellers make it through initial material (everything up to and including the `Monad` typeclass, let’s say), write a couple of “Hello, world!”-tier projects that use the `IO` type, but struggle to make the jump to industrial libraries and/or find projects that excite them. I think text-mode games can grow very smoothly alongside a programmer learning a new language, so here’s some thoughts on how to get started, how you might extend a game, and some advice for Haskell specifically.

A text-mode dungeon crawler can start very small. My friend began with a core encounter loop, which was very much like a Pokémon battle: the player was placed into combat with a monster, given a choice between attacking and fleeing, and repeated this loop until either the player ran off or one defeated the other. You could imagine it looking something like:

```
There is a goblin in front of you.
You can ATTACK or RUN. What do you do?
[HP 98/100]> attack
You hit the goblin for 5 damage!
The goblin hits you for 7 damage!
There is a goblin in front of you.
You can ATTACK or RUN. What do you do?
[HP 91/100]> run
Okay, coward! See you later.
```

In Haskell, we might manually pass state between all our functions, and that state could be as simple as:

```
data GameState = GameState
  { playerHP :: Int
  , monsterHP :: Int
  }
```
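A sketch of how that first loop might look, threading `GameState` by hand; the damage numbers and messages are made up to match the transcript above:

```haskell
data GameState = GameState
  { playerHP :: Int
  , monsterHP :: Int
  } deriving (Show, Eq)

-- One pure round of combat: fixed damage values for now; add
-- randomness later.
attackRound :: GameState -> GameState
attackRound s = s { playerHP = playerHP s - 7, monsterHP = monsterHP s - 5 }

-- The loop threads the state through explicitly.
encounter :: GameState -> IO ()
encounter s
  | playerHP s <= 0 = putStrLn "You have been defeated!"
  | monsterHP s <= 0 = putStrLn "You have defeated the goblin!"
  | otherwise = do
      putStrLn "There is a goblin in front of you."
      putStrLn "You can ATTACK or RUN. What do you do?"
      command <- getLine
      case command of
        "attack" -> do
          putStrLn "You hit the goblin for 5 damage!"
          putStrLn "The goblin hits you for 7 damage!"
          encounter (attackRound s)
        "run" -> putStrLn "Okay, coward! See you later."
        _ -> encounter s
```

Keeping `attackRound` pure makes it easy to test without running the whole loop.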

Once this is working, there are a lot of ways to extend it. Some ideas of things to add:

Character generation:

- Begin with something simple, like just giving your fighter a name.
- Add stats.
- Add skills.
- Add classes. Fighter/Rogue/Magic User is a classic split for a reason.

Randomness. Pretty much anything can be made more interesting with randomness:

- Chance to hit
- Damage values
- Player stats
- Monster stats
- Gold drops

Fight a gauntlet of monsters, until the player runs out of HP.

- Track high scores during a session.
- Track high scores between sessions, by writing them to a file.

Have the player visit a town between fights. This makes the game switch between (at least) two modes: fighting and shopping.

- There won’t be much to do in town at first, but some easy options are “buy healing” and “deposit gold”.
- Once you have items in your game, add an item shop and an item stash.

Items:

- Simple consumables (like healing potions or food) are a great place to start.
- An equipment system can be as simple as “which weapon are you taking into the next fight?”.

Skills and Spells:

- A skill or magic system opens up the player’s options beyond “fight” and “run”, making each combat round much more interesting.

Have more types of things (monsters, items, spells, &c.).

- Configure this at first with a simple data structure in one of your modules.
- Later on, you might want to try reading it from a file.

Maps:

- A simpler step before a full world map is to run each combat in a generated arena.
- Maps add all sorts of new things to hack on: pathfinding algorithms, data structures, graph representation, procedural generation, terrain, etc.

On the Haskell side, your goal should be to keep things as simple as possible. A big ball of `IO` with `do`-expressions everywhere is *completely fine* if it keeps you hacking on and extending your game. Don’t look at the dizzying array of advanced Haskell features, libraries, and techniques; wait until what you have stops scaling and *only then* look for solutions. Still, some Haskell-specific ideas might be helpful:

- Start by passing your `GameState` in and out of functions manually. When this gets annoying, look at structuring your game around a `StateT GameState IO` monad.
  - When that gets annoying (maybe you’re sick of writing `lift`, maybe you want to test stateful computations that don’t need to do I/O), consider `mtl` and structuring your program around `MonadState GameState m` constraints.
- When your “ball of `IO` mud” gets too big to handle, start extracting pure functions from it. Once you have some `IO` actions and some pure functions, that’s a great time to practice using the `Functor`, `Applicative` and `Monad` operators to weave the two worlds together.
  - Set up `hlint` at this point, as its suggestions are designed to help you recognise common patterns:

    ```
    -- Actual hlint output
    Found:
      do x <- m
         pure (g x)
    Perhaps:
      do g <$> m
    ```

  - Once you have a decent number of pure functions kicking around, your game is probably so big that you can no longer test it in a single sitting. This is a good point to start setting up tests - I like the `tasty` library to organise tests into groups, and `tasty-hunit` for actual unit tests.
- A “command parser” like this is more than enough at first:

  ```
  playerCommand :: GameState -> IO GameState
  playerCommand s = do
    putStrLn "What do you do?"
    line <- getLine
    case words line of
      ["attack"] -> attack s
      ["run"] -> run s
      _ -> do
        putStrLn "I have no idea what that means."
        playerCommand s
  ```

  Later on, you might want to parse to a concrete command type. This gives you a split like:

  ```
  data Command = Attack | Run

  parseCommand :: String -> Maybe Command
  getCommand :: IO (Maybe Command) -- uses 'parseCommand' internally
  runCommand :: Command -> GameState -> IO GameState
  ```

  Even later on, you might want to use a parser combinator library to parse player commands.
- When your command lines become complicated, that might be a good time to learn the `haskeline` library. You can then add command history, better editing, and command completion to your game’s interface.

- Reading from data files doesn’t need fancy parsing either. Colon-separated fields can get you a long way — here’s how one might configure a list of monsters:

  ```
  # Name:MinHP:MaxHP:MinDamage:MaxDamage
  Goblin:2:5:1:4
  Ogre:8:15:4:8
  ```

  The parsing procedure is really simple:

  - Split the file into lines.
  - Ignore any line that’s blank or begins with `'#'`.
  - Split the remaining lines on `':'`.
  - Parse the lines into records and return them (hint: `traverse`).
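The four steps above can be sketched directly; the `Monster` record and its field names are made up for illustration:

```haskell
import Data.Char (isSpace)
import Data.List (isPrefixOf)
import Text.Read (readMaybe)

-- A hypothetical record matching the colon-separated format above.
data Monster = Monster
  { monsterName :: String
  , minHP, maxHP, minDamage, maxDamage :: Int
  } deriving (Show, Eq)

-- Split a string on a separator character.
splitOn :: Char -> String -> [String]
splitOn sep s = case break (== sep) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn sep rest

-- Parse one colon-separated line into a record.
parseMonster :: String -> Maybe Monster
parseMonster line = case splitOn ':' line of
  [n, a, b, c, d] ->
    Monster n <$> readMaybe a <*> readMaybe b <*> readMaybe c <*> readMaybe d
  _ -> Nothing

-- Split into lines, drop blanks and comments, then parse each line.
parseMonsters :: String -> Maybe [Monster]
parseMonsters = traverse parseMonster . filter keep . lines
  where keep l = not (all isSpace l) && not ("#" `isPrefixOf` l)
```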

- You might eventually want to try reading your configuration from JSON files (using `aeson`), Dhall files, or an SQLite database.
- If passing your configuration everywhere becomes annoying, think about adding a `ReaderT Config` layer to your monad stack.
- Ignore the `String` vs. `Text` vs. `ByteString` stuff until something makes you care. `String` is fine to get started, and when it gets annoying (e.g., you start using libraries that work over `Text`, which most of them do), turn on `OverloadedStrings` and switch your program over to use `Text`.
- A bit of colour can give a game — even a text-mode one — a lot of “pop”.
  - After you’ve got your codebase using `Text`, try the `safe-coloured-text` library to add a bit of colour.
  - Many modern terminals support emoji. While I’m not an emoji fan (that’s a rant for another time), it’s an easy way to add some pictures to your game.
- Don’t worry about `lens`; just use basic record syntax. Once you get frustrated by the record system, look at using GHC’s record extensions like `DuplicateRecordFields`, `NamedFieldPuns` and `RecordWildCards`.
  - Once you get sick of writing deeply nested record updates, only then consider `lens`, and only as much as you need to view/modify/update nested records in an ergonomic way. Remember, the point is to keep moving!
A project like this can grow as far as you want, amusing you for a weekend or keeping you tinkering for years. Text-mode games are an exceptionally flexible base on which to try out new languages or techniques. Start small, enjoy that incremental progress, and use the problems you actually hit to help you choose what to learn about.

]]>Scripting a larger program is one of the few areas where Haskell struggles. Despite some very impressive efforts like `dyre`, I think it’s a bit much to require a working Haskell toolchain and a “dump state, exec, load state” cycle just to make a program scriptable. This post discusses why Lua is a great scripting *runtime* for compiled programs, its shortcomings as a scripting *language*, how Fennel addresses many of these shortcomings, and demonstrates a Haskell program calling Fennel code which calls back into Haskell functions.

Lua is a weakly-typed imperative programming language designed to be embedded into larger programs. It has a lot of attractive features:

- It’s written in the common subset of ANSI C and C++, so it compiles on nearly anything;
- Its C API is quite simple, so it’s not too hard to drive from the host program;
- Hot code loading allows for interactive tweaks to a running program;
- It runs reasonably quickly;
- A fresh runtime has almost nothing installed in it, so you have a fighting chance of hardening it to make a “safe” sandbox (though it’s still pretty hard); and
- It has pretty good semantics, including lexical scoping and tail-call optimisation.

Many of these features are inherent to the runtime and not the language. Which is useful, because the language has some undesirable features:

- If you forget to write `local` when assigning a variable, you set a global variable;
- There is nothing like destructuring/pattern-matching in the language;
- Variables are mutable by default;
- Sequences are indexed starting at 1; and
- It is an imperative language, which is an unintuitive paradigm for solving real-world problems.

Fennel is a Lisp which compiles to Lua, and draws some syntactic inspiration from Clojure. For example, the classic `factorial` function in Fennel:

```
(fn factorial [n]
  (match n
    0 1
    _ (* n (factorial (- n 1)))))
```

Would compile to this Lua code:

```
local function factorial(n)
  local _1_ = n
  if (_1_ == 0) then
    return 1
  elseif true then
    local _ = _1_
    return (n * factorial((n - 1)))
  else
    return nil
  end
end
return factorial
```

The language has been designed to smoothly interoperate with existing Lua code, while also providing convenience features you’d expect from a Lisp (destructuring binds, macros, etc.).

The compiler is provided in two forms: an ahead-of-time compiler which translates `.fnl` files to `.lua`; and a runtime compiler that can hook itself into Lua’s package search mechanism.

`HsLua` (old site) is a fully-featured set of Haskell bindings to Lua. The most recent versions bundle Lua 5.4, so it is both mature and up-to-date. The `lua` package implements low-level FFI bindings to Lua’s C API, but the `hslua` package is probably the one you want. It provides idiomatic wrappers for the low-level functions as well as re-exports from all the other `hslua-*` packages, creating an all-in-one import for most common cases. (Use Hoogle to find out which package actually defines a function or type.)

Our goal is to put these pieces together in a way that demonstrates how a larger program might use an embedded interpreter: a Haskell program with a Lua runtime which can load Fennel files, calling back into Haskell functions. Almost all of our work will be performed inside a `Lua` monad, which creates and destroys an interpreter for us.

The first task is to implement a Lua module in Haskell. Since computing large factorials is the only thing Haskell is any good at, let’s export that capability to Lua:

```
import qualified HsLua as L

-- | The 'L.DocumentedFunction' machinery is from "hslua-packaging";
-- we can provide to Lua any function returning @'LuaE' e a@, so long
-- as we can provide a 'Peeker' for each argument and a 'Pusher' for
-- each result.
factorial :: L.DocumentedFunction e
factorial =
  L.defun "factorial"
    ### L.liftPure (\n -> product [1 .. n])
    <#> L.integralParam "n" "input number"
    =#> L.integralResult "factorial of n"
    #? "Computes the factorial of an integer."
    `L.since` makeVersion [1, 0, 0]

-- | Also using "hslua-packaging", this registers our
-- (single-function) module into Lua's @package.preload@ table,
-- setting things up such that the first time
-- @require('my-haskell-module')@ is called, the module will be
-- assembled, stored in @package.loaded['my-haskell-module']@ and
-- returned.
--
-- This lazy loading can help with the startup time of larger programs.
--
-- /See:/ http://www.lua.org/manual/5.4/manual.html#pdf-require
registerHaskellModule :: Lua ()
registerHaskellModule =
  L.preloadModule
    L.Module
      { L.moduleName = "my-haskell-module",
        L.moduleDescription = "Functions from Haskell",
        L.moduleFields = [],
        L.moduleFunctions = [factorial],
        L.moduleOperations = []
      }
```

To add Fennel to our Lua runtime, we need to download and unpack a Fennel tarball, use the `file-embed` library to store `fennel.lua` inside our Haskell binary (it is a mere 200K, less if compressed), load it, and install it:

```
fennelLua :: ByteString
fennelLua = $(embedFile "fennel.lua")

-- | Load our embedded copy of @fennel.lua@ and register it in
-- @package.searchers@.
registerFennel :: Lua ()
registerFennel = do
  L.preloadhs "fennel" $ L.NumResults 1 <$ L.dostring fennelLua
  -- It's often easier to run small strings of Lua code than to
  -- manipulate the runtime's stack with the C API.
  void $ L.dostring "require('fennel').install()"
```

Fennel versions older than 1.2.1 ask you to install the searcher manually:

```
registerFennel :: Lua ()
registerFennel = do
  L.preloadhs "fennel" $ L.NumResults 1 <$ L.dostring fennelLua
  void $
    L.dostring
      "local fennel = require('fennel');\
      \table.insert(package.searchers, fennel.searcher)"
```

We also need some Fennel code to run. We want to be able to change which factorials we compute without rebuilding all the Haskell, so `fennel-demo.fnl` imports our Haskell module and builds a table containing a sequence of factorials:

```
(local hs (require :my-haskell-module))

(local factorials [])
(for [i 1 10]
  (table.insert factorials (hs.factorial i)))

{ :factorials factorials }
```

`main` is all that’s left. Populate a Lua runtime and ask it for our value, then bring it across to Haskell and print it out:

```
main :: IO ()
main = do
  luaRes <- L.run $ do
    -- Add Lua's (tiny) standard library
    L.openlibs
    registerHaskellModule
    registerFennel
    -- Call into our fennel module and return a value from it
    L.dostring "local f = require('fennel-demo'); return f.factorials"
    -- From hslua-classes: use the Peekable typeclass to unmarshal and
    -- pop the sequence left on the stack
    L.popValue
  print (luaRes :: [Int])
```

If you want to see it running for yourself, the code is at https://git.sr.ht/~jack/hslua-fennel-demo .

I’d previously played around with embedding Lua as a scripting language in my MudCore project, but the limitations of the language made me disinclined to actually build something on top of the core. Fennel is an interesting little language that’s a lot more appealing to me, and I’m keen to find a use for it to write some scriptable Haskell programs.

I’m also pretty impressed by the thought that’s gone into the HsLua libraries: there are a lot of facilities that make the language boundary fairly convenient to cross, and it doesn’t take long to get a sense of which package actually provides the tool you’re looking for.

]]>`Foldable` instance for `Maybe`:
```
-- This is reimplemented all over the place as `whenJust`.
-- Pass our `a` to the function, if we have one,
-- and ignore its result; return `pure ()` otherwise.
for_        :: (Foldable t, Applicative f) => t a -> (a -> f b) -> f ()
for_ @Maybe :: Applicative f => Maybe a -> (a -> f b) -> f ()

-- Equivalent to `Data.Maybe.fromMaybe mempty`:
-- Return the `m` if we have one; otherwise, return `mempty`.
fold        :: (Foldable t, Monoid m) => t m -> m
fold @Maybe :: Monoid m => Maybe m -> m

-- Equivalent to `maybe mempty`:
-- Pass our `a` to our function, if we have one;
-- otherwise, return `mempty`.
foldMap        :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
foldMap @Maybe :: Monoid m => (a -> m) -> Maybe a -> m
```

Some of these confuse people more than I think they should, so this post aims to help with that. Instead of looking at `Maybe a` as “just-`a`-or-nothing”, the key is to become comfortable with `Maybe` as “list of zero or one elements”. We’ll also go looking for other types which can be seen as “lists of some number of elements”.

In Haskell, it’s often not practical or ergonomic to track exact lengths of lists at the type level. Let’s instead reflect on some ancient wisdom, and think about lists that have at {least,most} {zero,one,infinitely many} elements. There are six sensible cases, and most of them exist in `base`:

- `Proxy` is a list of exactly zero elements,
- `Maybe` is a list of exactly zero or one elements,
- `[]` (list) is a list of at least zero and at most infinity elements,
- `Identity` is a list of exactly one element,
- `NonEmpty` is a list of at least one and at most infinity elements, and
- Infinite streams are lists of infinitely many elements.

The “zero/one/infinity” principle comes from Dutch computer pioneer Professor Willem van der Poel, and is preserved on the C2 Wiki via the Jargon File:

> Allow none of foo, one of foo, or any number of foo. … The logic behind this rule is that there are often situations where it makes clear sense to allow one of something instead of none. However, if one decides to go further and allow N (for N > 1), then why not N+1? And if N+1, then why not N+2, and so on? Once above 1, there’s no excuse not to allow any N; hence, infinity.

Having paid respects to our forefathers, let us now examine each data type in turn, in an order that lets us use our existing intuitions about lists.

`NonEmpty`

The `NonEmpty` type is defined in `Data.List.NonEmpty`:

`data NonEmpty a = a :| [a]`

This has been in `base` since `base-4.9.0.0` (GHC 8.0.1). Knowing that your list has at least one element gives you a lot of power:

- A whole slew of functions which may fail on `[]` are much safer on `NonEmpty`: `head`, `tail`, `minimum`, `maximum`, `foldr1`, `foldl1`, &c.
- All of the `Foldable` operations can work over `Semigroup` instead of `Monoid`, as witnessed by the `Data.Semigroup.Foldable.Foldable1` class in the `semigroupoids` package: `foldMap1 :: (Foldable1 t, Semigroup g) => (a -> g) -> t a -> g`
- Similarly, functions from `Data.Foldable` that needed `Applicative` or `Alternative` have weakened versions that only need `Apply` or `Alt`, respectively.
- You get a `Comonad` instance, if you’re into that sort of thing.
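A small demonstration of the first point, assuming nothing beyond `base`: both calls below are total, because the type guarantees at least one element.

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

digits :: NonEmpty Int
digits = 3 :| [1, 4, 1, 5]

-- No empty-list case exists, so these cannot crash:
first :: Int
first = NE.head digits

smallest :: Int
smallest = minimum digits
```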

It’s not in `base`, but it’s easy to write a type representing an infinite stream of `a`s, which is the same as `[]` with the “nil” case removed:

`data Stream a = Stream a (Stream a)`

- Since you have at least one element, you get a lot of the same things as `NonEmpty`: safe `head`/`tail`/…, instances for `Foldable1` and `Comonad`, &c.
- You have to be careful that the semigroups/monoids you use when folding are lazy enough to terminate.
- For `take :: Integer -> Stream a -> [a]`, you gain the property that `length (take n s) = n` for any stream `s`, which is cute.
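A runnable sketch of those last two points (I use `Int` and the name `takeS` to avoid clashing with `Prelude.take`; the document's version uses `Integer`):

```haskell
-- An infinite stream: a list with the "nil" case removed.
data Stream a = Stream a (Stream a)

-- Total: a Stream always has enough elements, so the result
-- has exactly n elements (for non-negative n).
takeS :: Int -> Stream a -> [a]
takeS n (Stream x rest)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) rest

-- The natural numbers as a stream.
nats :: Stream Integer
nats = go 0 where go n = Stream n (go (n + 1))
```

Note that `takeS` never needs an "empty stream" case, which is exactly where `Prelude.take` on `[]` can come up short.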

## `Maybe`

```haskell
data Maybe a = Nothing | Just a
```

`Maybe a` is conventionally taught as a better answer to `NULL` — “either you have the thing, or you don’t” — but it’s also valid to consider `Maybe a` a list of exactly zero or one `a`s.

This is a really useful perspective to stick in your mind, especially when writing `do`-expressions: it shows that “do I have this thing? if so, do `x`” and “do a thing with each element of the collection (a `for`-`each` loop)” are in fact the same concept.
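One way to see this concretely is with `for_` from `Data.Foldable`, which works over `Maybe` and `[]` alike — a small demonstration of my own, not from the original:

```haskell
import Data.Foldable (for_, toList)

-- “If the value is present, do x” and “for each element, do x”
-- are literally the same call once Maybe is a 0-or-1 element list.
main :: IO ()
main = do
  for_ (Just "present") putStrLn           -- body runs once
  for_ (Nothing :: Maybe String) putStrLn  -- body runs zero times
  for_ ["a", "b"] putStrLn                 -- body runs once per element
```

`toList` makes the correspondence explicit: `toList (Just x) = [x]` and `toList Nothing = []`.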

## `Identity`

The `Identity` type is defined in `Data.Functor.Identity`:

```haskell
newtype Identity a = Identity { runIdentity :: a }
```

This has been in `base` since `base-4.8.0.0` (GHC 7.10.1). While it’s not a very exciting type on its own, it’s useful as an “add no special structure” option when you need to provide a “thing” of kind `Type -> Type`.

## `Proxy`

The `Proxy` type is defined in `Data.Proxy`:

```haskell
data Proxy a = Proxy
```

This has been in `base` since `base-4.7.0.0` (GHC 7.8.1). It was first introduced as a safe way to pin polymorphic functions to specific types. For example, `servant-server` uses it in the `serve` function to select the type of the API being served:

```haskell
serve :: HasServer api '[] => Proxy api -> Server api -> Application
```

`Proxy` has (trivial) instances for `Foldable`, `Traversable`, and all kinds of other typeclasses, which makes it useful as a “do nothing” option if you need to provide a “thing” of kind `Type -> Type`.
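Under the list reading, `Identity` and `Proxy` are the degenerate containers holding exactly one and exactly zero elements, and their `Foldable` instances say so directly (the names `exactlyOne`/`exactlyZero` are mine):

```haskell
import Data.Foldable (toList)
import Data.Functor.Identity (Identity (..))
import Data.Proxy (Proxy (..))

-- Identity always holds exactly one element...
exactlyOne :: [Int]
exactlyOne = toList (Identity 42)

-- ...and Proxy always holds exactly zero.
exactlyZero :: [Int]
exactlyZero = toList (Proxy :: Proxy Int)
```

This is the observation the next section leans on: pick the `Foldable` and you pick the guarantee.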

## `Foldable`s

Once you’re used to thinking of all these types as lists, you can parameterise your structures over some `Foldable` and get some useful results. Here is a recent example from my work on `amazonka`, the Haskell AWS SDK:

To make a request to AWS, you need to provide an environment, which almost always contains credentials used to sign requests. (It’s possible to exchange a web identity token for temporary credentials, which involves an unsigned request.) We seek a solution with the following properties:

1. Type-level information about whether or not we have credentials,
2. Library users should statically know whether their `Env` has credentials or not,
3. Library users should statically know whether a function requires credentials or is indifferent to their presence, and
4. Not too many type system extensions.

A `Maybe Auth` inside `Env` would satisfy none of the first three properties. The solution currently in `amazonka` looks something like this:

1. Parameterise the `Env` by some `withAuth :: Type -> Type`, and set up type aliases:

   ```haskell
   data Env' withAuth = Env
     { auth :: withAuth Auth
       -- other fields omitted
     }

   type Env = Env' Identity
   type EnvNoAuth = Env' Proxy
   ```

2. Return `Env` from functions which guarantee the presence of credentials, and `EnvNoAuth` for functions which lack them. This gives us property (2).

3. In function arguments, specify the environment we want as follows:

   - accept `Env` where we require credentials,
   - `Env' withAuth` where we are indifferent to their presence, or
   - `Foldable withAuth => Env' withAuth` where we want to branch on whether or not they’re available.

   This gives us property (3).

4. If we don’t know the type of `withAuth`, we can use `Foldable` to give us the “first” `Auth`, if one exists:

   ```haskell
   -- Essentially `headMay :: [a] -> Maybe a`, but written as a
   -- `foldr` from the `Foldable` typeclass.
   envAuthMaybe :: Foldable withAuth => Env' withAuth -> Maybe Auth
   envAuthMaybe = foldr (const . Just) Nothing . auth
   ```
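Putting the pieces together, here is a compilable sketch of the pattern. The `Auth` payload and the `Env` field set are placeholders of mine, not amazonka’s real definitions:

```haskell
import Data.Functor.Identity (Identity (..))
import Data.Proxy (Proxy (..))

-- Placeholder credential type, standing in for amazonka's real Auth.
newtype Auth = Auth String
  deriving (Eq, Show)

-- The environment, parameterised over its credential container.
newtype Env' withAuth = Env { auth :: withAuth Auth }

type Env = Env' Identity    -- credentials guaranteed present
type EnvNoAuth = Env' Proxy -- credentials guaranteed absent

-- Works for any withAuth we can fold over: Identity yields the
-- credential, Proxy yields Nothing.
envAuthMaybe :: Foldable withAuth => Env' withAuth -> Maybe Auth
envAuthMaybe = foldr (const . Just) Nothing . auth

signedEnv :: Env
signedEnv = Env (Identity (Auth "example-key"))

unsignedEnv :: EnvNoAuth
unsignedEnv = Env Proxy
```

A function that must sign requests takes `Env`; one that never signs takes `EnvNoAuth`; one that branches takes `Foldable withAuth => Env' withAuth` and calls `envAuthMaybe`.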

If you squint, you might be able to see that instead of storing a `Maybe` inside the `Env` structure, we’ve done so at the type level. Instead of the value constructors `Just` and `Nothing`, we have the `Foldable`s `Identity` and `Proxy`.

**Exercise:** Write a pair of functions which witness the isomorphism between `Maybe a` and `Sum Proxy Identity a`. (`Sum` comes from `Data.Functor.Sum`.)

```haskell
{-# LANGUAGE LambdaCase #-}

import Data.Functor.Identity (Identity (..))
import Data.Functor.Sum (Sum (..))
import Data.Proxy (Proxy (..))

from :: Maybe a -> Sum Proxy Identity a
from = maybe (InL Proxy) (InR . Identity)

to :: Sum Proxy Identity a -> Maybe a
to = \case
  InL Proxy -> Nothing
  InR (Identity a) -> Just a
```
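To convince yourself that such a pair really is an isomorphism, check that the two compositions are identities. A quick sketch, repeating the definitions so the snippet stands alone:

```haskell
{-# LANGUAGE LambdaCase #-}

import Data.Functor.Identity (Identity (..))
import Data.Functor.Sum (Sum (..))
import Data.Proxy (Proxy (..))

from :: Maybe a -> Sum Proxy Identity a
from = maybe (InL Proxy) (InR . Identity)

to :: Sum Proxy Identity a -> Maybe a
to = \case
  InL Proxy -> Nothing
  InR (Identity a) -> Just a

-- `to . from` should be `id` on Maybe; checking both constructors
-- covers every shape a Maybe can take.
roundTrips :: Eq a => Maybe a -> Bool
roundTrips x = to (from x) == x
```

The other direction, `from . to = id`, follows by the same case analysis on `InL`/`InR`.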

- For nearly any sensible combination of “list with at {least,most} {zero,one,infinitely many} elements”, there exists a type in `base` whose structure ensures those guarantees.
- If you internalise this idea for `Maybe` in particular, you’ll see that many ad-hoc “handle the `Nothing`” operations can be replaced with functions that work on any `Foldable`.
- By parameterising fields in your data types over some `Foldable f`, you can offer changing guarantees about what values are available when, without needing type-level programming.