zio-properties: A ZIO alternative to Spring Properties

EDIT: zio-properties 1.0 is now available on Maven Central
"com.adrianfilip" %% "zio-properties" % "1.0"

I like versatility when configuring application properties.

For instance: in Kubernetes I use environment variables, while locally or in a local Docker container I may use property files, environment variables, command line arguments, system properties, or a mix of any of them.

There are also many benefits to a simple, easy way of loading properties that applies across multiple use cases.

Versatility and simplicity in this case can be reduced to:

  • multiple sources
  • property resolution order

In order to achieve that I have built a library called zio-properties (on top of zio-config and magnolia) that checks multiple sources and retrieves properties based on a standard resolution order.

With one line of code you can now create a ZLayer that loads your properties from 5 default sources.

where AppProperties is the case class for your properties.

For this example let’s define it as:
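A hypothetical shape for AppProperties (the field names and nesting here are assumptions for illustration, not the library's actual example):

```scala
// Hypothetical properties model (names assumed for illustration).
final case class DbProperties(host: String, port: Int)
final case class AppProperties(appName: String, db: DbProperties)
```

zio-properties can derive the configuration description for a case class like this automatically, since it is built on zio-config and magnolia.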

The property sources used by zio-properties are (in the order of their resolution):

  1. Command line arguments
  2. System properties
  3. Environment variables
  4. HOCON files
    1. Looks for application.conf if the hoconFile and profile properties are not present in any previous source and application.conf is present on the classpath
    2. if the hoconFile property is present in any previous source – that file will be used (and profile is ignored for HOCON file resolution). Fails if the file is not found on the classpath.
    3. if the hoconFile property is not present in any previous source and the profile property is not present in any previous source – application.conf will be used if present (does not fail if the file is not found on the classpath)
    4. if the profile property is present
      1. and profile.lowercase == "prod" or profile.lowercase == "" – application.conf will be used if present (does not fail if the file is not found on the classpath); otherwise
      2. application-${profile.lowercase}.conf will be used if present (does not fail if the file is not found on the classpath)
  5. Property files
    1. Looks for application.properties if the propertiesFile and profile properties are not present in any previous source and application.properties is present on the classpath
    2. if the propertiesFile property is present in any previous source – that file will be used (and profile is ignored for properties file resolution). Fails if the file is not found on the classpath.
    3. if the propertiesFile property is not present in any previous source and the profile property is not present in any previous source – application.properties will be used if present (does not fail if the file is not found on the classpath)
    4. if the profile property is present
      1. and profile.lowercase == "prod" or profile.lowercase == "" – application.properties will be used if present (does not fail if the file is not found on the classpath); otherwise
      2. application-${profile.lowercase}.properties will be used if present (does not fail if the file is not found on the classpath)
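The file-name selection rules above can be sketched as a small function (simplified: classpath-existence checks and the failure cases are omitted, and hoconFileName is a name made up for this sketch):

```scala
// Simplified sketch of the HOCON file-name resolution rules.
// Existence checks on the classpath and failure behavior are omitted.
def hoconFileName(hoconFile: Option[String], profile: Option[String]): String =
  hoconFile match {
    case Some(file) => file // explicit hoconFile wins; profile is ignored
    case None =>
      profile.map(_.toLowerCase) match {
        case None | Some("") | Some("prod") => "application.conf"
        case Some(p)                        => s"application-$p.conf"
      }
  }
```

The same shape applies to property files, with application.properties in place of application.conf.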

zio-properties will look in the property sources based on resolution order and will use the value from the first place where it finds it.

For instance (using the above AppProperties), take the scenario where
application.properties contains db.port=3306,
the environment variables contain db_port=6000,
and the property is not mentioned anywhere else.
zio-properties will use 6000, because environment variables are ahead of property files in the resolution order.
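The resolution itself is just a first-match lookup across ordered sources. A simplified, plain-Scala sketch of the idea (assuming keys like db_port and db.port are normalized to a common form):

```scala
// Sketch of first-match resolution: sources are consulted in order and the
// first one that defines the (normalized) key wins.
def resolve(key: String, sources: List[Map[String, String]]): Option[String] =
  sources.collectFirst { case source if source.contains(key) => source(key) }

// db.port is assumed normalized to the same key across all sources.
val commandLineArgs = Map.empty[String, String]
val systemProps     = Map.empty[String, String]
val envVars         = Map("db.port" -> "6000")
val propertiesFile  = Map("db.port" -> "3306")

val port = resolve("db.port", List(commandLineArgs, systemProps, envVars, propertiesFile))
// port == Some("6000"): environment variables are ahead of property files
```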

How can you use it?

As you can see, in a few lines of code you have created your AppProperties and are ready to use them in your application. Also, because the AppProperties are provided from a Layer, you can specify them as the R in ZIO[R, E, A] for your effects, avoiding having to pass them around as parameters. That looks like this:

You can find the entire zio-properties project (with example and tests) on my Github: https://github.com/adrianfilip/zio-properties.

I recommend you also check out zio-config (@afsalt2, @jdegoes) and magnolia (@propensive).

EDIT: Now it also supports HOCON.

EDIT2: Available now on Maven Central: "com.adrianfilip" %% "zio-properties" % "1.0"

You can find me on:

Moving From Kotlin + Spring Reactor + Arrow to Scala + ZIO

Several years ago, I was developing an application that dealt with money. It handled loans, deposits, monthly payments, and reports. Unlike other apps, where eventual consistency and stale data may not be an issue, here one slip could lead to financial ruin for the company.

Computing the distribution of a client payment depended on a huge number of factors, including the accounts, the current customer rank, the current personalized interests established with the company, the current global rates, the client loan status, and sometimes other factors!

I was terrified just contemplating the ways in which the application could go wrong, most due to race conditions:

  • What if any of those factors change, as the distribution is computed?
  • What if the customer forgot that their spouse said they would make the payment when the office opens, and they now both make the payment at the same time from different offices?
  • What if the operational costs change because the manager corrected for inflation (or some other reason) at the same time as the payment is made?
  • What if the client’s rank is increased because of a promotion at the same time as the client’s payment?
  • What if the client and their spouse both want to liquidate the same deposit at the exact same time at two different offices?

There’s a potential for many things to go wrong, including the dreaded double-spend!

If you’ve read my Scale Aware Architecture article, you may remember that I mentioned my solution to this problem in Kotlin, Spring Reactor, and Arrow: a novel MultiLaneSequencer concurrency structure, designed precisely to solve my problem.

The MultiLaneSequencer allows you to enforce at runtime the order in which all received requests are processed, with user-specified guarantees on what is permitted to be concurrent, and what is required to be sequential.

MultiLaneSequencer allows us to handle concurrent and sequential requests across different lanes.

Given the following lanes and requests:
   Lanes:       1 2 3 4
t0 Request 1:   X   X
t1 Request 2:   X
t2 Request 3:     X   X
t3 Request 4:   X X

where t0 < t1 < t2 < t3

The order in which the requests above are processed is:

  • Request 1 and Request 3 can be processed concurrently because their lanes are free
  • Request 2 is queued up behind Request 1
  • Request 4 is queued up behind Request 2 and Request 3, so until they are both processed it just has to wait in a non-blocking fashion (it should not block threads, but should wait asynchronously).

This requires tricky logic to get right, with severe consequences for any bugs—bugs that will themselves be tricky to find!

In the rest of the article, I will show you the Kotlin solution I came up with at that time, and then compare it with the Scala + ZIO solution I have since switched to.

The Kotlin Solution

First of all, my external API is a function that takes a Set<Lane> and a program (IO<A>) as input parameters and returns an IO<CompletableFuture<A>>: a description of a program that does the same thing, but this time in a laning context. In other words, I return a program that describes an async effect in a laning context.

The external API looks like this:

After some false starts and throwaway code, I came up with the following solution:

  • A requestResponseEventBus where the requests are published and from which responses are consumed
  • A sequential consumer of requestResponseEventBus that
    • in case it receives RequestMessages, either puts them in a sequential pending state or publishes them to requestsEventBus.
    • in case it receives ResponseMessages, checks whether any requests were waiting for this one to finish; if there were, and they are now eligible for processing, they are published to requestsEventBus.
  • A requestsEventBus for RequestMessage(s) – effects that are ready to be executed are published here
  • A parallel consumer of requestsEventBus, which executes the program in the consumed request and then publishes the RequestResponse to the requestResponseEventBus

Based on this description, I created a model for a Message sum type:

  • One term for requests
  • One term for responses

Then I created a sum type for Lane:

Notice here that you can define lanes at whatever granularity you want. For instance, one lane can be CLIENTS, so that all operations on clients are sequential; but a lane can also target a specific client ("data class CustomLane(val name: String): Lane"), so that operations on the same client stay sequential while operations on different clients can be performed in parallel.

Now that I have the event bus and a functional data model, let’s see what the implementation looks like.

I use two event buses, one for request/responses, and one for requests, with an option for testing:

The sequence operation is implemented as follows:
I subscribe to the requestResponseEventBus bus for the result. Then I publish the request to it. Note that subscribe – publish must be done in this precise order, otherwise the result may already be provided by the time the subscriber is initialised!

That was easy enough. How about subscribeProcessor, the parallel consumer of requestsEventBus, which executes the program in the consumed request and then publishes the RequestResponse to the requestResponseEventBus?

This one is implemented as follows:

The sequential consumer is implemented as follows:


Testing the Kotlin Solution

The trickiest part of the Kotlin solution was figuring out how to test it – creating a realistic test environment and a correct check. Given the nature of the problem, the test needs to verify that the distribution and order of responses on the lanes match the distribution of the requests and respect their global ordering.

Think about it like the horseshoe game: if I throw blue, red, and then yellow, then when I check the stake, I should see only blue, red, yellow – any other combination would represent a failure.

At the time, I tested the code with several orders of magnitude more concurrency than expected real world usage. I did not push further, but there’s no reason why this wouldn’t hold for more concurrency.
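The per-lane ordering check can be sketched in plain Scala (perLaneOrderPreserved is a hypothetical name; the real tests live in the repository):

```scala
// Sketch of the per-lane ordering check: for every lane, the completion
// order of the requests occupying it must equal their submission order.
def perLaneOrderPreserved(
    requests: List[(String, Set[Int])], // (id, lanes) in global submission order
    completions: List[String]           // ids in completion order
): Boolean = {
  val lanesById = requests.toMap
  requests.flatMap(_._2).distinct.forall { lane =>
    val submitted = requests.collect { case (id, lanes) if lanes(lane) => id }
    val completed = completions.filter(id => lanesById(id)(lane))
    submitted == completed
  }
}
```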

The tests I developed, along with the implementation, can be found in the Github repository, also linked at the end of this article.

Why Scala?

Until recently, Kotlin + Spring Reactor + Arrow seemed unbeatable as my “go-to” choice for creating new applications. That stack has a great language (Kotlin), great tooling (IntelliJ), a powerful library for creating reactive applications (Spring Reactor), an answer for functional programming (Arrow), and a large, vibrant community behind it. You can trust it.

Then in 2018, I noticed the IO[E, A] effect from John De Goes. Over time, this effect turned into ZIO[R,E,A] and a great community grew around the data type, along with rapid development of an ecosystem. 

About a year ago, I switched to ZIO for new projects. Since then, I have looked at some of my old projects and begun migrating some of them to Scala + ZIO.

One of those is the MultiLaneSequencer construct.

As you may have noticed, the Kotlin + Spring Reactor + Arrow solution is not necessarily easy or simple. Also as you can see, it’s not fully functional, which limits composability and hampers testability.

Could Scala + ZIO provide a simple and pure functional version?

Let’s find out!

ZIO 101

ZIO provides a data type called “ZIO[R, E, A]” that represents a whole asynchronous, concurrent workflow, which can be run in some environment of type R, might error with some value of type E, and will (hopefully) succeed with a value of type A.

Values with this type are called ZIO effects, and ZIO effects compose in a type safe fashion with other ZIO effects, allowing us to build up big programs out of simple pieces.

ZIO is built on next-generation asynchronous fibers, which allow high-performance and high scalability, without any blocking. ZIO is also packed with data structures that make it easier to build concurrent applications, like async queues, semaphores, and promises.

The most powerful tool in ZIO for building concurrent structures is STM. STM, which stands for Software Transactional Memory, allows building up transactions over shared state. Different fibers can commit different transactions to the same shared state at the same time, and ZIO ensures they are executed with the “ACID” guarantees that databases provide (but without the ‘D’, “durability”).

Because STM is composable and purely functional, it means you can build up larger concurrent structures from smaller ones. Because STM is declarative, it means you never need to use locks or other low-level primitives that are deadlock prone. All STM code is automatically purely asynchronous, and can be safely canceled for timeout purposes.

The Scala Solution

What would a solution that uses STM look like?

For Lane, there wouldn’t be much of a difference besides the syntax that Scala requires for constructing sum types:
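A sketch of what such a Scala sum type could look like (the case names here are assumptions mirroring the Kotlin example):

```scala
// Lane as a Scala sum type: a sealed trait with one case per kind of lane.
sealed trait Lane
object Lane {
  // Coarse-grained lane: all client operations share it (name assumed).
  case object Clients extends Lane
  // Fine-grained lane: one per client, so different clients can proceed
  // in parallel while operations on the same client stay sequential.
  final case class CustomLane(name: String) extends Lane
}
```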

The API for the MultiLaneSequencer still defines an operation that receives a set of Lanes, but the effect being executed and returned is now a ZIO effect:

Note that randomness in generating a globally-unique identifier (a side effect) is also removed by passing the effect id as a parameter:

The implementation I came up with relies heavily on the transactional guarantees of STM:

  • When the MultiLaneSequencer is created, I create a laning map (as a transactional map). This map will be used right before and after an effect is executed, to maintain global ordering.
    • When creating the sequencer, I allow an Option[Recorder], which will be None in production scenarios and Some when testing. When the Recorder is present, it is used to record requests and responses for testing purposes. You can ignore it when looking at the code.
  • Then, when an effect is sequenced, an occupyLanes effect is created that updates the laning map by adding the global identity.
  • The waitUntillFree effect succeeds when the effect being executed is next in line to be processed for all of the lanes it occupies; or, it asynchronously waits and retries when the laningMap is updated.
  • The release effect removes all information about the program from the laningMap.
  • With all those pieces, the solution becomes:
    • update laning map 
    • Wait until next in line, execute the effect, and then cleanup
      • This part uses bracket, which is a method that guarantees that the release is always performed (it’s like try / finally).

There you have it, the full solution in Scala + ZIO—just a few lines of declarative and type-safe code. 

I couldn’t believe the ZIO STM solution is this straightforward!

Testing the Scala Solution

The test for the ZIO solution flowed naturally from the implementation. I was surprised by how composable testing can be and how much control you could have over all aspects, including passage of time. 

Unlike with the Kotlin + Spring Reactor + Arrow tests, having this degree of control over effects made reasoning and testing much easier for me in Scala + ZIO.

Bonus: MultiLaneLocker

A cool thing about the composability of STM and functional programming is that if you rearrange the pieces or remove one, you can still build something useful!

For instance, if I remove the sequentiality part from MultiLaneSequencer, I will have a new construct, let’s call it MultiLaneLocker, which allows me to control concurrent execution based on lanes, but provides no global ordering guarantees.

Practically speaking, this means that given the following situation:

L – lane
P – program
      L1 L2 L3 L4
t0 P1 X     X
t1 P2 X
t2 P3    X     X
t3 P4 X  X
t0 < t1 < t2 < t3

MultiLaneLocker guarantees that:
– P1, P2 and P4 will run one at a time
– P3 and P4 will run one at a time
but it makes no guarantees about the order in which they run.
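For illustration, suppose P1 occupies lanes {L1, L3}, P2 {L1}, P3 {L2, L4} and P4 {L1, L2} (one consistent reading of the table above); the locker's rule then boils down to lane-set disjointness:

```scala
// MultiLaneLocker's concurrency rule: two programs may run at the same time
// only if their lane sets are disjoint. No ordering is promised.
def mayRunConcurrently(a: Set[String], b: Set[String]): Boolean =
  a.intersect(b).isEmpty

val p1 = Set("L1", "L3")
val p3 = Set("L2", "L4")
val p4 = Set("L1", "L2")

mayRunConcurrently(p1, p3) // true: disjoint lanes
mayRunConcurrently(p1, p4) // false: both need L1
```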

This solution is just a subset of the first solution:


When it comes to solving real world tricky concurrency problems, there is no doubt Kotlin + Spring Reactor + Arrow allows you to build asynchronous and functional solutions.

Yet it is also clear from this example that the Scala + ZIO solution is way simpler and was written faster. The Scala + ZIO solution is easier to test, easier to compose, and easier to understand, and can be quickly tweaked to generate new variations for changing requirements.

Next time I need to solve a tricky problem, I’ll reach for Scala + ZIO, because the cost of solving these problems is much lower. I recommend any readers who have concurrency challenges on the JVM to check out ZIO STM before using what you already know.

You can find the code & tests for both Scala and Kotlin solutions on my GitHub:

Thanks to John De Goes and Adam Fraser for greatly accelerating my understanding of ZIO STM and ZIO Test.

You can also find me on:

Spring to ZIO 101

How would one coming from a Spring background get their bearings fast with FP & ZIO?

The answer is below.

I onboard new team members coming from Java + Spring backgrounds to Scala + ZIO by starting with a 1-2 day training session where I present the main functional concepts they will work with. 1-2 days is enough to cover the basics and have someone at a level where they can begin to contribute to an existing codebase. 

Even though people are very receptive, understanding how that translates to a regular project is not always a straight line. Showing them how it is done on complex projects or ones where they are unfamiliar with the domain sometimes diverts focus from the main point.

One of the reasons a framework like Spring is highly successful is because it provides clear simple examples for what it has to offer.

So why not provide a simple example of a Scala + ZIO setup for a regular scenario most people are familiar with?

A regular blog post would give you a Pet CRUD. But my readers deserve the best, this will be an Employee CRUD. 🙂

Let’s make a CRUD for an Employee using Scala + ZIO and see what it looks like.

I will use the following DDD based directory structure that should be familiar enough to most.

  • com.adrianfilip.ziosample
    • domain <- all business goes here, the business’s external api is here
      • api
        • EmployeeApi    <- Could also be called EmployeeService but I prefer to use the term API
      • model
        • Employee
          • Employee     <- this is the entity model 
          • EmployeeRepository  <- this will contain the contract for the repository (and in this case also the accessor methods)
    • infrastructure  <- all non business goes here
      • environments
        • EmployeeRepositoryEnv  <- this will contain all EmployeeRepository implementations
      • persistence
        • EmployeeRepositoryInMemory  <- this is the implementation of EmployeeRepository that persists in memory
      • Controller  <- parallels its Spring counterpart
    • Application    <- main program

Let’s start with the model. The Employee will look like this:

Notice the use of the smart constructor to forbid creation of invalid state.
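A minimal sketch of such a smart constructor (the field names and error type are assumptions for illustration):

```scala
// Smart-constructor sketch: the constructor is private, so invalid
// Employees cannot be created from outside the companion object.
final case class Employee private (id: String, firstName: String, lastName: String)

object Employee {
  def make(id: String, firstName: String, lastName: String): Either[String, Employee] =
    if (id.isEmpty || firstName.isEmpty || lastName.isEmpty)
      Left("id, firstName and lastName must all be non-empty")
    else Right(new Employee(id, firstName, lastName))
}
```

Callers get an Either back and are forced to handle the invalid case at construction time rather than discovering bad state later.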

Next we have the EmployeeRepository.

Notice the use of ZIO[R, E, A] here. The short version description here is:

ZIO[R, E, A] describes a program where:
R – is the type of the environment needed to run the program (tldr: R = the dependency)
E – is the type of the failure the program can fail with
A – is the result of running the program successfully

This setup may look a bit verbose but it’s worth it.

Q: What happens in ZIO.accessM(_.get.save(employee)), for instance?
A: ZIO.accessM is used to access the provided environment. So you can read the above as: Give me the provided EmployeeRepository.Service and call its save() method.

Q: Where is EmployeeRepository provided and who provides it?
A: Any client that wants to use the save program ZIO[EmployeeRepository, PersistenceFailure, Employee] has to provide one when using it.
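The environment pattern can be approximated in plain Scala: a program that needs R is just a function from R to its result, and providing the environment is function application (the names below are hypothetical simplifications):

```scala
// Plain-Scala approximation of the R channel: a program needing an
// environment is a function from that environment to its result.
trait EmployeeRepo {
  def save(name: String): Either[String, String]
}

// ZIO.accessM(_.get.save(employee)) roughly corresponds to: "given the
// environment, call save on the service it contains".
def save(name: String): EmployeeRepo => Either[String, String] =
  repo => repo.save(name)

// The client provides the environment at the edge, like provideLayer does:
val inMemoryRepo: EmployeeRepo = new EmployeeRepo {
  def save(name: String): Either[String, String] = Right(name)
}
val result = save("Jane")(inMemoryRepo) // Right("Jane")
```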

Now that we have this, we can move on to the API. Considering that my business logic will always be the same here and only the context (environment) may change, I can use an object where I describe the business of each operation, like this:

Notice here that:

  1. EmployeeApi.create describes a program that needs an EmployeeRepository in order to run.
  2. I only describe that I need an EmployeeRepository, I don’t actually provide one via any type of injection and one is not available in the object. How does that work? I’ll come back to this later.

Next we have the Controller. Because I wanted to keep things simple the user will interact with the app via the console. In the controller operations I implement the interaction with the user.

There are 2 things to notice here:

  1. Controller.create is a program that needs both a Console and an EmployeeRepository to run, unlike EmployeeApi.create, which only needs an EmployeeRepository.
  2. When EmployeeApi.create is used, there is no mention of any EmployeeRepository. That is because it will implicitly use the EmployeeRepository provided to the enclosing Controller.create program. Pretty useful, right?

How about running this whole thing?

First I create a program that looks like this to describe how the high-level interactions with the user will go; it also acts as a dispatcher from input to each Controller operation:

You can notice here:

  1. how easy it is to create a CLI because of the compositionality provided by ZIO (this led me to build most of my utilities as CLIs now; it’s just so convenient for utilities that don’t require a complex UI, where you would need to add an HTTP server and maybe also a SPA)
  2. ApplicationEnvironment in the R position

What is that ApplicationEnvironment? That is my alias for the required environments to run this app. (See picture below)

Up to this point I have only described programs and how they compose.
In order to actually run them I need a Console and an EmployeeRepository. Console is provided by ZIO, and for EmployeeRepository I have a custom in-memory implementation, which I use to create the localApplicationEnvironment layer by composing it with other environments (like Console).
And how do I provide all this to my main program? Like in the picture below.

Notice here that:

  1. I can compose environments – see localApplicationEnvironment
  2. provideLayer is used to provide the environment to a program
  3. You can simulate Spring profiles by selecting the environment you want based on the parameters you start your app with – here, for example, if the app is started with sbt “run local”, the provided layer will be localApplicationEnvironment

And with this we have a full Employee CRUD controlled by a CLI implemented only with Scala + ZIO.

Notice that:
– you can create fully composable software with ZIO
– you don’t need dependency injection

Hope this answers some of the questions regarding the transition to FP with ZIO.

The entire project is available at https://github.com/adrianfilip/zio-crud-sample, feel free to clone the repo, run the app and play with it.

You can find me at https://twitter.com/realAdrianFilip and https://www.linkedin.com/in/adrianfilip/.

Extra mentions:

  • For more info about ZIO you can start with this Tour of ZIO from John De Goes
  • I may create a new post where I replace the CLI with some REST services. Either way, check out this http4s + zio post by Wiem Zine.

Scale Aware Architecture (Onion Arch. with a twist)

My name is Adrian Filip and I have been a software developer since 2007.  

Sometime between then and now I was working on a banking-like app using Kotlin, Spring Boot and Arrow.

Everything was going well, yet I found it difficult to express some scale aspects without either mucking up my business logic a bit or trading away some composability by leaning more on the infrastructure layer. (See my previous post Why modularity? to understand why I abhor lack of composition in designs & implementations.)

As a result I took it upon myself to improve the DDD model* by adding a layer that is all about scale concerns, keeping the business and infrastructure layers untouched by this scale corruption**. (If you want to learn more about DDD, I highly recommend going to the source: Vaughn Vernon’s books.) If you are familiar with DDD, then from what I have written so far you might have guessed that I’m in the Onion Architecture camp. (Don’t be fooled by the name: unlike the vegetable, in this case not using it will make you cry.)

What does this new Onion look like? Something like this:

Where you see the term program it means the description of a program. Remember that we want composability so we are working with descriptions of programs, which are values.

Why did I add that new layer for that application? Maybe the next picture will clarify it a bit.

(NOTE: ScaleAwareAPI, API, InfrastructureAPI and DomainRepository are traits only, not typeclasses – will update the picture soon)

I added the ScaleAware layer because I wanted:

  • to free up the business (domain and orchestration) from knowing non-business details (cleanups, parallelisation concerns, monitoring concerns). I also consider notions like what can be parallelised, or what must be performed in sequence, not to be truly “dumb piping” parts of the infrastructure.
  • a layer where I can control how inevitable infrastructure operations (like cleanup or archiving old backups) and business operations interact
  • a layer where the business-like aspects that can see into the dimension of scale can be defined – unlike the Application layer, which can only orchestrate business programs; the Business layer, where the entire universe of a business service only knows about work with certain types of entities; or the infrastructure layer, which is the one that actually knows how the world beyond the domain works

There are some basic guidelines (read as mandatory rules) associated with this model:

  • 1 Scale Aware operation = 1 Scale Aware program = 1 business use case + related scale aware aspects
  • ScaleAware programs don’t call other ScaleAware programs and are not aware of them.
  • 1 Application operation = 1 Application program = 1 business use case
  • Application programs never call other application programs and are not aware of them. Common parts are reused via business programs.
  • Business programs never fork. That concept is only present at the scale aware level.
  • Also each construct will have its own rules. For instance:
    • the Transactor construct, which has the API TX.tx(program), can only work with non-forking IOs. Everything else is a misuse
    • the Parallelism & Forking constructs must be provided the proper thread pools for their purpose …
  • All calls go through the scale aware layer and its API, regardless of what happens there.
    • Why? 
      • Control – To have a clear and complete API boundary 
      • Flexibility – To easily enhance the program when needed

(Example of an infrastructure service: a BackupService interface with a method called backup. The interface just has that operation, and the implementation handles the details of what it actually means for this app. So the scale aware concern of creating a backup can be defined at the scale aware level via the interface. The implementation can back up a NoSQL store, an RDBMS, or a file, and can do it in whatever way the infrastructure decides. This step can still be encoded in the scale aware instructions; it’s just that how it is implemented is pushed to the infrastructure layer, outside of scale aware’s clean API.)

But the actual power of the ScaleAware layer comes from the constructs that it uses. For example:
The Laning construct provides a way to sequence the execution of whatever programs you want based on a dynamic definition of the “lanes” they need open to run.
An analogy that describes it:
Imagine a highway with n lanes where each car is magic and can somehow use whatever 1 or more lanes it wants at the same time. But cars can only pass the toll booth if all the lanes they use are free.

Lanes:   1   2   3   4   5
Car 1:   x       x
Car 2:   x
Car 3:       x       x
Car 4:   x   x

The way the cars above pass the toll booth is:
– Car 1 and Car 3 reach the toll booth first because their lanes are free
– Car 2 is queued up behind Car 1
– Car 4 is queued up behind Cars 2 and 3, so until they both pass the booth it just has to wait.

The biggest increase in productivity on this project came when I switched it to FP. The next boost was defining the scale aware architecture. Using Arrow FP + ScaleAware made the cost of maintenance and developing new features drop by a lot. 
But that was then. Since then I noticed that the Scala world did not stop innovating despite the great flame wars of the 2010s***. One of the results of that innovation is a library called ZIO.  

I have been using ZIO almost exclusively for about a year now, and I am so impressed by it that I really want to see what my ScaleAware project would look like implemented in ZIO.

I think I will start the migration by comparing the implementations of one of my constructs between:

Arrow + Kotlin + Reactor + Future vs ZIO + Scala.

Place your bets!

* No DDD models were harmed in the design of the ScaleAware architecture.
**I sometimes use hyperbole. Not here, but I sometimes do.
*** Many were raised to Olympus (went to Haskell, some say they still describe how to drink nectar but never do it), some deserted (to Kotlin), I strategically retreated to Kotlin (next question please) and others started raising llamas or smth

Why modularity?

Because it gets things done.

Like most people, I get really bored when I have to do unnecessary, avoidable things. As a side effect (fp ppl will get it), I work very hard to avoid being in situations where I’m forced to do unnecessary things.

For example, I don’t like waiting in lines or being stuck in traffic, so I mostly work remotely; and I don’t like spending unnecessary time trying to determine the correctness of code or to make it modular, so I use functional programming. I have a Yin and Yang thing going on there. I should mention that I operate mostly in the JVM area, so at the moment it’s FP in Scala for me.

Some might say:

  • You don’t need FP to have correct code.
  • FP “sux” because it’s slow, hard to learn,…
  • ..

To them I answer: it depends. (Isn’t this the universal answer to any question that has room for interpretation? Except for “is JS bad”, because we all know the answer there.)

ZIO architect John De Goes claims FP is faster in the “large” while non-fp is faster in the “small”.

From my experience that claim is accurate.

What experience is that you might ask?

I’ve been developing software since 2007, and for at least the past 10 years I have designed and developed from scratch at least one medium-sized project a year. I used OOP (as it’s understood in the wild), OOP (as it’s understood in the classroom), Actors (which is basically OOP on steroids) and FP (which is FP). I mainly used Java, Kotlin and Scala for backend and JS for frontend.

Why don’t I have any Github projects up?

MY DISCOVERIES ARE MINE, I’M NOT SHARING ANYTHING, I’LL LET THEM WRITE AWFUL CODE WHILE I WRITE THIS BEAUTIFUL AWESOME ONE ALL BY MYSELF MWAHAHAHAHAA. Just joking. Had I known that later on I will want to write an article and need that for credibility I would have. Joking again.

I did not do that because I was in a looking for answers phase.

Many paradigms show you the nice parts but don’t really present the hidden costs very well. Most presentations and examples use very simple and sterile scenarios. In real life you end up following “best practices” and see that the code is not all it could be. The best practices of yesterday are avoided tomorrow. And nobody seems to notice the unnecessary effort being put in with diminishing returns, or worse, they think it’s normal. This is why rewrites sometimes happen: development paints the project into a corner with no way out, changes become more and more expensive, and sometimes shortcuts are taken that further aggravate the problem long term.

I know because on several occasions I was the guy that was brought in to do the rewrite and saw the mistakes that caused the need for rewrite.

Now, did this happen because OOP is bad, or because the programmers were bad? OOP projects are completed and work fine every day, and blaming the programmers is a cop-out (they can’t all be bad; or could it be that the tools they have make them deliver subpar software?).

I think the issue is the fragility of non-modular/non-composable software, and that it’s harder (not impossible) to achieve modularity in OOP, thus increasing the risk of failure for OOP projects.

I wrote OOP; I wrote more OOP than FP. I watched Uncle Bob’s videos (love them, btw), I read the OOP books, … With completely defined & detailed specifications, the increased fragility I mentioned above is not that obvious.

But life is not all sunshine and completely defined & detailed specifications. Priorities shift, people turn over, enhancements accumulate, long periods pass between visits to the same area of the code, and directions change.

My experience tells me modular design is better at absorbing those “shocks” than non modular design and FP is better at modular design than OOP.

To the crowd that would say “you are just not doing it right”/“go away” I answer “maybe the doing it right in OOP is too expensive to be viable”/“you go away 😛”

You can achieve modularity easier with FP. You can prevent unsound states easier in Scala using FP design principles.

FP gives you back control in a dynamic world. (I said dynamic not chaotic, if your workplace is chaotic change it and thank me later). I love dynamic environments when using fp. Every new feature becomes a new cool problem with a principled solution and you get the good feeling that the user will get something done right. It’s just another piece in a lego.

Having developed at least one medium-sized project from scratch every year for the past 10 years, plus many smaller projects, and having led several teams of different sizes, I have picked up a few “heuristics” relevant to this point:

  • The less modularity, the more complexity. “Complexity is bad” is an understatement, and complexity is much higher in non-modular code.
  • The less modularity, the more coupling. Coupling is the devil.
  • The less modularity, the higher the refactoring & evolution cost. Change is more expensive in non-modular software.
  • Productivity increases with modularity. Lower cognitive load, less context switching, local reasoning: these all add up bigtime.

The downsides mentioned above increase non linearly with the size of the project and the number of people added to the project.

In closing I will tell you what I am using now with great success so far.

Scala + ZIO

I was using Kotlin + Arrow (which is a very good FP library for Kotlin; I hope it grows and does well) prior to the release of Bifunctor by John De Goes. After that I made the switch to ZIO in my smaller personal projects, and now I use it everywhere. There are so many things done right in ZIO that it’s a blast. I have yet to unlock all it has to offer, but you can be very productive with it even if you only use the basic building blocks.

Have fun coding!