The only constant is change, or how Explicit Architecture can save the day (Part II)

Let's do a little recap of what we know so far from Part I of the series. The goal is the same: we want to increase the scalability of development in the face of ever-changing business requirements. We introduced Explicit Architecture, defined its layers, and checked examples of the code. We were still in the Application Core and had just finished the Domain Layer.

Now let's continue one layer above.

Application services

Application services represent group 3 (check Part I of the series for the definitions of the groups). Group 3 implements our workflows triggered from the outside. As defined by Herberto Graca, their typical role is to:

  • Use repositories to get entities.
  • Orchestrate the domain layer to do some logic.
  • Use repositories again for persistence (if needed).

Let’s take a look at a snippet of one of my Application Service functions, which implements the login workflow:

@impl IManageIdentity
def login(params) do
  with {:ok, credentials} <- Credentials.new(params),
       {:ok, user} <- IManageRepository.fetch_user(credentials),
       {:ok, expire_in} <- expire_in(),
       {:ok, token} <- IAdaptJwt.create(user),
       {:ok, refresh_token} <-
         RefreshToken.new(%{user: user, expire_in: expire_in, value: uuid()}),
       {:ok, _} <- IManageRepository.persist_refresh_token(refresh_token) do
    {:ok,
     %{
       token: token,
       refresh_token: RefreshToken.value(refresh_token)
     }}
  else
    err -> err
  end
end

The code is (hopefully) self-explanatory: the steps of the workflow are clearly visible. I’m using the typical Elixir approach of chaining `with` clauses over functions that return {:ok, ..} or {:error, ..} tuples.
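To make the pattern stand on its own, here is a minimal, self-contained sketch of this chaining style, using hypothetical steps rather than the article's real modules:

```elixir
defmodule ChainingDemo do
  def parse(%{"user" => user}), do: {:ok, user}
  def parse(_), do: {:error, :missing_user}

  def authorize("admin"), do: {:ok, :admin}
  def authorize(_), do: {:error, :forbidden}

  # Each step returns {:ok, _} or {:error, _}; the first {:error, _}
  # short-circuits the `with` and falls through to the else clause.
  def run(params) do
    with {:ok, user} <- parse(params),
         {:ok, role} <- authorize(user) do
      {:ok, %{user: user, role: role}}
    else
      err -> err
    end
  end
end
```

Because every step speaks the same tuple protocol, new workflow steps can be inserted into the chain without touching the error handling.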

As you might have noticed, to access opaque domain model properties, I’m using their getters.
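As a hedged sketch of what such an opaque model can look like (the real Credentials module may differ; this mirrors only the calls visible in the snippets):

```elixir
defmodule CredentialsDemo do
  @enforce_keys [:username, :password]
  defstruct [:username, :password]

  @opaque t :: %__MODULE__{username: String.t(), password: String.t()}

  def new(%{"username" => u, "password" => p}) when is_binary(u) and is_binary(p),
    do: {:ok, %__MODULE__{username: u, password: p}}

  def new(_), do: {:error, :invalid_params}

  # Getters keep callers decoupled from the struct's internal shape.
  def username(%__MODULE__{username: u}), do: u
  def password(%__MODULE__{password: p}), do: p
end
```

The `@opaque` type tells Dialyzer that code outside this module should not pattern-match on the struct's internals, which is exactly why the getters exist.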


Here is a little caveat. This architecture, and I as well, place repositories in the Application layer, not in the infrastructure outside. Because we send domain models to repositories and receive domain models back, we can observe that repositories speak the domain lingua, so keeping them inside the Application Core seems natural.

In my case I'm not using Ports and Adapters to abstract the ORM adapter (Ecto); I'm using Ecto directly in my repositories, as we can see in the example below:

@impl IManageRepository
def fetch_user(credentials) do
  query =
    from u in AppUser,
      where:
        fragment("md5(?)", ^Credentials.password(credentials)) == u.passhash and
          u.username == ^Credentials.username(credentials)

  case Repo.fetch_one(query) do
    {:ok, res} -> to_user(res)
    err -> err
  end
end

I’m using custom Repo functions for querying, like fetch_one/1. The reason originates in a practice used across the whole app: functions should return tuples when possible, for easier chaining.
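The idea behind such a wrapper can be sketched in isolation (the names here are hypothetical; Ecto's own `Repo.one/1` returns a struct or `nil`, and the wrapper turns that into a tuple):

```elixir
defmodule FetchDemo do
  # Converts a nilable lookup result into {:ok, _} / {:error, _},
  # so repository results stay chainable in `with` pipelines.
  def to_fetch_result(nil), do: {:error, :not_found}
  def to_fetch_result(record), do: {:ok, record}

  # In a real repository this would read roughly:
  #   query |> Repo.one() |> to_fetch_result()
end
```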

I’m also using a custom transaction function for the same reasons and for some more clarity.
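A hedged sketch of what a tuple-friendly transaction wrapper can look like, in the spirit of the referenced article; `repo` stands for any module exposing Ecto-style `transaction/1` and `rollback/1`, and `FakeRepo` below is only a stand-in to exercise the wrapper without a database:

```elixir
defmodule TransactDemo do
  def transact(repo, fun) do
    repo.transaction(fn ->
      case fun.() do
        {:ok, result} -> result
        # rollback aborts the transaction, which then returns {:error, reason}
        {:error, reason} -> repo.rollback(reason)
      end
    end)
  end
end

defmodule FakeRepo do
  # Mimics Ecto's transaction semantics: rollback throws, transaction catches.
  def transaction(fun) do
    try do
      {:ok, fun.()}
    catch
      {:rollback, reason} -> {:error, reason}
    end
  end

  def rollback(reason), do: throw({:rollback, reason})
end
```

The payoff is that `transact/2` returns the same {:ok, _} / {:error, _} shapes as everything else, so transactional steps slot straight into `with` chains.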

Both practices are inspired by the article Towards Maintainable Elixir: The Anatomy of a Core Module, by Saša Jurić.

Inside the repository directory, I also put my Ecto schema modules. That means I don't have a single global directory with all the schemas. Personally, I'm not against that, especially in scenarios where the app is DB-heavy (say, more than 50% of the modules are schemas). But I prefer to keep them close, so a developer doesn't need to jump from one place to another and lose clarity.

Every repository uses the program-to-behaviour technique, so a repository is actually an implementation of a predefined behaviour. This way, we can easily swap the implementation and, for example in testing scenarios, avoid DB access entirely.
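As an illustrative sketch (the names are hypothetical, not the article's real modules): a repository behaviour plus an in-memory implementation that can stand in for the real one in tests:

```elixir
defmodule RepoBehaviourDemo do
  @callback fetch_user(String.t()) :: {:ok, map()} | {:error, term()}
end

defmodule InMemoryRepo do
  @behaviour RepoBehaviourDemo

  # A test double: same contract, no database underneath.
  @impl RepoBehaviourDemo
  def fetch_user("known"), do: {:ok, %{username: "known"}}
  def fetch_user(_), do: {:error, :not_found}
end
```

Because callers depend only on the behaviour's contract, swapping `InMemoryRepo` for a DB-backed module is a configuration change, not a code change.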

Finishing this part leads us to the last group of the Application Core, group 4, which contains the agreements about access to/from the outside world. Before we dig in, let's draw a final diagram of the architecture:

As you can see, all four groups are represented in the diagram, as well as the behaviours, i.e. the ports for access to the outside.

We have ports on the right side for all infrastructural needs, like managing JWTs and providing system data such as dates. On the left side we have a port for our API controllers, which contains the commands for triggering the workflows. Last but not least, we also have a port for the repository, for the reasons mentioned before.

For example, here is the code for the behaviour IAdaptSystem. I use a naming convention like IProvide.., IManage.., IAdapt.. so I can recognize them easily.

defmodule Propy.Identity.Core.IAdaptSystem do
  use Knigge, otp_app: :propy

  @callback utc_now() :: DateTime.t()
  @callback uuid() :: String.t()
end

I'm using the Knigge library to achieve program-to-behaviour as cleanly as possible. It also allows runtime-configurable adapters. My code can then use the behaviour module directly:

{:ok, count_before} <- active_ad_count(IAdaptSystem.previous_week()),

In the configuration, I have an entry telling the system which adapter to use, i.e. which adapter implements the behaviour:

config :propy,
  Propy.Identity.Core.IAdaptSystem,
  Propy.Identity.Infra.SystemAdapter

So when calling IAdaptSystem.previous_week, the call is delegated accordingly. I don’t want to go into too many details about this library; you can read more about it in How we deal with behaviours and boilerplate.

Voilà, we finished the Application Core!

Infrastructure, UI and adapters

What's left is the outside world, the world of adapters and controllers. Here is the example adapter for the IAdaptSystem behaviour:

defmodule Propy.Identity.Infra.SystemAdapter do
  alias Propy.Statistics.Core.IAdaptSystem

  @behaviour IAdaptSystem

  @impl IAdaptSystem
  def utc_now_as_date() do
    DateTime.utc_now() |> DateTime.to_date()
  end

  @impl IAdaptSystem
  def previous_week() do
    Date.add(utc_now_as_date(), -7)
  end
end

Nothing special here. Now that we are close to the end, I wanted to show you how the final directory structure looks. You can recognize all the parts that we have discussed:


Last but not least, I need to say something about testing too. In my code base, you can also find a simple behaviour test. The purpose is to show how program-to-behaviour can also help us when testing.

Let’s check the code together (with some implementation details hidden):

defp setup_impl(_) do
  # swap the real adapters for test doubles (simplified)
  Application.put_env(:propy, IAdaptJwt, TestJwtAdapter)
  Application.put_env(:propy, IManageRepository, TestRepository)
  # we should also add 'on_exit' to put the original values back
  :ok
end

describe "Identity Context" do
  # arrange
  setup [:setup_credentials, :setup_connection, :setup_impl]

  test "...", %{conn: conn} do
    # act
    conn = post(conn, @login_path, @options)

    # assert
    refresh_token = get_response_cookie(conn, @response_cookie_name)
    %{"jwt" => jwt} = get_json_response_body(conn)

    assert conn.state == :sent
    assert conn.status == 200
    assert String.length(refresh_token) > 0
    assert jwt == "a.b.c"
  end
end

As you can see, in the arrange phase I’m replacing the real adapters for JWT and the repository with test ones. For example, the test adapter for managing JWT always returns a JWT with the value "a.b.c", which is then asserted. Because of the repository switch, we are also not touching the DB at all.

Then I’m calling the HTTP server directly (act phase) and asserting the response (assert phase). Simple and clear.


It’s been a long ride; hopefully I’ve provided you with some value. I like the described architecture a lot. It seems a bit verbose, but the level of clarity is high. And that’s important, especially if Elixir is not your 8-hours-per-day thing and you return to it from time to time and/or from other (non-functional) languages.

You can find things fast, change them fast, and test them fast, i.e. you are increasing requirements scalability, which was the goal set for this series of articles.

As usual, there is so much more to tell. I’ve compressed a lot of information, and I’ve also avoided tackling many other possibilities this architecture offers. Who knows, maybe next time we'll visit them. Happy coding!


