
Functional Programming Principles


Table of contents

  1. Introduction
  2. Why are pure functions so important?
  3. Can we always write pure functions?
  4. Principles of functional programming in TypeScript
  5. First-Class and Higher-Order functions
    1. Composition and pipelines
    2. Arrow functions and closures
  6. Immutability
    1. Const, Spread operator, and Array methods
  7. Referential Transparency
    1. How to achieve referential transparency in TypeScript
  8. An interesting library

Introduction

Functional programming is a programming paradigm that has gained a lot of popularity in recent years due to the benefits it can bring compared to imperative paradigms. Its roots lie in a rather unusual area: mathematics, and more specifically, in lambda calculus.

Lambda calculus, developed by mathematician Alonzo Church in the 1930s, is a computational model based on mathematical functions. Essentially, lambda calculus provides us with a way to express and manipulate functions in an abstract manner.

These functions, known as lambda expressions, are the basis of functional programming. Due to the rules imposed by lambda calculus on how to create and manipulate these expressions, lambda expressions have a number of very interesting computational characteristics.

On the other hand, the Turing Machine, conceived by Alan Turing in the 1930s, is another fundamental computational model, considered today as the cornerstone of classical computing. The Turing Machine consists of an infinite tape divided into cells, a read/write head that moves along the tape, and a set of rules to determine its behavior.

Unlike lambda calculus, which focuses on functions and mathematical abstractions, the Turing Machine is based on explicit instructions and algorithms. While a lambda expression describes a computation declaratively, as a mathematical expression to be evaluated, a Turing machine must have every one of its actions specified explicitly, usually through a finite state automaton.

The state independence found in lambda expressions allows us to create pure expressions that do not depend on any external factors to be resolved. In fact, this characteristic, which by definition allows us to create pure functions, is the cornerstone of modern functional programming.

Why are pure functions so important?

I've started this article by mentioning that functional programming has gained a lot of popularity in recent years. I would venture to say that the concept of a pure function is responsible for this.

A pure function, as in mathematics, is a function that, given the same inputs, always produces the same outputs and has no observable side effects. To guarantee this, the function cannot depend on any external state that could alter the results it produces.

Working with pure functions simplifies and cleans up the code for several reasons:

  1. You can rely on a pure function always producing the same result, which facilitates debugging and code maintenance.
  2. Being deterministic and not depending on external states, pure functions are easier to test, as you can pass different sets of test data and expect consistent results.
  3. Pure functions can be easily composed. This allows for building more complex functions by combining and chaining smaller functions. This promotes code reuse and the creation of independent components that can be combined to form larger systems.
  4. Pure functions do not depend on shared states, making them ideal for parallel execution. With no hidden dependencies, it's possible to execute multiple instances of a pure function simultaneously without worrying about conflicts or race conditions.

All these advantages are obtained by eliminating state from the equation when programming. Generally, striving to write pure functions and not depending on global states will significantly improve our code.
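
To make the difference concrete, here is a minimal sketch (with names invented purely for illustration) contrasting a pure function with an impure one that depends on external state:

// Impure: the result depends on (and mutates) state that lives outside the function.
let counter = 0;

function nextId(prefix: string): string {
  counter += 1; // hidden side effect: every call changes the external counter
  return `${prefix}-${counter}`;
}

// Pure: the same inputs always produce the same output, and nothing else changes.
function buildId(prefix: string, n: number): string {
  return `${prefix}-${n}`;
}

console.log(buildId("user", 1)); // always "user-1"
console.log(nextId("user"));     // "user-1", then "user-2", "user-3"... on successive calls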

Not writing pure functions can not only make us lose all the aforementioned advantages but also lead to unwanted consequences, such as complicating code understanding, producing side effects, or making code maintenance difficult.

Although it's a concept from functional programming, writing pure functions should be the rule whenever possible, regardless of the paradigm you work in.

Can we always write pure functions?

So far, it might seem that functional programming and pure functions are the ideal way to model the world around us, as simply removing state from the equation can turn our code into a work of art instead of the occasional disaster it can sometimes become.

However, not everything is as simple and straightforward as it may seem. Not to be discouraging, but it's practically impossible to create a program that is 100% pure. Mathematical functions are abstract expressions, used by highly qualified individuals such as scientists, mathematicians, physicists...

However, software is used by everyone; our application that calculates how many calories you should ingest per day doesn't know that Juan will input a string instead of a number. Similarly, it also doesn't know that in 2 weeks the production database will crash due to the sudden high workload caused by the arrival of summer.

These are exceptional situations, but if our function depends on external factors to function correctly, such as a database or user input, it automatically becomes an impure function.

Now, just because we can't write functions that are 100% pure doesn't mean we shouldn't strive to write functions that are as pure as possible. We should always try to write pure functions to keep our code as clean as possible.

I don't know how obvious this might be, but you don't need to use a functional programming language like Haskell or OCaml to write pure functions. It doesn't matter if you're using Java, Python, or JavaScript; if you have the opportunity to write a pure function, just do it.

The only difference between functional and non-functional languages is that the former impose a series of restrictions to prevent you from using mechanisms that may result in impurities.

You don't need to program in Haskell to write pure functions. Your functions may be purer in Haskell due to the restrictions imposed by the language itself. However, these restrictions arise from a set of functional principles that don't necessarily have to be tied to a specific language.

Java, Python, or JavaScript may not be functional languages, but as I mentioned, there is already a set of principles for writing pure functions that is not tied to any programming language. In a moment, we'll discuss them, and I'll try to convince you to apply them in your favorite language, even if it's a very object-oriented one like Java (even Java, in its latest versions, has been adding many features from the functional world).

Principles of Functional Programming in TypeScript

TypeScript will be the language used for the examples of functional code, both because its syntax is very easy to understand and because it's one of the most widely used languages in recent years.

Moreover, JavaScript has many features that originate in the functional world, which makes it, among currently popular languages, an ideal one for understanding and applying functional principles.

First-Class and Higher-Order Functions

While it was mentioned earlier that functional principles can be applied in any language regardless of its main paradigm, the truth is that not all principles can be applied in all languages. This is the only principle covered here that requires the language to have certain special characteristics.

A programming language is said to have first-class functions, or to treat functions as first-class citizens, when functions are treated like any other value. This means that a function can be referenced by its identifier and passed as an argument to other functions, returned from another function, or assigned to a variable.

On the other hand, higher-order functions are those that fulfill one of these two properties:

  1. Accept one or more functions as arguments.
  2. Return a function as a result.

// Functions have types, just like any other value.
type SumF = (a: number, b: number) => number;

// Higher-order function: it receives another function as an argument.
function apply(f: SumF, x: number, y: number): number {
  return f(x, y);
}

function sum(a: number, b: number): number {
  return a + b;
}

// First-class functions: a function can be assigned to a variable
// and passed around like any other value, e.g. apply(f, 2, 3).
const f = sum;

Allowing these kinds of behaviors for functions brings numerous benefits when writing code.

Firstly, this feature increases the level of abstraction in our code, as it allows us to encapsulate common patterns or behaviors in small functions, which can then be reused in other parts of our code. Therefore, higher-order functions also aid in code reusability.

function map<T, U>(arr: T[], func: (a: T) => U): U[] {
  const mappedArray: U[] = [];

  for (const item of arr) {
    const mappedValue = func(item);
    mappedArray.push(mappedValue);
  }

  return mappedArray;
}

const numbers = [1, 2, 3, 4, 5];
const doubledNumbers = map(numbers, (n) => n * 2);

The most classic example of this is the map() function. This function allows us to create a new array from another by applying a function to every element of the original array. The beauty of this is that map() behaves the same way regardless of the elements in the list; it always does the same three things:

  1. Iterate over the elements of the initial list.
  2. Apply the transformation to the current element.
  3. Place the new element in the new list.

With higher-order functions, step number 2 can be completely abstracted, so thanks to this and the use of generics, we can create a reusable function with any type of list.
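
To see how far this abstraction goes, the very same skeleton can be reused with a different "step 2". Here is a sketch of a filter() built the same way (the names are chosen just for illustration):

// Same pattern as map(): iterate, apply the received function, build a new array.
function filter<T>(arr: T[], predicate: (a: T) => boolean): T[] {
  const filteredArray: T[] = [];

  for (const item of arr) {
    if (predicate(item)) {
      filteredArray.push(item);
    }
  }

  return filteredArray;
}

const evenNumbers = filter([1, 2, 3, 4, 5], (n) => n % 2 === 0);
// [2, 4]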

Composition and Pipelines

On the other hand, the use of higher-order functions greatly favors what many people describe as the antonym of inheritance: composition.

Function composition is a technique that involves combining two or more functions to create a new function. Because functions can be used as input or output of other functions, implementing composition patterns is straightforward.

The idea behind composition is to create small functions that have a single responsibility, so that larger functions can be easily built from them. This way, they are easily replaceable if a bug is introduced in any of them.

const double = (x: number): number => x * 2;
const increment = (x: number): number => x + 1;
const exp = (x: number): number => x * x;

function calc(x: number): number {
  const y = double(x);
  const z = increment(y);
  const w = exp(z);

  return w;
}

The calc() function, in this case, would be a function composed of the double(), increment(), and exp() functions. What the example illustrates can be done in any programming language, regardless of whether it has higher-order functions or not. However, by having such functions in TypeScript, we can take composition to a new level.
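
For instance, nothing stops us from writing a small helper that composes unary functions for us. This is only a sketch, and the compose() name is an assumption for illustration, not a built-in:

// compose(f, g) returns a new function equivalent to x => f(g(x)).
const compose =
  <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (x: A): C =>
    f(g(x));

const double = (x: number): number => x * 2;
const increment = (x: number): number => x + 1;
const exp = (x: number): number => x * x;

// Equivalent to the calc() function above: exp(increment(double(x))).
const calcComposed = compose(exp, compose(increment, double));

console.log(calcComposed(2)); // 25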

In functional languages, it's very common to have the pipe operator or the concept of a pipeline. As detailed in the immutability section below, functional code revolves around creating new values from old values. A pipeline is nothing more than a sequence of operations that we want to apply to an initial value and to the output of the functions in the pipeline.

val = 4 |> calc() |> increment() |> exp()

In Elixir, for example, you can do something like illustrated in the example. The |> operator is what's known as the pipe, and it allows us to create pipelines. The operation works as follows: the number 4 is passed as an argument to the calc() function; this function is executed and passes its result to the increment() function, which, in turn, is executed and passes its value to the exp() function. Once the entire pipeline finishes executing, the resulting value is stored in val.

This approach allows for clearer visibility of the transformations that a value undergoes step by step, thus enhancing code readability.

In the case of JavaScript, we don't have this operator by default, although many libraries implement it and there is a stage 2 proposal to add it to the language in the future. We can also implement a simple version ourselves.

type Applier<T> = (arg: any) => T;

export function pipe<T, R>(val: T, funcs: Applier<T>[]): R {
  let result = val;
  for (const f of funcs) {
    result = f(result);
  }

  return result as unknown as R;
}

const double = (x: number) => x * 2;
const increment = (x: number) => x + 1;
const exp = (x: number) => x * x;

const pipelineResult = pipe(1, [
  double,
  increment,
  exp,
  double,
  increment,
  increment,
  exp,
  double,
]);

console.log({ pipeline: pipelineResult });
// { pipeline: 800 }

This would be an example of how to implement this operator. As you can see, it's very straightforward to see what operations are being applied with just a quick glance. The typical example with numbers is very simple, but we can extend this to functions that do more complex things, as long as the data types match.

const validateBody = (body: Request<unknown>): UserDto => {...}
const mapBodyToUser = (dto: UserDto): User => {...}
const saveUserOnDatabase = (user: User): User => {...}
const sendConfirmationEmailToUser = (user: User): User => {...}
const buildRandomPassToReturn = (user: User): Response<string> => {...}

const result = pipe(body, [
  validateBody,
  mapBodyToUser,
  saveUserOnDatabase,
  sendConfirmationEmailToUser,
  buildRandomPassToReturn,
]);

In this example, it's evident how the controller of a REST API endpoint can be simplified into a simple pipeline thanks to function composition.

Arrow Functions and Closures

To streamline working with functions, most modern languages implement what's commonly known as lambda expressions. In the case of JavaScript, these types of functions are called arrow functions.

Arrow functions are functions that are declared in a more concise way to make it less verbose to work with higher-order functions. Typically, arrow functions are used to express simple calculations that don't require too many lines of code. However, nothing prevents you from using arrow functions to organize your functions if you prefer.

// Concise body: the expression is returned implicitly.
const add = (a: number, b: number): number => a + b;

// Block body: braces and an explicit return, just like a regular function.
const addVerbose = (a: number, b: number): number => {
  return a + b;
};

The syntactic sugar they provide lets us drop the function keyword in every case and, when the body is a single expression, also the braces and the return keyword used to return the computed value.

On the other hand, JavaScript also allows the creation of closures to achieve something similar to the visibility system of classes.

A closure is, in essence, a function bundled together with the variables of the scope in which it was created. And since functions are values, nothing prevents us from declaring variables of type function inside another function, just as we declare variables of type number or string.

The beauty of closures is that, just as the variables declared inside a function cannot be accessed from the outside, neither can its inner functions, because, remember, functions are treated like any other variable.

This pattern is interesting when part of the function's code can be extracted to another function, either for readability or because it can be reused in more parts of the function's body, but that extracted function will not be reused by other functions.

If we create the functions within the function itself, that is, if we create a closure, the function will only be available in the context of that function. If you've ever worked with React, it's 100% certain that you've used closures since the style rules for creating custom hooks require you to use closures.

type Operation = "+" | "-" | "*" | "/";

const calculator = (a: number, b: number, operation: Operation): number => {
  // These helpers only exist inside calculator(): they are not visible
  // to the code that consumes the function.
  const add = (a: number, b: number) => a + b;
  const sub = (a: number, b: number) => a - b;
  const mul = (a: number, b: number) => a * b;
  const div = (a: number, b: number) => a / b;

  if (operation === "+") return add(a, b);
  if (operation === "-") return sub(a, b);
  if (operation === "*") return mul(a, b);
  return div(a, b);
};

In this example, we are creating a calculator. The calculator() function should be responsible for performing the operation corresponding to the numbers passed as parameters; however, the implementation details of how the operations are performed should not be available to the user.

By creating a closure, we achieve the same thing as creating a class with private methods: encapsulation and abstraction of information that should not be relevant to the consumer of the function.

The advantage of using closures instead of classes here is that creating a closure is typically lighter than defining and instantiating a class, and it's also less verbose, since the function can be called like any other. With a class, by contrast, we would have to declare it and then instantiate it.
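
As a small complementary sketch (with invented names), here is a closure holding configuration and a helper that remain completely hidden from the consumer:

// basePrice and applyDiscount only exist inside the closure: the consumer
// can call the returned function but cannot touch its internals.
const makePriceCalculator = (basePrice: number) => {
  const applyDiscount = (units: number): number =>
    units >= 10 ? basePrice * units - 50 : basePrice * units;

  return (units: number): number => applyDiscount(units);
};

const priceFor = makePriceCalculator(20);
console.log(priceFor(5));  // 100
console.log(priceFor(10)); // 150 (bulk discount applied)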

Immutability

Immutability is perhaps the most well-known concept in functional programming, as even React promotes and advises its use in the first guided project in its documentation.

Firstly, immutability refers to the property of data that, once created, cannot be changed or modified. Instead, the functional paradigm suggests that if we want to make changes to the data, we should use new data structures that reflect these changes, leaving the original data unchanged.

Following this approach brings several noticeable benefits when writing code. The main benefits are probably the improvement in the predictability of our code as well as the improvement in tracking changes.

It's important to understand that hiding changes is very easy when our code is based on mutations. If we have a class where a property is constantly mutated by 6 different methods, it's very difficult to tell at a glance what value that property holds, and even harder to tell which method made a given change.

The big downside of this is that if a bug is introduced in one of those methods and the attribute starts taking strange values, in the worst case we'll have to inspect all 6 methods to find the one causing the undesired behavior.

In general, writing code based on mutations makes it much more complex in the long run to see which part of the code is making a change, and therefore, implementing new functionalities or updating existing ones becomes increasingly complex.

In contrast, writing immutable code means that data can no longer change unexpectedly. You know that if you pass a value to a function, it cannot be modified by reference within the function, and it also cannot mutate values outside of it, so any bug that may arise will be confined to the context of that function.

However, personally, the biggest advantage of writing immutable code is that the code becomes much flatter and easier to read. By constantly generating new "states" of the data, we ensure that our function is a sequence of small transformations, which gradually build up the data we want to return. It's very easy to see and understand how that data is evolving step by step until it's returned.

To better understand this, let's compare a program made with mutable code and immutable code. Imagine that we want to implement a program in TypeScript to manage academic records. The data structure will be defined by the following types.

export type Subject = {
  subjectName: string;
  grades: number[];
};

export type StudentRecord = {
  studentName: string;
  dni: string;
  coursedSubjects: Subject[];
};

let students: StudentRecord[] = [
  {
    studentName: "Carlos",
    dni: "12345678K",
    coursedSubjects: [
      {
        subjectName: "Programming",
        grades: [],
      },
      {
        subjectName: "Maths",
        grades: [],
      },
    ],
  },
];

The first functionality we will implement is adding exam grades to the different students. To do this, we will write a function that, given the student's ID (DNI), the subject name, and an exam grade, adds the grade to the corresponding subject of the corresponding student.

function addGradeImp(dni: string, subjectName: string, grade: number) {
  for (let i = 0; i < students.length; i++) {
    if (dni === students[i].dni) {
      for (let j = 0; j < students[i].coursedSubjects.length; j++) {
        if (students[i].coursedSubjects[j].subjectName === subjectName) {
          students[i].coursedSubjects[j].grades.push(grade);
        }
      }
    }
  }
}

The code shown does exactly what was described in the previous paragraph, following an approach based on mutations. At first glance, we already encounter a problem with this code, which is that it has a level of indentation of 4. This is too much indentation considering that the task performed by this function is relatively simple.

Furthermore, understanding it is also somewhat complex. In general, working with indices to access specific positions in a data structure is not very readable, since at first glance we only know that we are operating on positions i and j of the arrays.

Lastly, and most importantly, we are mutating the list of records by reference. The example tries to be as simple and illustrative as possible, but if we had more complex code in which we had to check several conditions and make different modifications based on these, our function would end up being a disaster full of conditional statements with assignments that wouldn't indicate the true purpose of these assignments.

In contrast to the mutable and more imperative approach, we have the immutable and more declarative approach.

function addGradeFun(
  dni: string,
  subjectName: string,
  grade: number,
  studentsRecord: StudentRecord[]
): StudentRecord[] {
  const studentIdx = studentsRecord.findIndex((student) => student.dni === dni);

  if (studentIdx === -1) {
    return studentsRecord;
  }

  const oldStudent = { ...studentsRecord[studentIdx] };

  const { coursedSubjects: oldCoursedSubjects } = oldStudent;
  const subjectIdx = oldCoursedSubjects.findIndex(
    (subject) => subject.subjectName === subjectName
  );

  if (subjectIdx === -1) {
    return studentsRecord;
  }

  const oldSubject = { ...oldCoursedSubjects[subjectIdx] };

  const { grades: oldGrades } = oldSubject;
  const newSubject = { ...oldSubject, grades: [...oldGrades, grade] };

  const newCoursedSubjects = oldCoursedSubjects.map((subject, idx) =>
    idx === subjectIdx ? newSubject : subject
  );

  const newStudent = {
    ...oldStudent,
    coursedSubjects: newCoursedSubjects,
  };

  const newStudentsRecord = studentsRecord.map((student, idx) =>
    idx === studentIdx ? newStudent : student
  );

  return newStudentsRecord;
}

The change is quite noticeable at first glance, especially considering that the code does exactly the same thing. Not only has the level of indentation been reduced to 1, but also what happens on each line only affects that line.

Let me explain: in the mutable code above, the instruction that inserted the grade into the corresponding subject was intentionally causing a side effect on the data structure that stores the records. Although intentional, that instruction was modifying a data structure we only hold a reference to.

However, in the case of the function that works with immutability, whenever we make a change to a data structure, or rather, whenever we create a new change from a data structure, we are containing that change in a new variable.

So, if a bug occurred when implementing the logic of the function, that bug would be "contained" locally in the new variable we created, unlike the function that works with mutability, which would directly introduce the change in the data structure, potentially making it very difficult to determine which instruction made that change.

Another key benefit is that by creating new variables for the changes, we are being more self-explanatory about what is happening, at least as long as we select good names for the variables.

Having seen the key benefits of one function over the other, let's examine the code and explain how we should think in order to build functions in this way.

The fundamental trick is to see our code as a series of small transformations of the data: we start from some data and derive new data from it, step by step, until we reach the desired goal.

In the case of the function to add grades to a student's record, I know that I am going to receive as a parameter a list of records for all students, and I also know that I have to return a new list of records with the grades of a student updated.

Therefore, the fundamental transformation I have to make is to transform the old list of records into a new list of records with the grades updated.

To achieve this, I also know that I have to update the grades of a specific student, so I have to transform the old student object into a new student object with that grade.

And finally, in order to carry out this transformation, I have to transform the old list of grades of the student into a new list of grades, with the updated grade.

Const, Spread Operator, and Array Methods

TypeScript, or rather JavaScript, is a language that over the years has absorbed many ideas from the functional world, and personally, I believe it has been a great success.

The first of these is standardizing variable declaration as constants, using the const keyword. Declaring our variables with const instead of let helps us write immutable code, as declaring a variable with const does not allow its reassignment.

Therefore, it is ideal to use const whenever possible, to force ourselves to instantiate new objects instead of modifying existing ones. However, some care must be taken, as const only prohibits reassignment, but it is still possible to mutate values within an array or an object using indices or specific methods. Thus, the programmer has the responsibility to use the correct tools in these cases.
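
A quick sketch of that caveat: const prevents reassignment, but not mutation of the value the variable points to.

const grades = [5, 7];

// grades = [5, 7, 9];  // Error: a const binding cannot be reassigned...
grades.push(9);         // ...but mutating the array it points to is still allowed.

// The immutable alternative: build a new array instead (more on the spread operator next).
const newGrades = [...grades, 10];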

In the cases we just mentioned, the best way to update an array or an object is to use the spread operator, better known by its three-dot notation (...).

This operator is somewhat complex, and it would probably be a good idea for you to research it to understand it more deeply. But for our case, we can say that this operator is used to "create copies" of arrays and objects.

According to the official documentation, the spread operator expands an iterable where zero or more elements are expected. Translated into human terms, what the spread operator does is copy all the data contained in an iterable and place it in the iterable that contains the spread operator.

const oldStudent = { ...studentsRecord[studentIdx] };

const newSubject = { ...oldSubject, grades: [...oldGrades, grade] };

For example, in this code snippet extracted from the example, the first line takes all the properties of the student and puts them into a new object called oldStudent. In the second line, again, we create a new object with all the properties of the oldSubject object, except for grades, which, because it appears after the spread, overrides that property with the new value we assign. In turn, the grades array itself is a new array built from the values in oldGrades plus the new grade.

If you've never heard of the spread operator, you should take some time to understand how it works, as it is one of the most powerful tools that JavaScript offers. I'd like to talk more about this operator, but this post might become too long, so I'll leave that part to you.
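
One caveat worth keeping in mind while you research it: the spread operator creates a shallow copy, so nested objects are still shared by reference and have to be spread at every level you want to change, which is exactly what addGradeFun() does. A small sketch:

const original = { name: "Carlos", subject: { grades: [5] } };

// Shallow copy: top-level properties are copied, but `subject` is still
// the same object, shared by reference with `original`.
const shallowCopy = { ...original };
shallowCopy.subject.grades.push(9); // also visible through original.subject.grades!

// To update a nested level immutably, spread every level you touch.
const properCopy = {
  ...original,
  subject: { ...original.subject, grades: [...original.subject.grades, 9] },
};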

Finally, an indispensable tool for working with immutable code is knowing and using the methods that JavaScript offers for arrays. Not all of them, though: since ES5 and ES6, new array methods have followed a clear trend, which is to return a new copy of the array instead of modifying the original in place.

You need to be particularly careful with methods like .push(), .pop(), or .sort(), as they mutate the array by reference without returning a new one. The language keeps adding non-mutating counterparts (such as .toSorted()) precisely because immutability has proven to bring more advantages than disadvantages.

On the other hand, the array methods you should know are those that do return a copy of the array in question. Specifically, the most useful and well-known within the functional world are .map(), .filter(), and .reduce().

These methods are known as applicative methods because they apply a function to all the elements of the array; a short example of the three in action follows the list below.

  • .map() transforms every element of an array using the function passed to it, producing a new array.
  • .filter() creates a new array containing only the elements for which the function passed to it returns true.
  • .reduce() "reduces" all the elements of an array to a single value by means of a reducing function.
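
Here is the promised sketch of the three in action, with made-up data:

const grades = [4, 7, 9, 5.5, 10];

// map: transform every element into a new one.
const rounded = grades.map((g) => Math.round(g)); // [4, 7, 9, 6, 10]

// filter: keep only the elements that satisfy a condition.
const passing = rounded.filter((g) => g >= 5); // [7, 9, 6, 10]

// reduce: collapse all the elements into a single value.
const average = passing.reduce((sum, g) => sum + g, 0) / passing.length; // 8

console.log({ rounded, passing, average });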

In our use case, we use .map() to create a new array with the updated subjects: we return each subject unchanged, except when we reach the index of the subject we modified, in which case we return the updated one.

In general, it is a very good idea to use these applicative array functions: they not only favor immutability but, because each one has a single purpose, they also make our code much more self-explanatory. If we see a .map(), we know we want to transform an array into another array; if we see a .filter(), we know we want a new array of filtered values; if we see a .find(), we know we want to locate a certain element in the array...

In contrast, if we use a for loop or a while loop, at first glance we don't know if we want to transform an array, filter, or find an element, not to mention that these types of statements imply the mutation of control variables.

Finally, it should be added that not all of these methods return a new array. .findIndex(), for example, returns the index of the value we are looking for (or -1 if it doesn't find it), but it is still an applicative method. The rule is not that they must return a new array, only that they apply a function to the array without mutating the original.

In general, it will always be a better idea to use one of these methods rather than a loop for its expressiveness and because they operate on the elements of the array, eliminating the possibility of producing infinite loops.

Referential Transparency

Referential transparency is a principle that often goes unnoticed because people don't give it the importance it really deserves.

Referential transparency refers to the property of a function to always produce the same result when given the same set of arguments. In other words, a function is referentially transparent if its result depends solely on its inputs and has no observable side effects.

This, in principle, sounds obvious. If we want to build pure functions, this is exactly the goal we should aim for. However, languages like TypeScript don't make this task entirely straightforward.

function sum(a: number, b: number): number {
  return a + b;
}

function div(a: number, b: number): number {
  return a / b;
}

At first glance, these two functions might seem to satisfy referential transparency: both take numbers as arguments and declare that they return a number. It seems like there's no room for error.

However, only the first one truly lives up to its signature. The div() function produces a meaningful number for almost any values of a and b, but there's an exception.

If the value of b is 0, JavaScript doesn't complain; the division silently evaluates to Infinity (or NaN for 0/0), a result the caller almost certainly doesn't expect and that the function header says nothing about. In practice, this case is often guarded against explicitly by throwing an error:

function div(a: number, b: number): number {
  if (b === 0) throw new Error("Division by 0 is undetermined.");
  return a / b;
}

The problem with all of this is that, even after handling the case, nothing in the function header tells us whether it can throw an error or not. The only way to find out is to inspect the function body and see whether it can actually produce errors.

This adds more complexity when developing real systems that are not as trivial as this function, as we cannot treat functions as black boxes, as we might encounter unexpected surprises.

Languages like Java, for example, do indicate in function headers whether those functions throw exceptions or not.

public double divide(double dividend, double divisor) throws DivisionByZeroException {
    if (divisor == 0) {
        throw new DivisionByZeroException("Division by zero is not allowed.");
    }
    return dividend / divisor;
}

How to achieve referential transparency in TypeScript

If we want our code to be more secure, it would be a good idea to try to implement referential transparency in our functions in some way.

First of all, it is worth differentiating between 2 types of errors, expected and unexpected.

Expected errors, more than errors, are possible cases that the programmer knows could occur when entering a certain input. For example, in the case of division, the programmer already knows that if the divisor is 0, that division cannot be performed.

Another very common example would be creating a function to search for users by their identifier in a database. It is within the realm of possibility that when invoking the function with an identifier that is not associated with any user, a user cannot be returned because it does not exist.

On the other hand, unexpected errors are errors that occur due to external factors, over which the programmer has no control. For example, if one of our services depends on an external API, and that API is down, we as programmers cannot do anything to remedy it, we can only react to it.

The question here is, should we really throw an exception in cases where there are expected errors, knowing that these cases are entirely feasible? That is, an exception represents an exceptional situation, a rare event that should not occur if our program is well-built; it should be something that is beyond the logic of our application.

That a user is not present in our database or the fact that we cannot divide by 0 are completely valid cases that should not be represented by errors, but should be handled in some way.

To handle this (at least for expected errors), there are different solutions with different levels of complexity. The simplest action would be to simply return a data type more suitable to the nature of the function.

function findUserById(userId: number): User | undefined {
	...
}

Since it's possible that no user exists for a given identifier, a good idea would be for the return type to be a union of User and undefined. This way, when using the function's result, TypeScript requires us to determine which of the two types we are dealing with. With a simple if statement, we can check whether the returned value is actually an object of type User or undefined.

In this way, if the value were undefined, we could directly return an error response with the 404 code in our API, or a response with the user and a 200.
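
A sketch of how that narrowing plays out in practice, with an invented User shape and an in-memory list standing in for the database:

type User = { id: number; name: string };

// Made-up stand-in for a real data source.
const users: User[] = [{ id: 1, name: "Juan" }];

function findUserById(userId: number): User | undefined {
  return users.find((user) => user.id === userId);
}

const maybeUser = findUserById(2);

if (maybeUser === undefined) {
  // TypeScript knows maybeUser is undefined here: return a 404, log, etc.
  console.log("User not found");
} else {
  // ...and here it knows maybeUser is a User.
  console.log(`Found ${maybeUser.name}`);
}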

In truly functional languages, and even in newer languages like Rust, this is handled differently. In these kinds of languages, a software pattern is implemented to handle these situations, known as monads.

Monads are a very complex topic that I've discussed in another post, but as a summary, monads are data structures used to encapsulate calculations that can produce these kinds of side effects, and they allow working with them at a high level through function composition.

The most famous ones are the Maybe monad, which represents a value that may or may not be present, and the closely related Either monad, which represents a value that can be one of two things.

type Maybe<T> = T | undefined;

This could serve as a very simplified representation of a monad in TypeScript. Although it's more advisable to use a library based on real monads, for our example, using this type of data would provide greater clarity on what a function can return.

function findUserById(userId: number): Maybe<User> {
	...
}

In the end, there isn't much difference between using undefined or Maybe in this case but, I repeat, monads go much further than this. A real monad implementation would let us pass higher-order functions that work with the encapsulated value and that only take effect if the value is not undefined.

const userMonad = findUserById(1);

userMonad
  .bind((user) => loadUserPosts(user))
  .bind((userWithPosts) => removePassword(userWithPosts))
  .bind((finalUser) => buildResponse(finalUser))
  .match({
    onSuccess: (userResponse) => response.send({ ...userResponse }),
    onFailure: (error) => response.send({ ...buildErrorResponse(error) }),
  });

As pseudocode, this shows how a monad would apply, one after another, the higher-order functions passed to it to the user residing inside it. If the monad ultimately holds a user, the match function executes the onSuccess callback; otherwise, the onFailure callback.

As I said, monads are too extensive a topic for this post, but it's good to have an intuitive idea of the functional mechanism that exists to handle this scenario.
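
For intuition only, here is a deliberately simplified sketch of what such a Maybe could look like; the bind() and match() names simply mirror the pseudocode above (a real library's API will differ, and this version carries no error value):

// A minimal Maybe: it holds either a value or nothing, and bind() only
// applies the given function when a value is present.
class Maybe<T> {
  private constructor(private readonly value: T | undefined) {}

  static of<T>(value: T | undefined): Maybe<T> {
    return new Maybe(value);
  }

  bind<U>(fn: (value: T) => U | undefined): Maybe<U> {
    return this.value === undefined
      ? new Maybe<U>(undefined)
      : new Maybe<U>(fn(this.value));
  }

  match<R>(handlers: { onSuccess: (value: T) => R; onFailure: () => R }): R {
    return this.value === undefined
      ? handlers.onFailure()
      : handlers.onSuccess(this.value);
  }
}

// Usage with an invented helper:
const findUserName = (id: number): string | undefined =>
  id === 1 ? "Carlos" : undefined;

Maybe.of(findUserName(1))
  .bind((name) => name.toUpperCase())
  .match({
    onSuccess: (name) => console.log(`Hello, ${name}`),
    onFailure: () => console.log("No user was found"),
  });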

An Interesting Library

To wrap up, I'd like to mention that although JavaScript makes considerable efforts to bring functional programming closer to developers, there are libraries that offer very interesting utilities to adhere to these principles.

Specifically, I'm talking about Effect, a TypeScript-based library designed precisely to fulfill the last principle, referential transparency.

I'm currently developing an API based on this library, which I'll talk about in more detail in the future, but its main purpose is to contain side effects in structures called "effects" (Effect<C,E,A>), which are similar in spirit to monads but designed to be simpler and friendlier to work with.

With this, I bid farewell, my dear devs. I hope I've convinced you of the benefits of functional programming principles and that you start applying them in your code.

Best regards, and until next time!