3 underrated things great developers do

People are attracted to sexy, complex-sounding things because it’s cool to talk about them, and they can act like it’s going to fix all of their problems.

Take personal health for example. It’s cool to talk about intermittent fasting, biohacking and the latest fitness trends. Not cool to talk about eating more fruits and vegetables, getting enough sleep and consistently moving your body.

Software engineering is no exception. It’s really cool to talk about microservices, event sourcing, chaos engineering and Kubernetes. Not cool to talk about legacy code, documentation and writing code that other people can understand. People are suffering from the silver bullet syndrome (spoiler alert: nothing’s going to fix all of your problems).

It’s the software engineering version of Nailing the Basics is Simple, not Easy:

We know we should test our code, but we’d rather spend time applying that design pattern we read about.

We know our architecture should be documented, but we’d rather spend time introducing that cool new library.

We know we should limit work in progress, but we’d rather spend time starting cool new features.

So let’s have a look at a few basic, underrated things you can do to make a difference for your team.

1. Make an effort to understand legacy code

Legacy code is everywhere. Everybody knows we should deal with it, but nobody wants to. We’re quick to dismiss it with "this code is horrible, we should rewrite it", and the author is perceived as a bad programmer.

There is a lot of bad code out there, and yes, we should do better. But remember: code is often written with the best intentions, and you don’t know the context in which it was created.

When you get into a new codebase, everything tends to look like chaos. When you look closer, you can often find patterns that make it easier to understand. It’s usually not as bad as it looks at first sight. Plus, if you ever want to replace it with something new, you’d better know what it’s doing.

This brings us to the next one:

2. Share what you learn

I’m not talking about: "Here’s how I heard they do X at FAANG, and by the way: we should do this here too!"
but rather: "Here’s what I found out about this undocumented part of the code."

When you’re new to a codebase, you have a unique view on it. Use this unique view to make it easier for the developers coming after you. Ask yourself: "What piece of information was I missing? What made the codebase click for me?" Focus on patterns that help someone understand the code as a whole, or a notoriously difficult part of it. Create diagrams (see the C4 model for inspiration) with some text and share them with your colleagues, either in your team chat or in a dedicated knowledge-sharing session.

3. Write code that other people can understand

Unless you’re writing some temporary internal tool, code will be read more than it’s written. Even if the rest of the code is unreadable spaghetti, you can still make the code you add and update more readable. Write clean code and provide tests.

The underlying trait

What is the underlying trait of people who do these things?

Empathy!

Accept that nobody’s perfect. Show that you care about doing better as a team and making things suck less for everybody:

  1. Make an effort to understand legacy code
  2. Share what you learn
  3. Write code that other people can understand

Want to learn more? Sign up for the newsletter!

You’ll receive more content like this that will help you grow as a full stack developer.


Automatic HTTPS for all your development servers

Every developer gets into this situation from time to time: you’re developing a new feature, all the tests pass, you run the application, test everything thoroughly, and the result looks great! So you push the code, it gets deployed and… it doesn’t work. What the hell is going on?! It works on my machine!

To prevent these things from happening, we try to have our development environment match the production environment as closely as possible. But matching it exactly is not always possible.

One thing that’s often different is that the development environment uses HTTP while the production environment uses HTTPS. Browsers do their best to mitigate these issues: most things that require HTTPS in production, like service workers and PWAs, will also work on localhost. There are a few exceptions though. Things that often cause trouble are secure cookies and mixed content.

Luckily there’s a way to use HTTPS for all your development servers without having to update the code: an HTTPS-enabled reverse proxy. This guide will go through setting up Traefik proxy as an HTTPS-enabled reverse proxy on your development machine.

The code for this guide is also available on GitHub.

Prerequisites

mkcert

Install mkcert: web.dev (Google) has an excellent guide on using HTTPS in development, so this guide will not go through all of that. There is only one downside to their approach: each server needs HTTPS to be configured separately. This guide aims to fix that.

Follow the guide on web.dev until you’ve installed mkcert, or follow mkcert’s official installation instructions; we will continue from there. You don’t need to generate any certificates for localhost or configure a server.

If you’re using Windows Subsystem for Linux (WSL), mkcert -install will not install the CA certificate for browsers running on Windows. You will have to let them trust the root CA certificate manually. You can find out where the root CA certificate is stored by running mkcert -CAROOT.

Docker Compose

You will need Docker Compose to run the reverse proxy, so make sure you have Docker and Docker Compose installed on your system.

Try docker compose --help or docker-compose --help to see if you already have it installed.

Overview

There are 3 high-level steps you need to go through:

  1. Pick a domain name and generate the certificates for it
  2. Configure and run the reverse proxy
  3. Make sure your domain name resolves to localhost

You will learn how to configure Traefik proxy to forward URLs of the form https://<port>.localhost.test to http://localhost:<port>. That way, you can rely on Traefik to handle encryption for you without making any changes to your development server.

Here’s an overview of the solution:

Diagram: Traefik as HTTPS reverse proxy

Pick a domain name and generate the certificates for it

This guide will use <port>.localhost.test, but you can pick your own domain name if you like. Just make sure there is room for the <port> subdomain somewhere. We use .test as the top-level domain because it is intended for testing purposes and won’t interfere with publicly available domains. Since you’re trying to resemble a production environment as closely as possible, avoid domains that would receive special treatment, like domain names ending in .localhost.

To generate the certificate, run mkcert "*.localhost.test". You should get 2 files:

  • _wildcard.localhost.test.pem, the certificate
  • _wildcard.localhost.test-key.pem, the certificate’s private key

That’s it 😊

Configure and run the reverse proxy

Create a docker compose file called docker-compose.yml:
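The exact file is in the repository mentioned above; as a minimal sketch, assuming Traefik v2 with a single HTTPS entry point on port 443, it could look like this:

```yaml
# docker-compose.yml — a minimal sketch; the file in the repository may differ in details
services:
  traefik:
    image: traefik:v2.10
    ports:
      - "443:443"                                    # all HTTPS traffic enters here
    volumes:
      - ./traefik.yml:/etc/traefik/traefik.yml:ro    # static Traefik configuration
      - ./config:/config:ro                          # dynamic configuration (watched by the file provider)
      - ./_wildcard.localhost.test.pem:/certs/_wildcard.localhost.test.pem:ro
      - ./_wildcard.localhost.test-key.pem:/certs/_wildcard.localhost.test-key.pem:ro
```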

Since docker-compose.yml mounts a Traefik configuration file called traefik.yml, you’ll have to create that too:
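Again a sketch, matching the mounts from the docker-compose.yml above: one HTTPS entry point, the dashboard enabled so we can test the setup later, and the file provider pointed at the mounted config directory:

```yaml
# traefik.yml — static configuration (sketch)
entryPoints:
  websecure:
    address: ":443"        # the HTTPS entry point published in docker-compose.yml

api:
  dashboard: true          # expose the dashboard so we can verify the setup

providers:
  file:
    directory: /config     # load dynamic configuration from the mounted config directory
    watch: true            # pick up changes without restarting Traefik
```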

You just told Traefik to load dynamic configuration from files using the file provider.

This file provider will look for configuration files in the config directory you mounted in docker-compose.yml. Create the config directory along with one file inside it: config/https-localhost-test.yml. This is where the magic happens.

It’s a long file, but there’s a lot of repetition. You can find the full version here.
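Here’s an abbreviated sketch of what it could look like, reduced to a single port; the full version repeats the router/service pair for every port from 8050 to 8099, with 8099 routed to Traefik’s dashboard (used to test the setup below):

```yaml
# config/https-localhost-test.yml — dynamic configuration (abbreviated sketch)
http:
  routers:
    port-8080:
      rule: "Host(`8080.localhost.test`)"
      entryPoints: ["websecure"]
      tls: {}
      service: "port-8080"
    # …one router like the above for every port from 8050 to 8098…
    dashboard:
      rule: "Host(`8099.localhost.test`)"
      entryPoints: ["websecure"]
      tls: {}
      service: "api@internal"           # Traefik's own dashboard

  services:
    port-8080:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:8080"
    # …one service like the above for every port from 8050 to 8098…

tls:
  certificates:
    - certFile: "/certs/_wildcard.localhost.test.pem"
      keyFile: "/certs/_wildcard.localhost.test-key.pem"
```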

From the configuration file, it should be clear that you’re telling Traefik to listen for requests in the form https://<port>.localhost.test and forward them to http://host.docker.internal:<port>. But what’s up with this host.docker.internal stuff? Why isn’t it localhost?

If you think about it, using localhost doesn’t make sense here. From the container’s perspective, localhost is anything that’s running inside the container. But only Traefik is running inside the container. Development servers run on the machine that’s running Docker. You need a mechanism to connect to those services.

In Docker terms, "the machine that’s running Docker" is called the host. Docker Desktop has a special DNS name for the host: host.docker.internal. That’s the DNS name from the configuration file.

There is one gotcha though: it won’t work if you’ve installed Docker without Docker Desktop. Luckily there’s an easy workaround. You can add this extra_hosts section to the traefik service in docker-compose.yml:
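A common way to do that is with Docker’s special host-gateway value (supported by Docker Engine 20.10 and later):

```yaml
services:
  traefik:
    # …existing configuration…
    extra_hosts:
      # host-gateway resolves to the host's IP from inside the container
      - "host.docker.internal:host-gateway"
```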

Traefik should now start without any problems when you run docker compose up:

Console output for docker-compose up

Make sure your domain name resolves to localhost

You’re almost ready to test, but there is still one thing missing. When you type https://8080.localhost.test into your browser or run curl https://8080.localhost.test/products from the command line, your OS will ask a DNS server "What’s the IP address of 8080.localhost.test?", and that DNS server will have no idea. Luckily, the hosts file can help you with that.

You will find your hosts file at /etc/hosts on Unix-based systems and C:\Windows\System32\drivers\etc\hosts on Windows. Open the file and add the following lines:
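A sketch, with one entry per proxied port (8050 through 8099 in this guide):

```
# map every <port>.localhost.test name to the local machine
127.0.0.1 8050.localhost.test
127.0.0.1 8051.localhost.test
# …and so on, one line per port, up to…
127.0.0.1 8099.localhost.test
```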

Testing the setup

You’re now ready to test!

For a first test, see if you can open https://8099.localhost.test in your favourite browser. You should see Traefik’s dashboard.

Next, try your own development server. If you don’t have an application or an API to test with, you can clone the repository that belongs with this guide. It has a small test application that makes a GET request, opens a WebSocket and uses Server-sent events. The test application is in the test-app directory, so open a command line window, navigate to the test application directory and run npm install and then npm start. If you don’t have the npm command, you will have to install Node.js first.

Once you see the message "API Server is running on port 8080", open https://8080.localhost.test in your browser. You should see something like this:

HTTPS test application

Conclusion

It takes some time to set up, but once it’s running, you can use HTTPS and WSS for all your development servers without making any changes to the servers themselves. This guide configures ports 8050 – 8099. If you want to support a wider range of ports, you can add more configuration to https-localhost-test.yml and to your hosts file.

No more "it works on my machine" for HTTPS-related issues!


Want to learn more? Sign up for the newsletter!

You’ll receive more content like this that will help you grow as a full stack developer.


Avoid the pain of mocking modules with dependency injection

When you’re unit testing, there is no way around it: from time to time you’ll need mocking. Modern testing libraries have support for mocking modules. This means injecting mocks into the module system. Both Jest and Vitest have support for this.

When you look at the examples in the documentation, it seems simple enough. But when you try to do it in a production code base, it becomes a big frustrating mess:

It would either work, or not work at all. It just felt completely random at times.

I have no clue what is happening behind the scenes when I run my tests.

It just feels so extremely magical.

Those are quotes from a thread I came across. Luckily, it doesn’t have to be this way.

Mocking modules is a TDD anti-pattern

Before we get into dependency injection (DI), let’s see how mocking modules causes so much pain.

Since we hook into the module system, tests that share modules can no longer run in isolation; they will affect each other: if a module is mocked in one test, it will have to be mocked in other tests. And if you’re not careful, mocks from one test will creep into other tests. This is a TDD anti-pattern:

you should always start a unit test from a known and pre-configured state

A mock from one test creeping into another test is not "a known and pre-configured state".

Modern testing libraries like Jest and Vitest do a good job of avoiding these problems because they:

  • Keep the module systems of test files separated
  • Always run tests from one file in the order that they are defined
  • Suggest clearing all mocks after each test with an afterEach()

But you still have to be careful what you’re doing:

  • Want to mock a module in one test, but not in the other? Too bad, you’ll have to put the tests in different files.
  • Forgot to clear mocks in a test? Pain.
  • Using it.concurrent() to run tests in parallel? Pain!

And on top of that: you need to know the library-specific APIs to inject your mocks into the module system.

Maybe this doesn’t seem too bad, but things can get quite subtle: you run your tests locally, everything seems fine, so you push the code. Your CI pipeline runs the tests, everything green, great! Life is good.
The next day you push some completely unrelated code and the test from the day before suddenly fails. WTF is going on?!

This is what we call "flaky" tests. They’re pretty much bugs in your tests, and the worst kind of bugs at that: the ones you can’t reproduce predictably because they only happen sometimes. That’s what happens when your tests do not start from a "known and pre-configured state".

Plain JS Dependency Injection

What if I told you that all of this pain can be avoided by applying just one technique? No new syntax or tool, just plain JS. Introducing: dependency injection (DI).

Wikipedia describes the goal of DI like this:

dependency injection aims to separate the concerns of constructing objects and using them, leading to loosely coupled programs.

It says "constructing objects" because this technique has its origin in object-oriented programming. You don’t need objects (or classes for that matter) to make use of it, we’ll be constructing a function.

This is the function we will be testing:
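Something like this, a sketch standing in for the original snippet (the file and endpoint names are made up): it fetches products using the default Axios instance.

```js
// getProducts.js
import axios from 'axios';

export async function getProducts() {
  // tightly coupled to the default axios instance
  const response = await axios.get('/api/products');
  return response.data;
}
```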

And a Jest test that mocks the Axios module:
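Roughly like this (again a sketch): jest.mock() replaces the whole axios module in the module system.

```js
// getProducts.test.js
import axios from 'axios';
import { getProducts } from './getProducts';

// inject a mock for the entire axios module into the module system
jest.mock('axios');

test('returns the products from the API', async () => {
  axios.get.mockResolvedValue({ data: [{ id: 1, name: 'Keyboard' }] });

  const products = await getProducts();

  expect(axios.get).toHaveBeenCalledWith('/api/products');
  expect(products).toEqual([{ id: 1, name: 'Keyboard' }]);
});
```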

We have to mock the Axios module here because there is no other way to make getProducts() call a different Axios instance. The current implementation is tightly coupled to the default Axios instance.

We can fix that with DI. Let’s see what the code looks like:
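A sketch: a factory function receives the Axios instance (or anything with the same shape) as an argument and returns the getProducts function.

```js
// getProducts.js
import axios from 'axios';

// the factory separates constructing getProducts from using it
export function createGetProducts(httpClient) {
  return async function getProducts() {
    const response = await httpClient.get('/api/products');
    return response.data;
  };
}

// the version used by production code, wired to the real axios instance
export const getProducts = createGetProducts(axios);
```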

In the tests we can now use the factory function createGetProducts() to create our own version of getProducts with a mocked Axios instance. No need to get fancy, the mock is also plain JS:
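For example (a sketch):

```js
// getProducts.test.js
import { createGetProducts } from './getProducts';

test('returns the products from the API', async () => {
  // a plain JS stand-in for the only part of axios that getProducts uses
  const httpClient = {
    get: async () => ({ data: [{ id: 1, name: 'Keyboard' }] }),
  };
  const getProducts = createGetProducts(httpClient);

  const products = await getProducts();

  expect(products).toEqual([{ id: 1, name: 'Keyboard' }]);
});
```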

By using DI instead of module mocking:

  • Your tests can run in isolation
  • You do not need to know any framework-specific syntax / module magic to inject a mock
  • Your code is more loosely coupled

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.


Composing Software by Eric Elliott: what you need to know

Functional programming (FP) is a big deal in the Javascript community. It’s one of the fundamentals of building good software and great to learn if you want to grow as a developer.

FP has its roots in math academia, so a lot of the learning material is theoretical, making it hard to wrap your head around. It’s not always easy to see how you can apply it in your day job. The more practical information is scattered and often hidden in documentation for specific libraries.

I don’t know of any course or book that’s the reference on Functional Programming in Javascript. I have recommended Composing Software by Eric Elliott, but apparently some people don’t like that.

I do agree with some of the criticism, but it makes it sound like the whole book is rubbish, and that’s not true at all. You can still learn a lot from it: most of the advice is good, it doesn’t go too heavy on the theory and the examples are practical.

There are a few things you should know: two OOP principles that come up in the book, object composition and the open-closed principle, are not used correctly. The book describes a forgotten history of OOP before getting into these things, but that’s not a good reason to confuse readers by redefining well-known OOP principles. Let’s see what’s going on here.

The wrong definition of Object composition

Skip the chapter on object composition. It explains that there are 3 types of object composition, but I’m not sure where these are coming from. It’s confusing.

"What is object composition" starts with a quote from Wikipedia, but it comes from the Composite data type page and not Object composition.

Then 3 types of object composition are described:

  • Aggregation: I don’t know where this definition of aggregation comes from, but in the classic GoF book and in UML, aggregation has a different meaning.
  • Concatenation comes back later in the book in the chapter about functional mixins. We will come back to this in the next section.
  • Delegation uses a form of object composition, so I can kind of see how it relates, but it’s closer to class inheritance than object composition. It’s not very relevant to what you should know about object composition and why you should favor it over class inheritance.

Mixins are not "object composition" in "favor object composition over class inheritance"

In the chapter "Functional Mixins" the confusion about object composition continues:

Mixins are a form of object composition

And then a few paragraphs further:

it is the most common form of inheritance in JavaScript

So… are mixins composition or inheritance?

Technically speaking, describing mixins as "object composition" is not wrong if you only consider the meaning of the words in English: you’re composing objects together to form a new object. But when we talk about object composition in OOP, this is not what we mean. Functional mixins are a form of multiple inheritance, not object composition.
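To make the distinction concrete, here’s a small illustration of my own (not an example from the book): a functional mixin copies behaviour from several sources into one object, which is effectively inheriting from multiple parents.

```js
// functional mixins: each mixin adds behaviour to the object it receives
const withFlying = (o) => ({
  ...o,
  fly() { return `${this.name} is flying`; },
});

const withSwimming = (o) => ({
  ...o,
  swim() { return `${this.name} is swimming`; },
});

// the resulting object "inherits" behaviour from both mixins
const duck = withSwimming(withFlying({ name: 'Duck' }));
console.log(duck.fly());  // Duck is flying
console.log(duck.swim()); // Duck is swimming
```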

It’s not a bad technique; it’s useful to know and use. Just remember that you’re doing multiple inheritance. The book seems to suggest that you’re applying "favor object composition over class inheritance", but this is not true. In fact, the "Caveats" section has this warning:

Like class inheritance, functional mixins can cause problems of their own. In fact, it’s possible to faithfully reproduce all of the features and problems of class inheritance using functional mixins.

Composition is not necessarily harder with classes

In "Why composition is harder with classes" the book tries to make the point that composition is harder with classes because mixins are harder with classes. Since mixins are not object composition, this is not correct.

What you should remember from this chapter is:

  • A factory function is more flexible than new or class.
  • Changing from new or class to a factory function could be a breaking change.
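To illustrate the second point, here’s a small example of my own (not from the book) of one way the switch can break callers:

```js
// before: a class, instantiated with `new`
class User {
  constructor(name) { this.name = name; }
}
const ada = new User('Ada');
console.log(ada instanceof User); // true — callers may rely on this

// after: a factory function returning a plain object
function createUser(name) {
  return { name };
}
const bob = new createUser('Bob');      // still runs: the returned object wins
console.log(bob instanceof createUser); // false — instanceof checks now fail
// and subclassing (`class Admin extends …`) no longer behaves as expected
```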

A clumsy reference to the open-closed principle

Further in this chapter, there is a section called "Code that Requires new Violates the Open/Closed Principle". It starts like this:

Our APIs should be open to extension, but closed to breaking changes.

This is good advice and the open-closed principle has a similar high-level goal, but it’s not what the open-closed principle means.

The open-closed principle generally means what Wikipedia calls the polymorphic open–closed principle. We would use interfaces rather than abstract base classes these days, but Robert C. Martin’s article is in C++ and from 1996. The concept of interfaces didn’t exist in the language; abstract base classes were the closest thing.

That’s not acceptable, aren’t there any other resources to learn from?

Specifically for Javascript there are some alternatives:

Personally, I learned functional programming from an online course on Coursera: Functional Programming Principles in Scala. I didn’t learn Scala just to get into functional programming, and it’s a lot of work to learn a new language and translate what you learned to Javascript just for that. But if you’re serious, and you’re up for a challenge, feel free to learn Scala. Or go find the definitive learning material for purely functional languages like Haskell or Lisp. It’s not the most efficient way, but you will learn a lot, I promise!

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.


Is clean code even worth the effort?

Have you ever spent a lot of time cleaning, refactoring, testing and documenting your code, been proud of the result, and then… nobody noticed? Meanwhile, some developers on your team just hack something together, leave horrible code behind and get praised for delivering "quickly".

Frustrating, right? You start to wonder: is clean code even worth the effort?

The answer is: Yes!

Let’s take a look at a few reasons why.

Working in a bad code base sucks

I’m sure every developer remembers at least one of those hours-long debug sessions where it’s mind-numbing just parsing what the heck is even going on, let alone solving the actual problem. Working in a bad code base is frustrating and sucks!

Making changes to a bad code base is slow. On top of that, the frustration has a negative impact on motivation, which hurts the team’s ability to do good work even further.

That was a bit of a no-brainer, so let’s get to the real stuff.

Clean code pays off more quickly than we expect

Most people know that we try to write clean, readable code because bad code comes back to haunt us over the long term, but we underestimate just how quickly that happens.

The question "Is the extra work clean code requires worth the effort?" is part of a more general question: "Is building high quality software worth the effort?"

In 2019, the great Martin Fowler published an article titled: Is High Quality Software Worth the Cost?.

In the chapter "Visualizing the impact of internal quality", he admits that you can be faster in the short-term with quick, low-quality code, but points out that people underestimate how quickly this becomes a problem. He visualizes it with a nice (pseudo-)graph:

High quality vs low quality

I’ve seen this happen a number of times. When someone says "bad code will slow us down (cost us money) in the future", people seem to think that we’re talking about some far-away future, like more than a year from now. But it doesn’t take long for low-quality code to start slowing us down. For larger teams, I agree with Martin Fowler: it’s a matter of weeks. For smaller teams, it may take a little longer, but we’re still talking no more than a few months.

The only situation where you can be faster by quickly hacking something together is when all of these apply:

  • Short-term results are the only thing that matters
  • It’s new code or clean existing code
  • Everything fits in your head (making it fit into someone else’s head will be a problem though)

That’s a lot of ifs and there are very few situations where it makes sense to do this. By quickly hacking something together you pretty much always shoot yourself (or the company you work for) in the foot.

Writing clean code doesn’t have to take forever

Writing clean code might seem like a lot of effort at first, but the more you practice, the cleaner your code will be from the start, to the point where you’re actually faster writing clean code.

If you’re a perfectionist, try to be pragmatic, because there are diminishing returns: you can get your code to, say, 80% pretty quickly, but trying to write perfectly clean, textbook-definition code can take a lot of time and is not the goal. You can tweak your code forever and still find things to change.

Your main goal should be that, when a random developer gets dropped into the code to make a change:

  • The change can be made in one place without changes in many other places (loose coupling)
  • It is clear what the code is doing and where the change should be made without having to carefully read every line to understand what’s going on.

In summary, write clean code because:

  • Working in a bad code base sucks
  • Bad code will come back to haunt you more quickly than you think
  • It doesn’t have to take forever

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.


4 tips to understand your code 4 months from now

As software developers, we’ve all been there: you’re writing some code, you know it’s dirty, but the deadline‘s coming closer, and you go: "it’s working now, I’ll just refactor it later", and we all know how that turns out.

This repeats a few times, and before you know it, it’s just one big bowl of spaghetti that you don’t want to touch anymore.

Let’s look at a few things you can do to help the next developer who looks at your code. After all, that developer could be future you!

We’ll look at a solution for the first puzzle of Advent of Code 2021. The puzzle has 2 parts; our solution covers part 2.

The assignment goes like this:

  • You get a list of integers
  • Use a sliding window to go over them
  • Calculate the sum of each sliding window
  • Count the number of times the sum of each window increases

This code solves the puzzle:
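Something along these lines: a working but messy sketch standing in for the article’s original snippet (which isn’t included here).

```js
// version 1: it works, but good luck reading it
function solve(i) {
  let c = 0;
  let p;
  for (let x = 0; x + 2 < i.length; x++) {
    const s = i[x] + i[x + 1] + i[x + 2];
    if (p !== undefined && s > p) {
      c++;
    }
    p = s;
  }
  return c;
}
```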

When you glance at the code, you can kind of get an idea of what’s happening, but you have to read the whole thing carefully to understand or make a change. It’s not clean code.

It’s not necessarily a problem to have this kind of code when you’re exploring a solution, as long as everything fits in your head. But it’s not the kind of code that should end up in production. As software developers, we read code much more than we write it. So if we want to be efficient as developers, we should prioritize writing code that is easy to read and understand, not quick to write.

The first advice you typically get is:

  • Use clear names
  • Document unclear code with comments

So let’s do that.

This is exactly the same code, but with better names and some clarifying comments:
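A sketch of that second version:

```js
// version 2: the same code, with better names and clarifying comments
function countDepthIncreases(depths) {
  let increaseCount = 0;
  let previousWindowSum;
  // slide a window of 3 measurements over the depths
  for (let i = 0; i + 2 < depths.length; i++) {
    // the sum of the current sliding window
    const windowSum = depths[i] + depths[i + 1] + depths[i + 2];
    // count the windows whose sum is larger than the previous window's sum
    if (previousWindowSum !== undefined && windowSum > previousWindowSum) {
      increaseCount++;
    }
    previousWindowSum = windowSum;
  }
  return increaseCount;
}
```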

That looks better already, doesn’t it? It’s clear that there’s something going on with sliding windows, making sums and counting increases.

Now comments are all well and good, but as Robert C. Martin points out in Clean Code, they have a tendency to get out of date. Let’s assume for a minute that the requirements change (don’t they always?): instead of the sum, we should use the median to detect increases. In this kind of scenario it often happens that someone updates the code but doesn’t update the comments. Now the comments say "sum" but the code says "median", and it’s confusing instead of clarifying. We might spot that quickly in our small example here, but on a large project it can get really crazy when you’ve been staring at the same code for hours and suddenly realize that the comment you trusted was lying to you all along!

That’s why it is better if you can explain yourself in code.

Let’s see what that looks like:
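A sketch of that version, with helper functions extracted and named so the comments are no longer needed:

```js
function countDepthIncreases(depths) {
  let increaseCount = 0;
  let previousWindowSum;
  for (let i = 0; i + 2 < depths.length; i++) {
    const windowSum = sumOfWindowAt(depths, i);
    if (isIncrease(windowSum, previousWindowSum)) {
      increaseCount++;
    }
    previousWindowSum = windowSum;
  }
  return increaseCount;
}

function sumOfWindowAt(depths, index) {
  return depths[index] + depths[index + 1] + depths[index + 2];
}

function isIncrease(current, previous) {
  return previous !== undefined && current > previous;
}
```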

That’s quite clear, isn’t it? Without any comments!

By extracting functions and giving them meaningful names we can clearly communicate what’s happening without using comments. We can explain ourselves in code. It’s clear from the code that we’re using sliding windows and that we’re counting increases based on the sum of these windows.

Could we improve it even more?

Let’s compare the code to the description of the solution in plain English:

  • Use a sliding window to go over them
  • Calculate the sum of each sliding window
  • Count the number of times the sum of each window increases

Is it clear from the code that this is happening? It’s certainly clearer than the first version, but everything is in one for loop and your fellow developers (and future you) would still need to read the code carefully to understand what’s happening.

Let’s see what happens if we decouple our solution into self-contained steps:
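A sketch of the decoupled version:

```js
function countDepthIncreases(depths) {
  const windows = calculateWindows(depths, 3);
  const windowSums = windows.map(sum);
  return countIncreases(windowSums);
}

function calculateWindows(values, windowSize) {
  const windows = [];
  for (let i = 0; i + windowSize <= values.length; i++) {
    windows.push(values.slice(i, i + windowSize));
  }
  return windows;
}

function sum(values) {
  return values.reduce((total, value) => total + value, 0);
}

function countIncreases(values) {
  return values.filter((value, i) => i > 0 && value > values[i - 1]).length;
}
```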

countDepthIncreases() is only 3 lines now, one for each step we described in English. It’s not necessary to go look at the implementation of the different functions to see what’s going on here. When a change is needed, it’s quite clear where the change needs to happen, and because the different steps are self-contained, it is less likely that a change to one part will break another part of the solution.

Need to use median instead of sum to detect increases? Ok, no problem, write a median() function and replace it:
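For example (a sketch):

```js
function countDepthIncreases(depths) {
  const windows = calculateWindows(depths, 3);
  const windowMedians = windows.map(median);
  return countIncreases(windowMedians);
}

function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const middle = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[middle - 1] + sorted[middle]) / 2
    : sorted[middle];
}
```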

Need to count decreases instead of increases? Ok, no problem, write a countDecreases() function and replace it:
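For example (a sketch):

```js
function countDepthDecreases(depths) {
  const windows = calculateWindows(depths, 3);
  const windowSums = windows.map(sum);
  return countDecreases(windowSums);
}

function countDecreases(values) {
  return values.filter((value, i) => i > 0 && value < values[i - 1]).length;
}
```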

Apart from readability, there is an added benefit to these self-contained steps: you can mix and match to create different variations of the solution and reuse parts of it in something completely different:

  • Need to count increases of both sum and median? Ok, you can add more 3-line functions for different variations
  • Need to do something else with sliding windows? Ok, you can reuse calculateWindows()

In summary:

  • Use clear and meaningful names
  • Always try to explain yourself in code
  • Use comments only when you can’t explain yourself in code
  • Decouple your solution into self-contained steps that do 1 thing

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.


How to grow as a developer without getting overwhelmed

So you would like to grow as a developer?
Great! There’s a lot of good information out there.

But there is so much! You can feel like there’s a massive mountain of things to learn, and you don’t know if you have time to even scratch the surface.
It’s overwhelming, you’re worried that you’re not meeting expectations, and you’re freaking out.

Don’t freak out, please don’t.
Instead, ask yourself this question:
Has anyone brought up your performance?
The answer is probably no.

That’s because this is normal. It’s impossible to know everything. No one knows everything, not even that mythical developer you run into on social media who seems to be an expert on every subject.

This is especially true as a full stack developer: you’re often expected to handle a wide range of tasks. Being able to learn what you need for the task at hand is important, but technology changes fast, especially if you’re dealing with frontend development.

So what do you do?

As a junior developer, learn about the frameworks and libraries you use at your job. It will boost your confidence and make you stand out. Avoid learning the "best" library for X or knowing all the frameworks. That’s just FOMO. The technologies you use do not make or break you as a developer.

When you start to get more experience, you will encounter different environments with different technologies. You could focus on learning all the ins and outs of those technologies every time, but how much will that effort pay off in the future? When you get a new position or change jobs, a lot of that time will be wasted.

If you want to grow without getting overwhelmed, here are 2 things to focus on:

1. Learn the fundamentals of building good software. Things that won’t be obsolete in a few years and things that help you understand how technology works.

Having a solid grasp of the fundamentals enables you to be much more effective as a developer. You waste less time, learn to use new technologies faster and most importantly: you build better software both technically and functionally.

Good resources on fundamentals for full stack developers:

Tip: avoid the trap of thinking that you have to read every chapter in a book to be able to "check it off" or to "say that you read it". If you read one chapter, and you are able to apply it and get value from it, that’s great!

2. Have a high-level idea of the technologies that are relevant to you, without going deep.

You can learn about new technologies, but you only need a high-level idea of what they’re about.

Ask questions like:

  • What problem does this technology solve, what’s a realistic use case scenario?
  • What are the ups and downs?
  • Why are people excited about it?

If you do this, you will have a mental toolbox of technologies without spending an excessive amount of time learning them. When you need to solve a problem that a technology addresses, that is the time to learn more about it. It’s part of your job, so it’s ok to do it on the job.

No one can expect you to know everything.

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.


Is object-oriented programming (OOP) dead in frontend development?

If you’re following frontend experts on social media, you hear a lot about how great (reactive) functional programming (FP or RFP for short) is for frontend development. Or even that object-oriented programming (OOP) is bad and that we should no longer use it.
Then you go to your day job, you’re doing OOP and you feel like you’re the only one on Earth still using it.

The truth is, everybody in frontend development will encounter OOP in some form sooner or later, even if they claim to be doing pure (R)FP. Don’t let social media mislead you: OOP is still widely used in the industry, and Javascript is very much suited for OOP.

Javascript’s Web APIs and many popular open-source libraries use OOP techniques. DOM elements use inheritance to define different kinds of elements and the observer pattern for events.
The most popular frameworks, React and Vue.js, still have the "OO" approach as the default in their getting-started tutorials, even though they have a full-blown functional API.
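You can see both techniques at work in any browser console (my own illustration, not tied to a specific framework):

```js
// DOM elements are built with prototypal inheritance…
const button = document.createElement('button');
console.log(button instanceof HTMLButtonElement); // true
console.log(button instanceof HTMLElement);       // true
console.log(button instanceof EventTarget);       // true (top of the chain)

// …and events use the observer pattern: you subscribe to be notified
button.addEventListener('click', () => console.log('button clicked'));
```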

A lot of attention is going to FP these days, but OOP isn’t evil and certainly isn’t dead. Javascript is a multi-paradigm language; it allows you to use OOP and FP techniques together.

FP is great, but there is no reason to limit yourself to it because "OOP is dead" or "only FP bro". Learning more about OOP might prove more useful for your career than learning the latest library or framework that will be deprecated by the time you get a chance to use it.
Grasping foundations enables you to learn new technology more quickly, which is very relevant in frontend development.

Want to learn more? Sign up for the newsletter!
You’ll receive more content like this that will help you grow as a full stack developer.
