<?xml version="1.0" encoding="UTF-8" ?>
  <rss version="2.0">
    <channel>
        <title>Fabian Waller</title>
        <link>https://www.fabianwaller.de</link>
        <description>This is my personal blog as RSS feed</description>
        <language>en</language>
        <copyright>Fabian Waller</copyright>

        <item>
        <title>Flexible Client-Side Error Handling of different Server Errors</title>
        <description>How I handle different error messages from a server in the client, focusing on separating concerns and flexibility.</description>
        <link>https://www.fabianwaller.de/blog/error-handling</link>
        <pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
        <content>## Introduction

Displaying error messages on the client side is critical to providing a seamless user experience. As a server function becomes more complex, the possibility of having multiple different error messages increases.
Decoupling error types from their visual representation ensures that the client can define and display error messages in a consistent way. This is especially important when considering internationalization, as error messages may need to be displayed in different languages using some form of client-side translation hook.
In addition, separating concerns ensures a cleaner and more maintainable codebase.

In this post, we will explore how to handle different error messages in a client application using Next.js server actions; the approach applies to any client-server architecture. We will define error messages on the client and display them based on server-side errors, without comparing untyped strings, while keeping the error type decoupled from its visual representation.

## The Problem

### Problem with server-side error handling

As suggested in the [Next.js Server Actions documentation](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations), the server-side code can throw a new error that will be caught by the nearest error boundary on the client. This is the most common way to handle errors in server actions.

```ts
&apos;use server&apos;

export async function createUser(formData: FormData) {
  try {
    // Mutate data
  } catch (e) {
    throw new Error(&apos;Failed to create user&apos;)
  }
}
```

In this example, the error message itself is defined by the server as an untyped string.  This is a common approach and works well for simple applications. However, as described above, we want to define custom typed errors on the client side.

### Example server action with serializable errors

Returning serializable error objects from the server is an alternative approach that gives more control over the structure of the response. Look again at the Next.js example for server-side validation and error handling.

```ts
&apos;use server&apos;

import { z } from &apos;zod&apos;

const schema = z.object({
  email: z.string({
    invalid_type_error: &apos;Invalid Email&apos;,
  }),
})

export default async function createUser(formData: FormData) {
  const validatedFields = schema.safeParse({
    email: formData.get(&apos;email&apos;),
  })

  if (!validatedFields.success) {
    return {
      errors: validatedFields.error.flatten().fieldErrors,
    }
  }

  // Mutate data ...

  return {
    message: &apos;User created&apos;,
  }
}
```

In this approach, the server still defines the error message. Let&apos;s explore a solution where the client defines the error message.

## Solution

### Defining error types

First, we use TypeScript to define a type for the errors, so that the server can only return certain well-known errors.

```ts
export enum ErrorType {
  UNAUTHORIZED = &apos;UNAUTHORIZED&apos;,
  DEFAULT = &apos;DEFAULT&apos;,
  // Add more error types as needed
}
```

### Defining error entities

An actual error in the client is represented by an entity for each `ErrorType`. You can define whatever properties you need in your user interface to display the error (here, `AlertColor` from MUI for the severity).

```ts
export type ErrorEntity = {
  title: string;
  text: string;
  severity: AlertColor;
};
```

### Client side error handling

In our UI component, we invoke the server action by clicking a button. If an error occurs, we get back a serialized enum string and map it type-safely to an error entity. Notice how we can define the error messages inside the client component, with access to all client hooks such as translation hooks.

```tsx /ErrorType/ /ErrorEntity/
&apos;use client&apos;

import { useState } from &apos;react&apos;;
import { useTranslation } from &apos;react-i18next&apos;;
import type { AlertColor } from &apos;@mui/material&apos;;

// Adjust these import paths to your project structure.
import { serverAction } from &apos;./actions&apos;;
import { ErrorType, type ErrorEntity } from &apos;./errors&apos;;
import type { DataType } from &apos;./types&apos;;

export type ErrorState = ErrorEntity | null;

export function Signup() {
  const [error, setError] = useState&lt;ErrorState&gt;(null);
  const [data, setData] = useState&lt;DataType&gt;({});
  const { t } = useTranslation();

  const errorMessages: { [key in ErrorType]: ErrorEntity } = {
    [ErrorType.UNAUTHORIZED]: {
      title: t(&apos;Unauthorized&apos;),
      text: t(&apos;You are not authorized to perform this action.&apos;),
      severity: &apos;warning&apos;,
    },
    [ErrorType.DEFAULT]: {
      title: t(&apos;Error&apos;),
      text: t(&apos;An unexpected error occurred.&apos;),
      severity: &apos;error&apos;,
    },
  };

  const onClick = async () =&gt; {
    const res = await serverAction({ /* parameters */ });

    if (res.error) {
      setError(errorMessages[res.error]);
      return;
    }

    setError(null);
    setData(res.data);
  };

  return (
    &lt;div&gt;
      {error &amp;&amp; (
        &lt;div className={`alert alert-${error.severity}`}&gt;
          &lt;strong&gt;{error.title}&lt;/strong&gt; {error.text}
        &lt;/div&gt;
      )}
      &lt;button onClick={onClick}&gt;Sign up&lt;/button&gt;
    &lt;/div&gt;
  );
}
```

```ts
&apos;use server&apos;

import { ErrorType } from &apos;./errors&apos;;
import type { DataType } from &apos;./types&apos;;

export const serverAction = async ({
  // parameters
}): Promise&lt;{ error?: ErrorType; data?: DataType }&gt; =&gt; {
  const user = await getCurrentUser(); // placeholder: resolve the session however your app does

  if (!user) {
    return { error: ErrorType.UNAUTHORIZED };
  }
  try {
    // Mutate data ...
    return { data: &apos;abc&apos; };
  } catch (error) {
    console.error(&apos;serverAction error&apos;, error);
    return { error: ErrorType.DEFAULT };
  }
};
```
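Because `errorMessages` is typed as a mapped type over `ErrorType`, the compiler enforces that every error type has a message. Below is a minimal, framework-free sketch of that guarantee; the translation hook is stubbed with an identity function, and the names mirror the ones used above.

```ts
// Framework-free sketch of the typed mapping; names mirror the post.
export enum ErrorType {
  UNAUTHORIZED = 'UNAUTHORIZED',
  DEFAULT = 'DEFAULT',
}

export type AlertColor = 'error' | 'warning' | 'info' | 'success';

export type ErrorEntity = {
  title: string;
  text: string;
  severity: AlertColor;
};

// Stand-in for the translation hook; the client component would use t().
const t = (message: string) => message;

// The mapped type requires an entry for every ErrorType, so adding a
// new enum member fails to compile until a message is defined for it.
const errorMessages: { [key in ErrorType]: ErrorEntity } = {
  [ErrorType.UNAUTHORIZED]: {
    title: t('Unauthorized'),
    text: t('You are not authorized to perform this action.'),
    severity: 'warning',
  },
  [ErrorType.DEFAULT]: {
    title: t('Error'),
    text: t('An unexpected error occurred.'),
    severity: 'error',
  },
};

export function toErrorEntity(error: ErrorType): ErrorEntity {
  return errorMessages[error];
}
```

This sketch can be unit tested in isolation, without rendering any UI.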

## Conclusion

Everything is well typed, so misspellings and typos in error type strings are impossible. The error messages are decoupled from their visual representation, which makes it easy to change the UI without touching the error types.

If you only have one error message, you don&apos;t need this approach: just catch the error and display the appropriate error UI. However, if you find yourself comparing strings to determine which error was thrown, this is the approach I reach for.

Please let me know if you know of a better, cleaner, and more developer-friendly way to do this.</content>
   </item>
<item>
        <title>Caching and Content Delivery Networks</title>
        <description>Using a CDN platform provides predictable end-to-end system quality and performance that web application developers desire.</description>
        <link>https://www.fabianwaller.de/blog/caching-and-content-delivery-networks</link>
        <pubDate>Mon, 15 Jan 2024 00:00:00 GMT</pubDate>
        <content>## Contents

## Introduction

In modern web applications, ensuring predictable end-to-end system
quality and performance is crucial. Even minor performance issues can
have a significant impact on business, resulting in lost revenue and
damage to brand reputation.

However, the Internet&apos;s existing architecture was not designed to meet
the demanding levels of performance, reliability, and scalability that
these applications require. The Internet is made up of thousands of
distinct networks, which means that centrally-hosted content must
traverse multiple networks to reach end users. As a result, capacity
issues arise at peering points where networks exchange traffic.
Wide-area Internet communication is vulnerable to various bottlenecks,
such as latency, packet loss, network outages, inefficient protocols,
and inter-network friction. These limitations within the Internet
architecture hinder its ability to deliver static and dynamic web
content efficiently and with guaranteed end-to-end system quality.

In addition to these challenges, the Internet&apos;s basic protocols were
not designed for optimal performance. The Transmission Control Protocol
(TCP) adds significant overhead over long distances, and route
calculations for Internet traffic rely primarily on the Border Gateway
Protocol (BGP) between Autonomous Systems (ASes), which lacks knowledge
about topologies, latencies, and real-time congestion in subnetworks.

Content Delivery Networks (CDNs) attempt to mitigate these problems,
which are beyond the direct control of a web developer. CDNs emerged in
the late 1990s as critical tools for overcoming significant technical
obstacles, bridging the gap between the limited capabilities of the
Internet infrastructure and the performance requirements of web
applications. At the time, bandwidth prices were high, but
infrastructure costs were less significant. Most CDN providers therefore
aimed to minimise bandwidth requirements by distributing servers with
content caches close to end users within the existing Internet
architecture, while minimising server loads, client response times and
server availability. Since then, the Internet has evolved significantly,
with bandwidth prices falling, customer demand for rich media content
increasing and server costs rising. As more people consume digital
content, the need for enhanced security, additional cloud functionality
and support for market metrics and analytics has grown.

A highly distributed network emerges as the most effective architectural
solution, especially for interactive and bandwidth-intensive content.
CDNs enable companies to achieve high levels of performance,
reliability and cost-effective scalability, and also provide the ability
to iterate and ship faster - with much less worry about infrastructure
provisioning, capacity planning, architecture for scalability and
breaking production code.

In the following, this report will provide a broad overview of CDNs by
explaining how they work as a virtual network over the existing Internet
infrastructure, what system components are required to work together as
a web content cache, and how this differs from other caching methods.
Two different cache distribution methods are presented. It also explains
how entire applications can be made highly performant by moving
application logic closer to the user, and the additional benefits of a
recovery-oriented design philosophy.

## Overview

An origin server is the server that hosts the original version of the
content. This can be a web server, an application server, a dedicated
storage server or a database server, usually hosted in a larger data
centre. Edge servers hold additional copies of this content that are
distributed in close proximity to end users. End users only communicate
with the edge servers, which are responsible for retrieving content from
the origin server if it is not in their cache, significantly reducing the
load and bandwidth requirements on the origin server cluster.

As a result, this optimisation has a positive impact on the perceived
performance of Web services to users, as it aims to minimise bottlenecks
in the middle mile and ensure fast retrieval of cached content when
available.

### Virtual Networks

A CDN, defined as a geographically distributed network of Points of
Presence (PoPs) where edge servers are hosted in data centres, operates seamlessly over the existing Internet infrastructure as
an adaptable virtual network without requiring client software or
changes to the underlying networks. CDN nodes are typically
deployed on a widely distributed hardware infrastructure comprising tens
of thousands of servers around the world, spanning various major data
routes. The widespread presence of PoPs around the world
ensures that users can access a high-speed server in their proximity,
ideally within their local ISP&apos;s network, as shown in the figure below.

![overview](/cdn/overview-Invert_B.png)

*The edge servers are located in global distributed Points of Presence
(PoPs), which are interconnected by an optimised transport system.
Either each PoP has its own IP address (DNS-based routing) or all PoPs
share the same IP address (Anycast). In either case, users are somehow
mapped to the optimal (closest) edge server, which then retrieves the
content from the origin server if it is not already in its
cache.*

### System Components of a Delivery Network

The CDN components shown in the figure below are designed to work
together to deliver content to end users quickly and reliably.

![overview](/cdn/system_components-Invert_B.png)
*When a user requests content, the mapping system translates
the domain name into the IP address of an optimal edge server. The edge
server then checks its cache for the requested content. If the content
is cached, the edge server delivers it to the end user. If necessary, an
edge server can request content from an origin server (backend web
server, application server). The transport system is responsible for
ensuring a reliable and high performance connection for data and content
over the long distance Internet.*

1.  Request Handling: When a user initiates a URL request, the **mapping
    system** translates the domain name into the IP address of an edge
    server. This system uses data to intelligently select an edge server
    that is in optimal proximity to the end user. Content requests are
    typically algorithmically routed to nodes optimised based on
    specific objectives such as geographic location, availability (in
    terms of both current and historical server performance and network
    congestion), performance, cost considerations, or the dynamic
    likelihood that the requested content is already in cache.

2.  Subsequently, the end user&apos;s browser initiates an HTTP request to
    the obtained **edge server** IP address to retrieve the content. The
    edge server then checks its cache for the requested content. If the
    content is cached, the edge server delivers it to the end user.

3.  The primary role of the transport system is to move data efficiently
    and reliably from the origin to the edge servers. In cases where the
    content is not already cached, the edge server efficiently retrieves
    it from the origin server via the **transport system** before
    delivering it to the end user. The transport system is responsible
    for ensuring a reliable and high performance connection for data and
    content over the long distance Internet. Communication between CDN
    servers can be optimised through various techniques such as path
    optimisation and protocol enhancements. The transport system also
    accelerates non-cacheable customer content and applications by
    retrieving content or performing freshness checks from the origin
    server.
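The mapping step can be illustrated with a toy scoring function. The data shapes and the latency-only criterion here are invented simplifications; real mapping systems also weigh load, cost and the likelihood of a cache hit, as described above.

```ts
// Illustrative sketch of the mapping system choosing an edge server.
type EdgeServer = {
  id: string;
  rttMs: number;    // measured latency towards the user
  healthy: boolean; // from current server and network monitoring
};

// Pick the healthy server with the lowest round-trip time.
function pickEdgeServer(candidates: EdgeServer[]): string | null {
  let best: EdgeServer | null = null;
  for (const server of candidates) {
    if (!server.healthy) continue;
    if (best === null || best.rttMs > server.rttMs) {
      best = server;
    }
  }
  return best === null ? null : best.id;
}
```

An unhealthy server is skipped entirely, which mirrors how a CDN can proactively suspend components without affecting the overall system.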

Current status information, control messages and configuration updates
are usually available to CDN customers through a **communication and
control system**. Often a **data collection and analysis system**
systematically collects and processes data, including server and client
logs, user data, and network and server information. The **management
portal** acts as a configuration management platform and provides
analytics based on the collected data, such as audience demographic
reports and insights into user interactions with the application,
traffic metrics, monitoring, alerting, reporting and billing.

### Difference from other caching problems

While all caching mechanisms share the overarching goal of reducing
latency and improving access times, they are specifically designed for
distinct contexts. CDNs operate at the application layer within the
technology stack, focusing on delivering content to end users over the
Internet. In contrast, database buffers, file system caches and L2
caches operate at the database, operating system and hardware levels
respectively. And unlike a local browser cache, which is exclusive to a
single user, a CDN is a shared cache accessible to all users of a
service. These different caching mechanisms are
complementary. For example, a web application benefits from the shared
use of a database buffer to speed up dynamic data retrieval, a CDN to
deliver web content quickly, and each user&apos;s local browser cache to
store static content that does not change frequently.

### Cache Distribution

Web caches store content on servers that have the greatest demand for
the requested content. They are filled based on user requests (pull
caching) or based on preloaded content distributed by content servers
(push caching).

The tiered distribution model for less frequently accessed content,
known as &quot;**pull**&quot;, involves the use of a set of well-provisioned
&quot;parent&quot; clusters (they have a high degree of connectivity to edge
clusters). If an edge cluster does not have the requested content in its
cache, it will retrieve the content from its parent cluster instead of
the origin server. This approach reduces the load on the origin server,
which only needs to maintain connections with a few dozen parent
clusters rather than all edge servers. Both origin and parent clusters
can use the performance-optimised transport system.
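The tiered pull flow described above can be sketched as a chain of pull-through caches, where each miss falls through to the next tier and fills the cache on the way back. The `Fetch` signature and the object-backed caches are deliberate simplifications for illustration.

```ts
// Simplified tiered pull-through cache: edge to parent to origin.
type Fetch = (url: string) => string;
type Cache = { [url: string]: string | undefined };

function makeTier(cache: Cache, upstream: Fetch): Fetch {
  return (url: string) => {
    const hit = cache[url];
    if (hit !== undefined) return hit; // cache hit: serve locally
    const body = upstream(url);        // miss: fall through to the next tier
    cache[url] = body;                 // fill the cache on the way back
    return body;
  };
}

// The origin only ever sees requests that missed every tier above it.
let originHits = 0;
const origin: Fetch = (url) => {
  originHits += 1;
  return 'body of ' + url;
};

const parent = makeTier({}, origin); // well-provisioned parent cluster
const edge = makeTier({}, parent);   // edge cluster close to the user
```

Repeated requests through `edge` never reach the origin again, and a second edge cluster built on the same `parent` is filled without contacting the origin at all.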

In contrast, the **push** model involves an overlay network, which is
particularly useful for live video streaming or edge configuration. The
captured and encoded stream is sent to an entry point cluster, and to
avoid single points of failure, copies of the stream are sent to
additional entry points with automatic failover mechanisms. The live
stream transport system then propagates the stream packets from the
entry point to a subset of edge servers. Reflectors, an intermediate
layer of servers, enable scalable replication of streams to multiple
edge clusters, providing alternate paths for improved end-to-end quality
through path optimisation.

The distributed nature of the CDN network is key to the effectiveness of
the overlay network, ensuring highly optimised long-haul tunnels with
endpoints located close to the origin server and end user. This results
in optimised communication from origin to end user, making even origin
server downloads through the high performance overlay almost as
efficient as cached files.

## High Performance Application Delivery Networks

In addition to static files, entire web applications and other
non-cacheable dynamic content benefit from using a CDN in two primary
ways.

### High Performance Overlay Network

First, a CDN takes advantage of the speed of the Internet for
long-distance communications by using the CDN transport system as a
high-performance overlay network. It traverses the Internet and reaches
a CDN machine close to the customer&apos;s origin server, usually within the
same network or even the same data centre, so that latencies are low.

### Edge computing

Second, developers can push application logic from the origin server to
the edge of the Internet. CDN customers can deploy functions to the
edge, where they can be executed based on HTTP requests or custom
events. This allows code to run in local data centres, closer to end
users, and provides a relatively simple multi-region setup. The ultimate
performance boost, reliability and scalability is only achieved when the
application itself is distributed to the edge. Deploying and running a
request driven application or component on edge servers brings cloud
computing to a level where resources are not only allocated on demand,
but also close to the end user. But it also brings new challenges,
including more complex session management, multi-machine replication,
security sandboxing, fault management, distributed load balancing, and
resource monitoring and management, as well as advanced requirements for
testing and deployment tools.

Not all types of applications can run entirely on the edge, especially
those that rely heavily on large transactional databases. However,
several applications or parts thereof can benefit, including content
aggregation/transformation, static databases, data collection and data
validation. Even with real-time database transactions, running front-end
components at the edge offers performance benefits by streamlining
communications with the origin server and reducing load. For example,
the origin server can generate a small dynamic page that references
cachable fragments, allowing the final HTML page to be assembled and
served at the edge.

![places](/cdn/places-Invert_B.png)

*Edge functions can run at different logical locations,
namely viewer request (before caching), origin request (after caching),
origin response (before caching origin response) and viewer response
(before sending the origin/cached response).*

As visualised in the figure above, application logic can also run in different
logical places for different purposes, such as content paywalls for a news site,
permanent redirects, image formatting and setting user cookies for
analytics. Why these functions are best executed at these locations is
explained below.

1.  If caching for a news website is done on a per-user basis (based on an authentication token), 
    it may result in rarely returning a cached result, as users
    may only visit the site once a day. Instead, it is recommended to
    remove the authentication token, extract the subscription level (premium or free) and
    set a header containing this information before the cache lookup.
    This header then becomes part of the cache key, so a cached response
    can be returned to all users with the same subscription level.

2.  For permanent redirects, such as during a migration, a URL can be
    matched and redirected to a new URL. Configuring this after caching
    ensures that the traffic is routed through the CDN.

3.  To handle
    image formatting, the client first makes a GET request, specifying
    parameters for image type and size acceptance. After the storage
    bucket returns the image, which may be larger and in a different
    format, the edge function transforms it into the correct format.
    Because this transformation occurs before the response is sent, the
    modified image can be cached and served to other users without the
    need for additional transformations.

4.  To track users and improve analytics by matching requests to
    specific users, a user cookie can be set before the response is
    sent. It is important that this cookie is not associated with the
    cache, as caches should be shared between users.
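Combining the first and fourth examples, a viewer-request function could normalise the request before the cache lookup: drop the per-user token and replace it with a coarse subscription header. The token format and the tier lookup below are invented for illustration.

```ts
// Sketch of a viewer-request edge function: key the cache by
// subscription tier instead of by individual user.
type RequestHeaders = { [name: string]: string | undefined };

// Invented stand-in: a real function would decode and verify the token.
function subscriptionTier(token: string | undefined): 'premium' | 'free' {
  if (token === undefined) return 'free';
  return token.startsWith('premium-') ? 'premium' : 'free';
}

function normaliseForCache(headers: RequestHeaders): RequestHeaders {
  const tier = subscriptionTier(headers['authorization']);
  const result: RequestHeaders = { ...headers };
  delete result['authorization'];       // never key the cache on a single user
  result['x-subscription-tier'] = tier; // coarse key shared by many users
  return result;
}
```

Because all premium users now produce the same normalised request, one cached response serves the whole tier.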

With modern web frameworks, it is also easy to run all site
functionality in edge functions only, without using an origin server at
all.

## Benefits

The recovery-orientated design philosophy outlined below has a number of
useful by-products.

### Design Principles

The entire CDN design is based on the assumption that problems such as
machine, cluster, connectivity and network failures will occur at some
point in the network. The system is therefore designed for
**reliability**. The aim is to achieve close to 100% end-to-end system
availability. CDNs ensure full component redundancy to eliminate single
points of failure and incorporate multiple levels of fault tolerance.
With a large infrastructure, CDNs are able to meet increasing traffic
demand and handle traffic spikes. All platform components are highly
**scalable**, able to efficiently handle varying levels of traffic,
content and customers. On the other hand, the **need for human
management is limited**. Because the CDN network is designed with the
assumption that components can fail at any time, CDNs are designed as
autonomic systems and keep human operating costs low. CDNs have the
ability to recover from failures, manage load and capacity shifts,
self-tune for optimal performance, and securely deploy software and
configuration updates. Humans do not have to worry about most outages or
rush to fix them. Moreover, staff can proactively suspend components if
they have the slightest concern, as this will not affect the performance
of the overall system. Another benefit is the ability to roll out
software updates seamlessly. Because the failure of a number of machines
or clusters does not affect the overall system, zoned software rollouts
can be performed quickly and frequently without disrupting production
services. This enables application developers to iterate and deliver
their products faster and more frequently. Finally, because CDNs are
designed for **performance**, they optimise end-user performance,
improve cache hit rates, effectively manage network resource utilisation
and promote energy efficiency throughout the system.

### Additional Benefits

In addition to these improvements, a CDN brings other significant
benefits, such as

- **Security**: A CDN leverages its significant network capacity at
    the edge, playing a key role in providing robust Distributed Denial
    of Service (DDoS) protection, particularly against large-scale
    attacks, as shown in the figure below.

    ![security](/cdn/ddos-Invert_B.png)
    *Web requests are distributed across the different CDN
    servers. In contrast to a centralised origin server, DDoS attacks do not
    block the system for other visitors because the load is balanced across
    many servers.*

    The key strategy is to maintain a network
    capacity that is significantly greater than that of potential
    attackers. This not only effectively thwarts DDoS attacks, but also
    prevents downtime and cost explosions. This approach is particularly
    effective when the CDN is built on an anycast network, allowing
    attack traffic to be distributed across a large number of servers.

-   **Improved Availability and Reliability**: The inherent design of a
    CDN is one of high distribution. By having copies of content across
    many PoPs, a CDN is resilient to multiple hardware failures compared
    to centralised origin servers, as shown in the figure below.

    ![availability](/cdn/stale_if_error-Invert_B.png)
    *By having cached copies of content available in many
    locations, a CDN can withstand many more hardware failures than the
    origin server alone by potentially serving outdated cached content.
    There are no more single points of failure.*

    The large server distribution acts as
    a failover mechanism and has a proven ability to maintain
    uninterrupted services in the face of unpredictable downtime due to
    machine, cluster or connectivity failures. High availability
    techniques within edge clusters respond seamlessly to machine
    failures by starting other machines and timely updating the map used
    for optimised routing to redirect new requests to accommodate these
    failures. In the event of whole cluster failures or connectivity
    issues, the CDN dynamically adjusts cluster allocations and quickly
    updates the system to redirect requests to clusters with better
    performance. The robustness of the CDN platform also extends to
    connectivity issues, where degraded connections are quickly detected
    and mitigated through path optimisation technology that finds good
    alternative paths through intermediate nodes in the CDN network.

-   **Lower costs** can be a significant financial consideration. CDN
    egress costs, which refer to the costs associated with data leaving
    the data centre and reaching the end user, are significantly lower
    than direct data centre egress. The CDN infrastructure optimises
    data delivery, reducing the total amount of outgoing data and
    therefore the financial burden associated with getting data from the
    data centre to the end user. In addition, using the image
    transformation edge function mentioned above, or an equivalent built-in
    functionality, images can be transformed into the optimal format for
    the end user&apos;s browser. This reduces the amount of data that needs
    to be transferred, further reducing costs.

-   Handshakes for encrypted connections take multiple network rounds to
    establish and are therefore inherently resource intensive. By
    **terminating the encrypted connection at the edge server**, as
    visualised in the figure below, the latency for users to establish an
    encrypted connection is significantly reduced. This optimisation is
    one of the reasons why many modern applications even send dynamic,
    uncacheable HTTP content via a CDN.

    ![ssl termination](/cdn/ssl_termination-Invert_B.png)
    *By terminating the secure connection at the edge, the
    latency for the user to establish an encrypted connection to the edge
    server is significantly reduced. The connection to the origin server is
    kept alive.*

    The highly distributed nature of the CDN network is key to its
    effectiveness. This distribution ensures that the endpoints of the
    optimised long-haul tunnel are located in close proximity to both
    the origin server and the end user. As a result, most of the
    communication from the origin to the end user is optimised, with the
    short hops at either end having extremely low latency due to their
    short distance. In practice, this optimisation results in good
    performance over long distances.

-   **Flexibility** by providing the ability to integrate with multiple
    origins. Users can configure routing rules directly within the CDN.
    This allows customers to define specific rules for content delivery,
    intelligently routing static file requests to specific storage
    bucket servers or other appropriate origins.

-   Robust **logging and analytics** capabilities are critical to
    gaining comprehensive insight into system performance. CDNs often
    facilitate the collection and aggregation of rich data at the edge,
    providing valuable capabilities for observing traffic patterns,
    extracting insights and effectively categorising information as
    mentioned above.

-   Modern CDNs often have the ability to go beyond traditional content
    delivery and actively **transform static content** into more
    optimised formats. This includes minifying script bundles,
    transforming images into modern formats such as WebP, and
    compressing content.
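The behaviour of serving outdated cached content on origin failure, described under availability above, can be sketched as a small wrapper around the origin fetch. All names here are illustrative.

```ts
// Sketch of serving stale content when the origin fails.
type Origin = (url: string) => string;

function makeStaleIfError(origin: Origin): Origin {
  const lastGood: { [url: string]: string | undefined } = {};
  return (url: string) => {
    try {
      const body = origin(url);
      lastGood[url] = body; // remember the latest successful response
      return body;
    } catch (error) {
      const stale = lastGood[url];
      if (stale !== undefined) return stale; // degrade gracefully
      throw error; // nothing cached yet: surface the failure
    }
  };
}
```

Users keep receiving the last good copy while the origin is down, which is exactly the trade-off the availability figure describes.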

## Conclusion

Using a CDN platform provides the desired predictable end-to-end system
quality and performance. CDNs were created to reduce the bandwidth
required to deliver static web content quickly and reliably, overcoming
the inherent limitations of the Internet architecture. They act as a
layer on top of the existing Internet infrastructure, operating
seamlessly as a virtual network without requiring any software or
hardware changes for their customers. End users only communicate with
the edge servers, which are then responsible for retrieving content from
the origin server if it is not in its cache, significantly reducing the
load and bandwidth requirements on the origin server cluster, increasing
performance and reducing costs. Even in the unlikely scenario of
simultaneous failures, the CDN is highly resilient, recovering quickly
and ensuring a consistent and reliable content delivery experience for
end users. Overall, the multi-level failover capabilities of a CDN give
customers the reliability, availability, security and flexibility they
want for their web applications. Today, entire applications can be made
highly performant by moving application logic closer to the user.

To visit the full report including all references, please visit [report](/cdn/report.pdf).</content>
   </item>
<item>
        <title>Heuristics for clean code</title>
        <description>To write new code we have to read old code. So making it easy to read actually makes it easier to write.</description>
        <link>https://www.fabianwaller.de/blog/heuristics-for-clean-code</link>
        <pubDate>Mon, 10 Oct 2022 00:00:00 GMT</pubDate>
        <content>Clean code contains **no duplication** and does one thing well. It provides one way, rather than many, to do each thing, and therefore follows the DRY principle (don’t repeat yourself).
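To make the DRY point concrete, here is a minimal TypeScript sketch with invented names (`applyVat` and a 19% rate are assumptions, not from the text): the tax rule lives in one place instead of being repeated by every caller.

```typescript
// Hypothetical example: the VAT rule is defined once and reused,
// so changing the rate touches exactly one function.
function applyVat(net: number): number {
  return Math.round(net * 1.19 * 100) / 100;
}

function invoiceTotal(items: number[]): number {
  const net = items.reduce((sum, x) => sum + x, 0);
  return applyVat(net); // reuses the single tax rule
}

function quoteTotal(net: number): number {
  return applyVat(net); // no duplicated formula to drift out of sync
}
```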

**Performance** is close to optimal, so nobody is tempted to make the code messy with unprincipled optimizations. There is nothing obvious you can do to make it better.

It follows standard **conventions**. The code itself should demonstrate those conventions by example, without needing an additional document.

It uses **meaningful names**. If a name requires a comment then the name does not reveal its intent. Choose names at the appropriate level of abstraction. Avoid disinformation and make meaningful distinctions.
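A small made-up illustration of intent-revealing names: the constant `secondsPerDay` says on its own what the comment on `d` had to explain.

```typescript
// Bad: the name needs a comment to reveal its intent.
const d = 86400; // seconds per day

// Good: the name carries the intent by itself.
const secondsPerDay = 86400;

function daysBetween(startMs: number, endMs: number): number {
  return Math.floor((endMs - startMs) / 1000 / secondsPerDay);
}
```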

**Functions should be small and do only one thing**. That rules out selector or flag arguments, which make a function do several things.
Functions should descend only one level of abstraction: the statements inside a function should all sit one level below its name.
Follow command-query separation: a function either does something or answers something, never both.
Keep the number of arguments minimal and avoid output arguments.
If the function must change something, let it change the state of its owning object.
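These function rules can be sketched as follows; `renderPreview`/`renderFinal` and `Counter` are invented examples, not from the text.

```typescript
// Instead of render(text, isPreview) with a flag argument,
// split the behavior into two functions that each do one thing.
function renderPreview(text: string): string {
  return "[preview] " + text;
}
function renderFinal(text: string): string {
  return text;
}

// Command-query separation: a query answers, a command changes state.
class Counter {
  private value = 0;
  current(): number {   // query: answers something, changes nothing
    return this.value;
  }
  increment(): void {   // command: changes state, returns nothing
    this.value = this.value + 1;
  }
}
```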

&gt; The proper use of comments is to compensate for our failure to express ourself in code.

**Comments** should be reserved for technical notes about the code and design, and shouldn&apos;t contain inappropriate information.
Old, irrelevant, and incorrect comments tend to migrate away from the code they once described. The older a comment is and the farther away it is from the code it describes, the more likely it is to be just plain wrong. These are obsolete comments.
Clean code doesn&apos;t contain commented-out code; delete it instead, since version control remembers it. Otherwise it sits there and rots, getting less and less relevant.
Comments should not describe something that adequately describes itself. They should say things that the code cannot say for itself.
Rather than spending your time writing comments that explain the mess, spend it cleaning the mess.
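One way to let code say what a comment would have said: extract the condition into a well-named function. The `User` shape and `isEligibleVoter` here are hypothetical.

```typescript
interface User {
  age: number;
  emailVerified: boolean;
}

// Before, the inline check would need a comment like "eligible to vote?".
// After extraction, the function name says it and no comment is needed.
function isEligibleVoter(user: User): boolean {
  if (user.age > 17) {
    return user.emailVerified;
  }
  return false;
}
```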

**Formatting**

&gt; A good software system is composed of a set of documents that read nicely. They need to have a consistent and smooth style. The last thing we want to do is add more complexity to the source code by writing it in a jumble of different individual styles.

A clean source file reads like a newspaper article. Detail should increase as we move downward, until at the end we find the lowest-level functions and details of the source file. Lines of code that are tightly related appear vertically dense.
If one function calls another, they should be vertically close, with the caller above the callee if at all possible, to achieve a natural flow from high level to low level.

**Objects and data structures**

Hiding implementation is not just about putting a layer of functions between variables; it is about abstractions.
Objects hide their data (internal structure) behind abstractions and expose functions that operate on that data.
Data structures expose their data (internal structure) and have no meaningful functions.
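A minimal sketch of the distinction, with invented names: `PointData` is a data structure, `Point` is an object.

```typescript
// A data structure exposes its data and has no meaningful functions.
interface PointData {
  x: number;
  y: number;
}

// An object hides its internal structure behind an abstraction
// and exposes functions that operate on that data.
class Point {
  constructor(private x: number, private y: number) {}
  distanceToOrigin(): number {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
}
```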

**Error handling**

Clean code uses exceptions rather than return codes.
Return codes clutter the caller, who must check for errors immediately after the call and can easily forget to. With exceptions the calling code is cleaner: its logic is not obscured by error handling.
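The contrast can be sketched with a made-up `parsePort` function: the return-code version pushes checking onto every caller, while the exception version keeps the happy path clean.

```typescript
// Return-code style: the caller must remember to check for -1.
function parsePortCode(s: string): number {
  const n = Number(s);
  if (Number.isInteger(n)) {
    if (n > 0) {
      return n;
    }
  }
  return -1; // easy for the caller to forget
}

// Exception style: errors cannot be silently ignored.
function parsePort(s: string): number {
  const n = Number(s);
  if (Number.isInteger(n)) {
    if (n > 0) {
      return n;
    }
  }
  throw new Error("invalid port: " + s);
}
```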

**Tests**

Tests keep our code flexible, maintainable, and reusable. If we have a test, we do not fear making changes to the code. Without our tests every change is a possible bug.
The number of asserts in a test should be minimized to have a single conclusion that is quick and easy to understand. Clean code tests a single concept in each test function.

Tests should be **FIRST**:

- **F**ast
- **I**ndependent
- **R**epeatable in any environment
- **S**elf-validating (boolean output)
- **T**imely (written just before the production code)

**Classes**

Clean systems are composed of many small classes, not a few large ones. Each small class encapsulates a single responsibility, has a single reason to change, and collaborates with a few others to achieve the desired system behaviors.
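Sketching the idea with invented classes: formatting and persisting a report are separate responsibilities, so each gets its own small class with a single reason to change.

```typescript
// Changes only if the report format changes.
class ReportFormatter {
  format(lines: string[]): string {
    return lines.join("\n");
  }
}

// Changes only if the storage mechanism changes.
class ReportStore {
  private saved: string[] = [];
  save(report: string): void {
    this.saved.push(report);
  }
  count(): number {
    return this.saved.length;
  }
}
```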

**Successive refinement**

&gt; The first draft might be clumsy and disorganized, so you wordsmith it and restructure it and refine it until it reads the way you want it to read.

Leave the codebase cleaner than you found it.</content>
   </item>
    </channel>
  </rss>