Davide Ceschia

Why Not Using TypeScript (or Using It Wrong) Is Killing Your App Performance

While TypeScript is primarily marketed for its developer experience improvements, there's an often-overlooked side effect: TypeScript's static type system encourages JavaScript coding patterns that V8 is more likely to optimize.

In this deep dive, we'll start by understanding how JavaScript values are stored in memory and why consistent types matter for performance. Then we'll explore V8's optimization strategies, what causes performance-killing deoptimizations, and how TypeScript's compile-time type checking can help you write JavaScript that plays nicely with modern engine optimizations.

From Interpretation to JIT: How JavaScript Actually Executes

Before diving into V8's optimization strategies, it's crucial to clear up a common misconception: JavaScript is not simply an interpreted language. While JavaScript started out as a purely interpreted language in 1995, modern JavaScript engines use a sophisticated hybrid approach that combines interpretation with Just-In-Time (JIT) compilation.

The Evolution of JavaScript Execution

Early Days (1995-2008): Pure Interpretation

The first JavaScript engine, created by Brendan Eich for Netscape Navigator, was a straightforward interpreter. It would read JavaScript source code line by line and execute it directly without any compilation step. This approach had significant limitations:
In a pure interpreter, the loop below would be re-parsed and executed identically on every iteration: no learning, no optimization.

for (let i = 0; i < 1000000; i++) {
  someFunction(i); // Same overhead every single time
}

The JIT Revolution (2008+): TraceMonkey and V8

Two 2008 releases marked the beginning of the JIT era, each taking a different approach. Mozilla's TraceMonkey was the first JIT compiler for JavaScript, using a "trace-based" compilation strategy that recorded and optimized specific execution paths. Meanwhile, Google's V8 engine revolutionized the landscape in a different way: it initially skipped interpretation entirely and compiled whole functions directly to machine code, setting the foundation for modern JavaScript performance.

Modern JavaScript Execution: The Hybrid Approach

Today's JavaScript engines, including V8, SpiderMonkey, and JavaScriptCore, use a multi-tiered execution strategy:

The Complete V8 Execution Pipeline

The diagram below shows a somewhat simplified but comprehensive flow from your source code all the way to execution.

[Diagram: the V8 execution pipeline, from source code to optimized machine code]

V8, Chrome's JavaScript engine, uses a multi-tiered compilation approach to optimize JavaScript execution:

The Optimization Tiers

1. Ignition (Interpreter): Initial bytecode interpretation and type feedback collection

2. Sparkplug (Baseline Compiler): Fast, non-optimizing compiler (introduced in 2021)

3. Maglev (Mid-tier Compiler): Mid-level optimizing compiler (introduced in 2023)

4. TurboFan (Optimizing Compiler): Aggressive optimizations based on runtime profiling

5. Deoptimization: Fallback to bytecode when assumptions prove wrong

When V8 encounters frequently executed code (hot code), it profiles the types and shapes of objects being used. Based on this profiling data, TurboFan makes aggressive optimizations by assuming certain type patterns will continue.

Why This Hybrid Approach Matters for Performance

This hybrid approach delivers four key advantages that make modern JavaScript engines incredibly effective.

  • Code can begin executing immediately via the interpreter without waiting for compilation, ensuring fast startup times.
  • The engine learns from actual runtime behavior rather than relying on static analysis, enabling adaptive optimization that becomes more effective as the application runs.
  • Bytecode is much more compact than machine code, providing crucial memory efficiency especially on mobile devices.
  • Finally, when optimization assumptions prove incorrect, the engine can dynamically deoptimize and fall back to less optimized but more flexible code, maintaining correctness while preserving the opportunity for future optimization.

The JIT Compilation Process

function calculateDistance(p1, p2) {
  const dx = p1.x - p2.x;
  const dy = p1.y - p2.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// First few calls: Executed by Ignition interpreter
calculateDistance({ x: 1, y: 2 }, { x: 3, y: 4 });
calculateDistance({ x: 5, y: 6 }, { x: 7, y: 8 });

// After many calls with consistent object shapes:
// TurboFan compiles this to optimized machine code with:
// - Direct memory access to x, y properties
// - Inlined Math.sqrt
// - Specialized floating-point operations

The profiler notices that this function is called frequently (hot code), that the parameters are always objects with numeric x and y properties, and that the property access patterns are consistent.

TurboFan then generates specialized machine code that assumes these patterns will continue, resulting in dramatic performance improvements.

The Foundation for Optimization

This hybrid execution model is why V8's optimization pipeline exists. The interpreter provides the runtime data that makes aggressive optimization possible, while JIT compilation transforms that data into fast machine code.

How JavaScript Variables Are Stored in Memory

Before diving into V8's optimization strategies, it's crucial to understand how JavaScript values are actually stored and accessed in memory. This foundation will help explain why certain patterns trigger optimizations while others cause performance-killing deoptimizations.

Computer Memory Basics

Computer memory is like a giant array of boxes, each with a unique address:

Memory Address:  [0x1000] [0x1001] [0x1002] [0x1003] [0x1004] ...
Memory Content:  [  42  ] [ 'H' ] [ 'i' ] [  ??  ] [  ??  ] ...

Each box can hold exactly one byte (8 bits). When you store data, you need to know:

  1. Where it starts (the address)
  2. How much space it takes up (the size)
  3. How to interpret the bytes (the type)
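
To make this concrete, here's a small TypeScript sketch using the standard ArrayBuffer and DataView APIs (nothing V8-specific): the same four bytes produce completely different results depending on how you tell the computer to interpret them.

// The same 4 bytes, interpreted three different ways
const buffer = new ArrayBuffer(4); // four one-byte "boxes"
const view = new DataView(buffer);

view.setUint32(0, 1078530011); // write 4 bytes starting at offset 0

console.log(view.getUint32(0));  // 1078530011 when read as a 32-bit integer
console.log(view.getFloat32(0)); // ~3.1415927 when the same bytes are read as a float
console.log(view.getUint8(0), view.getUint8(3)); // 64 219 - the individual bytes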

The Problem with JavaScript's Dynamic Types

In languages like C++, you declare exactly what type each variable holds:

int number = 42;        // Always 4 bytes, always an integer
char letter = 'A';      // Always 1 byte, always a character
float price = 19.99;    // Always 4 bytes, always a floating-point number

The compiler knows exactly how much memory each variable needs and how to interpret the bytes.

But JavaScript is different:

let value = 42; // Number
value = "hello"; // Now it's a string
value = { x: 1, y: 2 }; // Now it's an object

The same variable can hold completely different types of data that might take up wildly different amounts of memory, get stored in entirely different ways, and require the CPU to use different instruction sets.

Why Consistent Memory Layout Matters

When the CPU processes data, it's most efficient when it can make assumptions:

// Fast, CPU knows exactly what to expect
Memory: [42][43][44][45] // All 4-byte integers
CPU: "Great! I can use fast integer addition instructions"

// Slow, CPU must check each value
Memory: [42]["hello"][{obj}][3.14] // Mixed types, different sizes
CPU: "I need to check what each one is before I can do anything"

Modern CPUs are like specialized workshops with different tools for different jobs. They have blazingly fast integer instructions for whole numbers, carefully tuned floating-point instructions for decimals, and specialized string instructions for text processing. When the CPU knows exactly what type of data it's working with, it can grab the perfect tool for the job instead of fumbling around with a generic one-size-fits-all approach.
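
You can feel this difference from JavaScript itself. Typed arrays are the one place where the language guarantees a uniform, C-like memory layout, whereas a regular array can hold anything. Here's a minimal TypeScript sketch (purely illustrative; typed arrays are not a general replacement for objects):

const uniform = new Float64Array([1.5, 2.5, 3.5]); // every element is exactly 8 bytes, no tags
const mixed: unknown[] = [42, "hello", { x: 1 }, 3.14]; // tagged values of different kinds and sizes

function sumFloats(values: Float64Array): number {
  let sum = 0;
  for (let i = 0; i < values.length; i++) {
    sum += values[i]; // the engine knows every element is a double
  }
  return sum;
}

console.log(sumFloats(uniform)); // 7.5
console.log(mixed.length); // 4 - but each element is stored and checked differently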

V8's Solution: Tagged Pointers

JavaScript variables can hold different types (number, string, object, etc.), but the computer's memory needs a consistent way to store them. V8 solves this using a clever technique called tagged pointers.

A pointer is just a memory address that tells the computer where to find data. V8 "tags" these pointers by using their unused bits to store type information.

Think of it like this: if a memory address is 0x1000, V8 might store it as 0x1001 where the extra 1 at the end is a "tag" indicating the type.

// V8's internal representation (heavily simplified for illustration)
// Actual V8 implementation is much more complex
class Object {
    uintptr_t value_; // This stores either a tagged pointer or a direct value

    bool IsSmi() const { return (value_ & 1) == 0; }        // Check if it's a small integer
    int32_t ToSmi() const { return static_cast<int32_t>(value_ >> 1); } // Extract the integer
    HeapObject* ToHeapObject() const { return reinterpret_cast<HeapObject*>(value_ - 1); } // Get heap object
};

Two Main Storage Strategies

V8 uses two primary strategies for storing values:

Small Integers (SMI): The Fast Path

For integers that fit in 31 bits (like 42, -100, or 1000), V8 pulls off a neat trick: it stores the value directly in the pointer itself. No separate memory allocation, no garbage collection overhead, just pure speed. The number 42 gets stored as 84 (42 shifted left by 1 bit, with a tag bit of 0).
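
Here's a toy TypeScript model of that 32-bit tagging arithmetic, just to make the bit trick tangible. This is not how V8 is actually implemented; 64-bit builds and pointer compression use different layouts.

const SMI_TAG = 0; // low bit 0 means "small integer"; heap object pointers get a low bit of 1

function encodeSmi(value: number): number {
  return value << 1; // shift left by one, the low bit stays 0
}

function isSmi(tagged: number): boolean {
  return (tagged & 1) === SMI_TAG;
}

function decodeSmi(tagged: number): number {
  return tagged >> 1;
}

const tagged = encodeSmi(42);
console.log(tagged);            // 84 - the "42 stored as 84" trick from above
console.log(isSmi(tagged));     // true
console.log(decodeSmi(tagged)); // 42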

Heap Objects: The Slow Path

Everything else (strings, objects, large numbers, arrays) gets the full treatment. V8 allocates separate memory in the "heap" (think of it as a big storage warehouse), which means memory allocation overhead and eventual garbage collection. The string "hello" gets allocated somewhere in heap memory, and the pointer to it gets tagged with a 1 to signal "this is a heap object, not a direct value."

Why This Matters for Performance

// Fast path, all SMI integers
function addNumbers(a, b) {
  return a + b; // V8 can use fast integer arithmetic
}

addNumbers(1, 2); // Both values stored as SMI, super fast
addNumbers(100, 50); // Still SMI, still fast

// Slow path, mixed types
addNumbers(1, 1.5); // 1.5 requires heap allocation, slower
addNumbers(1, "hello"); // String concatenation, much slower

When types are consistent (all SMI integers), V8 gets to live its best life. It can skip all the paranoid type checking, fire up the CPU's blazingly fast integer instructions, avoid the whole memory allocation dance, and completely ignore garbage collection.

When types get mixed up, V8 has to examine each value at runtime to figure out what it's dealing with, convert between different representations, allocate heap memory for the complex stuff, and eventually schedule garbage collection to clean up the mess.

Note: This is a conceptual representation. Modern V8 uses pointer compression and more sophisticated tagging schemes, but the core principle remains the same.

Understanding V8's Optimization Pipeline

Hidden Classes: V8's Secret to Fast Object Access

Now that we understand how V8 stores individual JavaScript values using tagged pointers, let's explore how it optimizes object property access through Hidden Classes (also called Maps in V8, or Shapes in other engines).

The Object Optimization Challenge

JavaScript objects are incredibly flexible, but this flexibility creates performance challenges for V8:

// These objects look similar but create different hidden classes
const user1 = { name: "Alice", age: 25 }; // HiddenClass_A
const user2 = { age: 30, name: "Bob" }; // HiddenClass_B (different order!)

// Dynamic property changes break optimizations
const user3 = { name: "Charlie", age: 35 }; // HiddenClass_A (same as user1)
user3.email = "[email protected]"; // Now HiddenClass_C!

// Inconsistent properties across similar objects
function processUser(user) {
  return user.name; // V8 can't optimize - unclear what properties exist
}

The challenge isn't just about property lookup speed, but about V8's ability to predict and optimize object shapes. When objects have unpredictable structures, V8 can't make the aggressive optimizations that lead to fast code.

Hidden Classes

V8 creates a Hidden Class (think of it as a "blueprint" or "schema") for each unique object structure. This blueprint is like a map that tells V8 exactly what to expect: which property names exist, what order they appear in, and most importantly, exactly where each property lives in memory. It's also smart enough to track property attributes like whether they're writable or enumerable.

// When you create this object:
const user = { name: "Alice", age: 25 };

// V8 internally creates a hidden class like:
// HiddenClass_1 {
//   property_0: "name" at offset 0
//   property_1: "age" at offset 8
// }

How Hidden Classes Enable Fast Access

With hidden classes, property access becomes O(1) - constant time:

const user1 = { name: "Alice", age: 25 };
const user2 = { name: "Bob", age: 30 };

// Both objects share the same hidden class!
// V8 knows: user.name is always at memory offset 0
//          user.age is always at memory offset 8

console.log(user1.name); // Direct memory access: base_address + 0
console.log(user2.age); // Direct memory access: base_address + 8

Hidden Class Sharing Rules

Objects can share the same hidden class, but V8 is surprisingly picky about this. The objects need to have identical property names in exactly the same order, plus matching property attributes. Even something as subtle as swapping the order of two properties will force V8 to create an entirely new hidden class.

This property order requirement is still a fundamental limitation in modern V8. Despite significant optimizations in other areas, V8 does not treat objects with the same properties in a different order as equivalent. The official V8 documentation states:

"The basic assumption about HiddenClasses is that objects with the same structure (e.g. the same named properties in the same order) share the same HiddenClass."

// These objects share the same hidden class
const obj1 = { x: 1, y: 2 }; // HiddenClass_A
const obj2 = { x: 3, y: 4 }; // HiddenClass_A (shared!)

// This creates a different hidden class
const obj3 = { y: 2, x: 1 }; // HiddenClass_B (different order!)

// This also creates a different hidden class
const obj4 = { x: 1, y: 2, z: 3 }; // HiddenClass_C (extra property!)

Inline Caching

Inline Caching takes hidden classes one step further. When V8 sees the same property access pattern repeatedly, it "inlines" the memory offset directly into the optimized code. Think of inlining as V8 saying "I know exactly where this property lives, so instead of looking it up every time, I'll just hardcode the address." It's like replacing "go to the filing cabinet, find the folder labeled 'customers', then look for John's file" with "go directly to drawer 3, slot 15."

function getNames(users) {
  const names = [];
  for (const user of users) {
    names.push(user.name); // This property access gets optimized
  }
  return names;
}

After profiling, V8 might generate optimized machine code that conceptually behaves like this C++:

// Conceptual optimized C++ code
void getNames_optimized(JSArray* users, JSArray* names) {
    // V8 assumes all users have the same hidden class
    for (JSObject* user : users) {
        // Direct memory access - no property lookup needed
        JSString* name = *(JSString**)(user + NAME_OFFSET); // Hardcoded offset
        names->push(name);
    }
}

Hidden Class Checks

Since V8 makes assumptions about object structure, it needs to verify these assumptions. This is where hidden class checks come in:

function processUser(user) {
  return user.name + " " + user.email; // Optimized for specific hidden class
}

V8's optimized code includes a hidden class check:

// Conceptual optimized C++ code
JSString* processUser_optimized(JSObject* user) {
    // Hidden class check - verify our assumptions are still valid
    if (user->map() != ExpectedHiddenClass) {
        return deoptimize_and_fallback(user); // Fall back to slow path
    }

    // Fast path: direct memory access using known offsets
    JSString* name = *(JSString**)(user + NAME_OFFSET);   // Direct access to 'name'
    JSString* email = *(JSString**)(user + EMAIL_OFFSET); // Direct access to 'email'
    return concatenateStrings(name, " ", email);
}

If the hidden class check fails (wrong object shape), V8 deoptimizes and falls back to the slow, generic property lookup.

What Triggers Deoptimization?

Deoptimization occurs when V8's assumptions about code behavior prove incorrect. Common triggers include:

1. Type Polymorphism

function add(a, b) {
  return a + b; // V8 optimizes for specific types
}

// Initially called with numbers
add(1, 2); // TurboFan optimizes for number addition
add(3, 4);
add(5, 6);

// Type change triggers deoptimization
add("hello", "world"); // Deopt! Falls back to slower generic path

2. Hidden Class Transitions

function processUser(user) {
  return user.name + " " + user.email; // Optimized for specific object shape
}

const user1 = { name: "Alice", email: "[email protected]" };
const user2 = { name: "Bob", email: "[email protected]" };

processUser(user1); // Hidden class HC1
processUser(user2); // Same HC1, optimized

const user3 = { email: "[email protected]", name: "Charlie" }; // Different HC2
processUser(user3); // Deoptimization!

3. Dynamic Property Addition

function Point(x, y) {
  this.x = x;
  this.y = y;
  // Hidden class HC1: {x, y}
}

const point = new Point(1, 2);
// Later in code...
point.z = 3; // Hidden class transition HC1 to HC2, potential deopt

Optimized vs. Deoptimized Code

Now that we've seen how V8 stores values and tracks object shapes through hidden classes, let's compare the machine code it can generate when those assumptions hold with the defensive code it has to fall back to when they don't.

Optimized Operations

When V8 knows types are consistent, it can generate highly optimized machine code:

// JavaScript
function calculateDistance(p1, p2) {
  const dx = p1.x - p2.x;
  const dy = p1.y - p2.y;
  return Math.sqrt(dx * dx + dy * dy);
}

With consistent typing, V8 generates optimized assembly:

; Optimized assembly (conceptual, actual V8 output is much more complex)
mov rax, [rcx + 8]    ; Load p1.x (direct offset access)
sub rax, [rdx + 8]    ; Subtract p2.x
mov rbx, [rcx + 16]   ; Load p1.y
sub rbx, [rdx + 16]   ; Subtract p2.y
; ... optimized floating-point operations

Without type consistency, V8 must generate defensive code with type checks:

; Deoptimized assembly (conceptual, simplified for illustration)
call CheckObjectType   ; Runtime type check
call PropertyLookup    ; Hash table lookup instead of direct access
call TypeCoercion      ; Handle potential type conversion
; ... significantly more overhead

To make this difference clearer, here's a conceptual C++ representation of the machine code V8 might generate:

// Optimized version (consistent types)
double calculateDistance_optimized(Point* p1, Point* p2) {
    double dx = p1->x - p2->x;  // Direct memory access
    double dy = p1->y - p2->y;  // Direct memory access
    return sqrt(dx * dx + dy * dy);
}

// Deoptimized version (mixed types)
JSValue calculateDistance_deoptimized(JSValue p1, JSValue p2) {
    if (!isObject(p1) || !isObject(p2)) return handleError();

    JSValue x1 = getProperty(p1, "x");  // Hash table lookup
    JSValue x2 = getProperty(p2, "x");  // Hash table lookup
    JSValue y1 = getProperty(p1, "y");  // Hash table lookup
    JSValue y2 = getProperty(p2, "y");  // Hash table lookup

    if (!isNumber(x1) || !isNumber(x2) || !isNumber(y1) || !isNumber(y2)) {
        return handleTypeCoercion(x1, x2, y1, y2);
    }

    double dx = toDouble(x1) - toDouble(x2);
    double dy = toDouble(y1) - toDouble(y2);
    return createJSNumber(sqrt(dx * dx + dy * dy));
}

TypeScript's Performance Advantages

TypeScript's static type system provides several performance benefits by preventing the deoptimization scenarios we've discussed.

1. Preventing Type Polymorphism

// TypeScript prevents mixed types
function add(a: number, b: number): number {
  return a + b; // V8 can optimize for number-only operations
}

add(1, 2); // ✓ Optimized
add(3, 4); // ✓ Still optimized
// add("hello", "world"); // ✗ Compile-time error prevents deopt

2. Improving Object Shape Consistency (Partially)

TypeScript helps with some aspects of object shape consistency, but not all:

interface User {
  name: string;
  email: string;
}

function processUser(user: User): string {
  return user.name + " " + user.email; // TypeScript guarantees these properties exist
}

// ✅ What TypeScript DOES help with:
const user1: User = { name: "Alice", email: "[email protected]" };
const user2: User = { name: "Bob", email: "[email protected]" };
// user2.phone = "555-1234"; // ✗ Prevents dynamic property addition

// ⚠️ What TypeScript CANNOT prevent:
const user3: User = { email: "[email protected]", name: "Charlie" }; // Different property order = different hidden class

// TypeScript ensures required properties exist and prevents dynamic changes,
// but property order still matters for V8 optimization

The Property Order Performance Reality

This limitation has real performance implications. Even in modern V8, the hidden class system means that objects with the same properties in different orders create separate optimization paths. The V8 team's focus on real-world performance testing (using actual websites like Twitter and Google Maps instead of synthetic benchmarks) has confirmed that consistent object shapes (including property order) remain crucial for optimal performance.

So while TypeScript doesn't solve the property order issue, it does prevent many other shape-breaking patterns that cause deoptimizations.
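
If you want to see the effect yourself, here's a rough experiment you can run with Node 16+ (or ts-node). It's a micro-benchmark, so treat any difference as directional rather than exact; results will vary across V8 versions and hardware. Note that TypeScript happily accepts both literal orders, which is exactly the limitation described above.

type Point = { x: number; y: number };

const consistent: Point[] = [];
const mixedOrder: Point[] = [];
for (let i = 0; i < 1_000_000; i++) {
  consistent.push({ x: i, y: i });
  mixedOrder.push(i % 2 === 0 ? { x: i, y: i } : { y: i, x: i }); // same type, two object shapes
}

// Two copies of the same loop, so each property access site builds its own type feedback
function sumConsistent(points: Point[]): number {
  let sum = 0;
  for (const p of points) sum += p.x; // only ever sees one object shape
  return sum;
}

function sumMixed(points: Point[]): number {
  let sum = 0;
  for (const p of points) sum += p.x; // alternates between two shapes
  return sum;
}

// Warm up both functions, then measure
sumConsistent(consistent);
sumMixed(mixedOrder);

let start = performance.now();
sumConsistent(consistent);
console.log("consistent order:", (performance.now() - start).toFixed(2), "ms");

start = performance.now();
sumMixed(mixedOrder);
console.log("mixed order:", (performance.now() - start).toFixed(2), "ms");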

3. Preventing Dynamic Property Addition

class Point {
  constructor(public x: number, public y: number) {}
}

const point = new Point(1, 2);
// point.z = 3; // ✗ Compile-time error prevents hidden class transition

4. Generic Type Constraints

function processItems<T extends { id: number }>(items: T[]): number {
  let sum = 0;
  for (const item of items) {
    sum += item.id; // V8 knows 'id' is always a number
  }
  return sum;
}
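
As a quick illustrative usage (the Task interface here is made up for the example), keeping every element the same shape means the item.id access inside the loop stays on a single, predictable object layout:

interface Task {
  id: number;
  title: string;
}

const tasks: Task[] = [
  { id: 1, title: "write" },
  { id: 2, title: "review" },
];

console.log(processItems(tasks)); // 3 - every element shares the same hidden class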

Understanding the Performance Impact

Example: Type-Consistent Array Processing

// JavaScript version - potential for type inconsistency
function processNumbers(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i] * 2; // Could deoptimize if mixed types are passed
  }
  return sum;
}

// Later in code, this could cause deoptimization
processNumbers([1, 2, 3]); // Numbers
processNumbers(["1", "2", "3"]); // Strings, type change!

Here's the same function in TypeScript:

// TypeScript version - enforces type consistency
function processNumbers(arr: number[]): number {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i] * 2; // Compiler prevents non-number arrays
  }
  return sum;
}

// processNumbers(['1', '2', '3']); // ✗ Compile-time error

Example: Consistent Object Shapes

// JavaScript - potential shape inconsistency
function calculateTotal(orders) {
  let total = 0;
  for (const order of orders) {
    total += order.amount; // Could access different object shapes
  }
  return total;
}

And here's the TypeScript version:

// TypeScript - enforces consistent shape
interface Order {
  id: number;
  amount: number;
}

function calculateTotal(orders: Order[]): number {
  let total = 0;
  for (const order of orders) {
    total += order.amount; // All objects guaranteed same shape
  }
  return total;
}

Advanced Optimization Techniques

1. Branded Types for Compile-Time Safety

type UserId = number & { __brand: "UserId" };
type ProductId = number & { __brand: "ProductId" };

// Prevents accidental type mixing that could cause deoptimization
function getUser(id: UserId): User {
  /* ... */
}
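
Since no plain number is assignable to UserId on its own, branded values are usually created through a small constructor function. A minimal sketch (toUserId and the sample values are illustrative, not part of the snippet above):

function toUserId(id: number): UserId {
  return id as UserId; // the single place where the cast happens
}

const userId = toUserId(42);
const productId = 7 as ProductId;

getUser(userId); // ✓ OK
// getUser(productId); // ✗ Compile-time error: ProductId is not assignable to UserId
// getUser(42);        // ✗ Compile-time error: plain numbers don't carry the brand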

2. Discriminated Unions for Predictable Polymorphism

type Shape =
  | { type: "circle"; radius: number }
  | { type: "rectangle"; width: number; height: number };

function calculateArea(shape: Shape): number {
  switch (shape.type) {
    // V8 can optimize this: each case has predictable hidden class
    // The 'type' property acts as a hidden class discriminator
    case "circle":
      return Math.PI * shape.radius ** 2; // Always accesses 'radius' at known offset
    case "rectangle":
      return shape.width * shape.height; // Always accesses 'width' & 'height' at known offsets
  }
}
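
Calling it looks like this; every circle literal shares one shape and every rectangle literal shares another, so both branches stay predictable:

console.log(calculateArea({ type: "circle", radius: 2 })); // ~12.566
console.log(calculateArea({ type: "rectangle", width: 3, height: 4 })); // 12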

Measuring Deoptimizations in Practice

You can actually observe deoptimizations in V8 using Node.js flags:

node --trace-opt --trace-deopt your-script.js

This will show you exactly when and why deoptimizations occur, helping you identify performance bottlenecks that better typing discipline could help avoid.
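
For example, here's a tiny script (the file name is arbitrary) that reproduces the add() deoptimization from earlier. The parameters are deliberately typed as any so the shape-breaking call compiles; with (a: number, b: number), TypeScript would reject it. The exact log format changes between V8 versions, so just look for lines mentioning add.

// deopt-demo.ts - compile and run the output with:
//   npx tsc deopt-demo.ts && node --trace-opt --trace-deopt deopt-demo.js
function add(a: any, b: any) {
  return a + b;
}

// Hot loop with numbers: the --trace-opt output should show 'add' being
// marked for optimization and then optimized
for (let i = 0; i < 100_000; i++) {
  add(i, i + 1);
}

// Break the type assumption: the --trace-deopt output should show a
// deoptimization entry for 'add'
add("hello", "world");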

The Property Order Problem: What You Can Do

Since TypeScript can't enforce property order, here are practical strategies for maintaining consistent object shapes:

Use Class Constructors for Consistency

class User {
  constructor(
    public readonly name: string,
    public readonly email: string,
    public readonly age: number
  ) {}
}

// Always creates objects with consistent property order
const user1 = new User("Alice", "[email protected]", 25);
const user2 = new User("Bob", "[email protected]", 30);

Establish Object Creation Patterns

// Define a consistent factory function
function createUser(name: string, email: string, age: number) {
  return { name, email, age }; // Always same order
}

Avoid Dynamic Object Building

// ❌ Inconsistent property order
const user = {};
if (hasName) user.name = "Alice";
if (hasEmail) user.email = "[email protected]";

// ✅ Consistent initialization
const user = {
  name: hasName ? "Alice" : undefined,
  email: hasEmail ? "[email protected]" : undefined,
};

Remember: V8's performance benefits from predictability. The more consistent your object creation patterns, the better V8 can optimize your code.

Best Practices for Performance-Oriented TypeScript

If you want to write TypeScript that helps V8 perform at its best, here are the key strategies (a combined sketch follows the list):

1. Embrace strict type definitions: Say goodbye to any and overly broad union types

2. Use interfaces as contracts: Define them for frequently used objects to keep their shapes predictable

3. Choose specific over generic: Literal types give V8 more information to work with

4. Minimize dynamic property access: Use bracket notation sparingly for better predictability

5. Enable TypeScript's strict mode: Pushes toward better patterns
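
Here's a small combined sketch of several of these practices, with illustrative types and names, assuming strict mode is enabled in your tsconfig:

// Literal types instead of a loose string
type Status = "pending" | "shipped" | "delivered";

// An interface as a contract for a frequently used object shape
interface OrderSummary {
  id: number;
  status: Status;
  amount: number;
}

// Specific parameter types instead of 'any', and dot access on known properties
function describeOrder(order: OrderSummary): string {
  return `#${order.id}: ${order.status} (${order.amount.toFixed(2)})`;
}

console.log(describeOrder({ id: 1, status: "shipped", amount: 19.99 })); // "#1: shipped (19.99)"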

Conclusion

TypeScript's static type system encourages coding patterns that are more amenable to JavaScript engine optimizations. By preventing common anti-patterns that cause V8 deoptimizations, it can indirectly help your applications run faster. But should you actually care? Most of the time the performance differences are negligible, though in some cases they can make a real difference.

Knowing why and how these optimizations work is crucial for making informed decisions and writing better code, even if you won't need these tricks all the time.


Thanks for reading! If you found this helpful, follow me for more deep dives into JavaScript performance, web development, and Rust.

Top comments (6)

Nevo David

Pretty cool seeing someone dig into hidden classes like this - honestly, most folks just gloss over those details. Makes me want to go check my own code for funky object orders.

Davide Ceschia

It’s just unlucky that typescript doesn’t enforce that, but I found this eslint rule that sort of does the same: eslint.org/docs/latest/rules/sort-...

Pradeep

Great post. Very insightful. Thanks for sharing.

Davide Ceschia

Thank you! I appreciate it

Juanda Martínez

Thanks for sharing this, man! The hidden class concept was new to me.

I remember my first project — the “guru” on the team used to schedule weekly meetings to give us theoretical lessons about JS and how to improve our codebase. One time, he said we should start sorting all our properties and variables alphabetically, and even encouraged this practice during code reviews.

I didn’t really understand why he wanted that — I assumed it was just for readability. After I left the project, I dropped that habit. It’s funny how I’ve just now discovered that this actually affects performance.

I see you’re not new around here, but this was an excellent first post! Looking forward to reading more from you 🫡

Scott

Great article - this was really helpful to understand how using TypeScript can benefit a codebase outside of compile time error catching. I appreciate that you went into detail on the history of the V8 engine to set the stage for why TypeScript can lead to performance optimizations compared to pure Javascript. Nice work!