Several valid critiques have been offered, but I have not yet seen a design suggestion that stands to greatly decrease the complexity of a common list operation: appending to the end of a list.
Currently you have to traverse the whole list to find the last node and then attach the new node to it. However, since the list structure is separate from its internal nodes, it can hold a pointer to the tail node in addition to the head pointer.
With this in mind, the code might look something like:
typedef struct Node {
    struct Node* next;
    int data;
} Node;

typedef struct LinkedList {
    Node* head;
    Node* tail;
    int length;
} LinkedList;
LinkedList* append(LinkedList* list, int data) {
    // Checks on validity of list and its pointers.
    Node* new_node = malloc(sizeof *new_node);
    if (!new_node) return NULL;
    new_node->next = NULL;
    new_node->data = data;
    if (list->tail) {
        list->tail->next = new_node;
    } else {
        list->head = new_node;  // the list was empty
    }
    list->tail = new_node;
    list->length++;
    return list;
}
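To see the tail pointer in action, here is a quick usage sketch. The make_list helper is hypothetical (your code may initialize the list differently), and <stdlib.h> is assumed for malloc:

// Hypothetical constructor; an empty list has NULL head and tail.
LinkedList* make_list(void) {
    LinkedList* list = malloc(sizeof *list);
    if (!list) return NULL;
    list->head = NULL;
    list->tail = NULL;
    list->length = 0;
    return list;
}

int main(void) {
    LinkedList* list = make_list();
    if (!list) return 1;
    for (int i = 0; i < 5; i++) {
        append(list, i);  // each call is O(1): no traversal needed
    }
    // At this point list->length == 5 and list->tail->data == 4.
    return 0;
}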
Appending now becomes an O(1) operation rather than O(n), and it also sidesteps the fact that _add_node, which append calls, is recursive rather than iterative. It is tail-recursive, which is good, but you can't count on C compilers to perform tail-call optimization, so you're opening yourself up to a stack overflow on a long list.
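If you do keep a traversal-based _add_node, rewriting it as a loop removes that risk. A minimal sketch, assuming it takes the head node and the node to attach (your actual signature may differ):

// Iterative walk to the last node; no recursion, so no stack growth.
// Assumes head is non-NULL; the caller handles the empty-list case.
void _add_node(Node* head, Node* new_node) {
    Node* current = head;
    while (current->next != NULL) {
        current = current->next;
    }
    current->next = new_node;
}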
Potential drawbacks of linked lists in general
Also consider whether your efforts are well spent implementing a linked list in the first place. Search for Bjarne Stroustrup's talk on linked lists vs. dynamic arrays in C++; the basic message applies to C just as well as to C++.
One of Bjarne's concerns with linked lists is fragmentation of nodes in memory: dynamically allocating each node individually means the nodes can end up scattered all over the address space. So when the CPU pulls a chunk of memory into its cache, you can't count on a performance boost, because the next node probably isn't in that chunk.
One way of addressing this without abandoning the linked list entirely is to use your own allocator that hands out nodes from larger contiguous blocks of memory. You're essentially building a linked list of large memory blocks and carving the list's nodes out of them. The result still behaves exactly like a linked list, but loading one node means the next several nodes are probably already in cache, making those accesses faster.
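A minimal sketch of that idea, using a fixed-size pool as the backing block; the pool size and the pool_alloc_node / pool_free_all names are illustrative, not part of your code:

#include <stdlib.h>

#define POOL_CAPACITY 256  /* illustrative block size */

// A pool hands out nodes from one contiguous allocation; pools themselves
// form a linked list so the allocator can grow by whole blocks.
typedef struct NodePool {
    struct NodePool* next;
    size_t used;
    Node nodes[POOL_CAPACITY];
} NodePool;

// Returns a node from the current pool, allocating a fresh pool when the
// current one is exhausted. *pool_head is updated to point at the newest pool.
Node* pool_alloc_node(NodePool** pool_head) {
    if (*pool_head == NULL || (*pool_head)->used == POOL_CAPACITY) {
        NodePool* fresh = malloc(sizeof *fresh);
        if (!fresh) return NULL;
        fresh->next = *pool_head;
        fresh->used = 0;
        *pool_head = fresh;
    }
    return &(*pool_head)->nodes[(*pool_head)->used++];
}

// Frees every pool, and therefore every node, in one pass.
void pool_free_all(NodePool* pool_head) {
    while (pool_head) {
        NodePool* next = pool_head->next;
        free(pool_head);
        pool_head = next;
    }
}

The trade-off is that you can no longer free nodes one at a time without layering a free list on top, which ties into the next point.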
Of course, this chips away at the simplicity that makes a linked list appealing in the first place, and it again raises the "are you using the right data structure?" question.