With this schema:
create table object (
   obj_id      serial      primary key,
   name        varchar(80) not null unique,
   description text,
   tag_arr     int[]
);
create table tag (
   tag_id      serial      primary key,
   label       varchar(20) not null unique
);
An object may have any number of tags attached. Instead of an object × tag cross-reference table, I want to keep the tag_ids in an array so they can be fetched easily along with the object record.
How do I create an index on object so that each element of tag_arr is indexed?
That said, are there better ways to solve this problem?
Discussion
This could be achieved with:
create table obj_x_tag(
   obj_id    int references object,
   tag_id    int references tag,
   constraint obj_x_tag_pk primary key( obj_id, tag_id )
);
select obj_id, name, description, array_agg( tag_id )
from object o
join obj_x_tag x using( obj_id )
group by 1, 2;
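With the cross table in place, the narrow lookup ("which objects carry a given tag?") is served by an ordinary B-tree index. A minimal sketch against the schema above; the literal tag id 42 is just a placeholder:

-- index leading on tag_id for tag-to-object lookups;
-- the primary key (obj_id, tag_id) already covers the reverse direction
create index obj_x_tag_tag_idx on obj_x_tag( tag_id, obj_id );

-- all objects carrying a given tag
select o.obj_id, o.name
from object o
join obj_x_tag x using( obj_id )
where x.tag_id = 42;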
But to me it makes more sense to simply keep the array of tag_ids in a column and dispense with the cross table and the array_agg() call.
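That is, fetching an object together with its tags becomes a single-table read (obj_id 1 is just a placeholder):

select obj_id, name, description, tag_arr
from object
where obj_id = 1;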
It was suggested to use the approach from "PostgreSQL SQL: Converting results to array". The problem, as noted there, is that "this doesn't actually index individual array values, but instead indexes the entire array".
It was also suggested to use Postgres's intarray extension with a GiST (or GIN) index. The problem, to me, is that the index serves the standard set-based array operators: it is aimed not at finding a single element of an array but at testing whether one array contains or intersects another. It is counter-intuitive, size-wise and speed-wise, that such a wide solution is correct for such a narrow problem. Also, the intarray extension is limited to plain int (int4), covering neither bigint nor text, which limits its usefulness.
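For comparison, a sketch of the array-column approach. PostgreSQL's built-in GIN support for int[] (the default array_ops operator class) handles this without the intarray extension; it is the @> containment test that actually uses the index, and the tag id 42 is again a placeholder:

-- GIN index over the array elements (built-in array_ops, no extension needed)
create index object_tag_arr_gin on object using gin( tag_arr );

-- objects whose tag_arr contains tag 42; this predicate can use the GIN index
select obj_id, name
from object
where tag_arr @> array[42];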
A classic cross-reference table between obj_id and tag_id will save you a lot of trouble in the long run. Just think of a query that tells you how often each tag was used.
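For example, a tag-usage count is a one-liner against the cross table, while the array column forces an unnest() of every row first (a sketch against the schema above):

-- with the cross table: trivial
select tag_id, count(*) as uses
from obj_x_tag
group by tag_id;

-- with the array column: expand each row's array before counting
select t.tag_id, count(*) as uses
from object o
cross join lateral unnest( o.tag_arr ) as t( tag_id )
group by t.tag_id;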