As others have mentioned, there isn't necessarily any relation between two characters just because they look alike. Most similar examples online focus on removing accents and use the standard Python library unicodedata, which applies standard normalization forms such as NFKD to get to ASCII (NFKD explained here).
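For reference, here is what that accent-removal pattern looks like on its own (a minimal sketch; the "café" sample string is my own, not from your question):
import unicodedata
# NFKD splits "é" into "e" plus a combining accent; encoding to ASCII with
# 'ignore' then drops the combining accent, leaving plain "e"
print(unicodedata.normalize('NFKD', u"café").encode('ascii', 'ignore'))  # b'cafe'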
More Common unicodedata Approach
import unicodedata
str_unicode = u"ⁱ, ᴠ, Ғ, Ƭ, ѳ, ❶"
# 'replace': any character that can't be encoded to ASCII is replaced with ?
print(unicodedata.normalize('NFKD',str_unicode).encode("ascii",'replace'))
# 'ignore': any character that can't be encoded to ASCII is silently dropped
print(unicodedata.normalize('NFKD',str_unicode).encode("ascii",'ignore'))
unicodedata Output
b'i, ?, ?, ?, ?, ?'
b'i, , , , , '
unidecode Map with Translate
The unidecode library gets closer for your specific example. I think you will have to augment it with a translate call, though, to clean up characters the library doesn't map.
In the second example I added another character the library can't map, the paragraph mark "¶", which I mapped to "P" for reference.
import unicodedata
import unidecode
#Script
str_unicode = u"ⁱ, ᴠ, Ғ, Ƭ, ѳ, ❶, ¶"
# translation table for the characters unidecode doesn't map itself
dict_mapping = str.maketrans("❶¶", "1P")
str_unidecode = unidecode.unidecode(str_unicode)
# translate the unmapped characters first, then run unidecode on the result
str_unidecode_translated = unidecode.unidecode(str_unicode.translate(dict_mapping))
print(str_unidecode)
print(str_unidecode_translated)
"i, V, G', T, f, "not good enough? What is the actual rule you want to apply? If you expect that every character should have a specific mapping, with no particular pattern to it, then you will need to just have some kind of hard-coded mapping (whether you make it yourself or use a library likeunidecode) - there is no getting around that. It's only possible to write an algorithm for things that have an algorithmic approach.unidecode, to see if there are any configuration options that could get you results more like what you want?Ғis not related toFat all. It's mainly used to represent voice velar and uvular fricatives (think French "r") in Turkic languages that use the Cyrillic language. Just because two glyphs look similar does not mean there's a mapping that represents the similarity.