[–] thevoidzero@lemmy.world 5 points 3 weeks ago (9 children)

I thought most sane, modern languages use Unicode block/category identification to determine whether a character can be used in a valid identifier or not. For example, all the 'numeric' Unicode characters can't be at the beginning of an identifier, similar to how you can't have '3var'.

So once your programming language supports Unicode, it will automatically support any human language whose script has those blocks.
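
Python is one language that actually works this way (PEP 3131): the first character must be in Unicode's XID_Start set and the rest in XID_Continue, so the '3var' rule applies to digits from any script. A quick sketch:

```python
import unicodedata

# Python (PEP 3131) validates identifiers with Unicode properties:
# the first char must be XID_Start (letters), the rest XID_Continue
# (letters, digits, combining marks, underscores).
print("変数".isidentifier())   # True  - CJK letters are valid anywhere
print("3var".isidentifier())   # False - an ASCII digit can't start a name
print("٣var".isidentifier())   # False - an Arabic-Indic digit can't either
print("var٣".isidentifier())   # True  - digits are fine after the start

# The underlying classification is exposed via unicodedata:
print(unicodedata.category("変"))  # 'Lo' (Letter, other)
print(unicodedata.category("٣"))   # 'Nd' (Number, decimal digit)
```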

[–] NeatNit@discuss.tchncs.de 4 points 3 weeks ago (8 children)

Sanity is subjective here. There are reasons to disallow non-ASCII characters, for example to prevent identical-looking characters from causing sneaky bugs in the code, like this but unintentional: https://en.wikipedia.org/wiki/IDN_homograph_attack (and yes, don't you worry, this absolutely can happen unintentionally).
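
For a contrived sketch of how that bites: the two assignments below render identically, but the second name uses Cyrillic 'а' (U+0430) in place of Latin 'a' (U+0061), so Python treats them as two unrelated variables:

```python
# Both names below look like "payload", but the second contains a
# Cyrillic 'а' (U+0430) instead of the Latin 'a' (U+0061).
payload = "expected"
pаyload = "oops"               # a completely separate variable!

print(payload)                 # "expected" - the "update" went elsewhere
# The two spellings really are distinct:
print("payload" == "pаyload")  # False
```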

[–] toastal@lemmy.ml 8 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

OCaml’s old m17n compiler plugin solved this by requiring you to pick one block per ‘word’: you can only switch to another block after an underscore. As such you can write print_แมว but you couldn’t write pℝint_c∀t. This is a totally reasonable solution.
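
Not the actual plugin (that was OCaml compiler machinery), but a minimal Python sketch of the rule, with a tiny hand-rolled block table since the stdlib doesn't expose Unicode block names:

```python
# Illustrative subset of Unicode blocks; the real table is much larger.
BLOCKS = [
    (0x0041, 0x007A, "Basic Latin"),
    (0x0E00, 0x0E7F, "Thai"),
    (0x2100, 0x214F, "Letterlike Symbols"),      # contains ℝ (U+211D)
    (0x2200, 0x22FF, "Mathematical Operators"),  # contains ∀ (U+2200)
]

def block_of(ch):
    cp = ord(ch)
    for lo, hi, name in BLOCKS:
        if lo <= cp <= hi:
            return name
    return None

def one_block_per_word(identifier):
    # Each underscore-separated 'word' must draw from a single block.
    for word in identifier.split("_"):
        if len({block_of(ch) for ch in word}) > 1:
            return False
    return True

print(one_block_per_word("print_แมว"))   # True  - Latin word, Thai word
print(one_block_per_word("pℝint_c∀t"))   # False - blocks mixed within words
```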

[–] NeatNit@discuss.tchncs.de 2 points 3 weeks ago

That's pretty cool
