this post was submitted on 17 Aug 2023
7 points (100.0% liked)

Programming Languages


Hello!

This is the current Lemmy equivalent of https://www.reddit.com/r/ProgrammingLanguages/.

The content and rules are the same here as they are over there. Taken directly from the /r/ProgrammingLanguages overview:

This community is dedicated to the theory, design and implementation of programming languages.

Be nice to each other. Flame wars and rants are not welcomed. Please also put some effort into your post.

This isn't the right place to ask questions such as "What language should I use for X?", "What language should I learn?", or "What's your favorite language?". Such questions should be posted in /c/learn_programming or /c/programming.

This is the right place for posts like the following:

See /r/ProgrammingLanguages for specific examples

Related online communities

founded 1 year ago
top 6 comments
[–] ananas@sopuli.xyz 2 points 1 year ago (1 children)

That page makes Firefox's Reader mode really worth its weight in gold, but the content is interesting.

Has anyone used MLIR for anything yet?

[–] TheTrueLinuxDev@programming.dev 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yup, I've been writing a new shader language to replace GLSL and HLSL for Vulkan compute purposes, but I eventually switched from emitting SPIR-V directly to emitting MLIR and using the IREE compiler, which accepts the MLIR and compiles it to any of CUDA, ROCm, SPIR-V, and so forth.

A lot of it came from my unadulterated hatred of our current machine learning frameworks... It's one of the projects I've been working on to outright replace PyTorch/TensorFlow and ban those two frameworks from my office forever. I got fed up not knowing exactly how much memory allocation, computational cost, and so forth I'd need when running or training neural net models. Plus, I want an easier way to split a model across lower-end GPUs that doesn't rely on Nvidia-only GPUs for CUDA code. I also wanted SPIR-V as a fallback compute kernel, because if CUDA/ROCm is too new for your GPU, you're SOL, but if you have SPIR-V, chances are any GPU made in the last 10 years with a Vulkan driver would likely be supported.

One of the biggest pluses with MLIR is that you are also future-proofing your code, because that code could feasibly be recompiled for new devices like neural-net accelerator cards, ASICs, FPGAs, and so forth.
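To make the "one module, many targets" idea concrete, here's a sketch of what driving IREE from the command line looks like. The file name is made up, and the `--iree-hal-target-backends` flag and its values are taken from IREE's documentation but change between releases, so treat this as illustrative rather than copy-pasteable:

```shell
# Compile one MLIR module for three different backends (hypothetical file name;
# exact flag spellings depend on your IREE version).
iree-compile model.mlir --iree-hal-target-backends=cuda         -o model_cuda.vmfb
iree-compile model.mlir --iree-hal-target-backends=rocm         -o model_rocm.vmfb
iree-compile model.mlir --iree-hal-target-backends=vulkan-spirv -o model_vulkan.vmfb
```

The point is that retargeting is a compiler flag, not a rewrite of the source.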

[–] ananas@sopuli.xyz 2 points 1 year ago (1 children)

Interesting, I definitely need to check out MLIR stuff more. I've always been a bit dissuaded by it being one more step in my IR -> MLIR -> LLVM chain, but the ability to compile it for multiple GPUs is a very good selling point.
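For reference, the MLIR -> LLVM leg of that chain is itself just a pass pipeline plus a translation step. A rough sketch with upstream tools (pass names drift between MLIR versions, so check `mlir-opt --help` for your build):

```shell
# Lower standard dialects to the LLVM dialect, then emit LLVM IR.
mlir-opt input.mlir \
  --convert-scf-to-cf \
  --convert-arith-to-llvm \
  --finalize-memref-to-llvm \
  --convert-func-to-llvm \
  --reconcile-unrealized-casts \
| mlir-translate --mlir-to-llvmir -o out.ll
```

So the extra step is mostly mechanical once you're in MLIR.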

[–] TheTrueLinuxDev@programming.dev 0 points 1 year ago (1 children)

Yeah, MLIR is more or less an "IR with dialects". A lot of IR language specs share a lot in common with one another, so MLIR tries to standardize those similarities between IRs. Because of that, it reduces the amount of IR code developers have to worry about, and they can progressively expand the available dialects as they develop a compiler like IREE.
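Here's a minimal sketch of what that dialect mixing looks like in practice: a single function that uses ops from four upstream dialects (`func`, `arith`, `scf`, `memref`) side by side, each recognizable by its prefix:

```mlir
// AXPY (y = a*x + y) written against upstream MLIR dialects.
func.func @axpy(%a: f32, %x: memref<16xf32>, %y: memref<16xf32>) {
  %c0  = arith.constant 0  : index
  %c16 = arith.constant 16 : index
  %c1  = arith.constant 1  : index
  scf.for %i = %c0 to %c16 step %c1 {
    %xi = memref.load %x[%i] : memref<16xf32>
    %yi = memref.load %y[%i] : memref<16xf32>
    %m  = arith.mulf %a, %xi : f32
    %s  = arith.addf %m, %yi : f32
    memref.store %s, %y[%i] : memref<16xf32>
  }
  return
}
```

A compiler only has to define ops that don't already exist in some dialect, which is the code-sharing being described above.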

[–] ananas@sopuli.xyz 2 points 1 year ago (1 children)

> Yeah, MLIR is more or less an "IR with Dialects", a lot of IR language spec share a lot in common with one another, so MLIR try to standardize that similarity between IR

Oh, shit, that sounds like C all over again, just for GPUs this time.

But that note aside, definitely sounds like something I need to learn more about.

Lol, that's one way to put it. Basically a language convergence, not a bad thing to be honest.