Not necessarily, you still need backups or snapshots, especially of your home directory, in case software has a nasty bug like deleting your data.
Yup, and I am getting sick of hearing this even on Arch Linux. Like, mofo, you could literally run a snapshot or backup before upgrading, so don't blame us if you're yoloing your god damn computer. Windows has exactly the same problem, and this is why we have backups. Christ.
On my Arch Linux install, I literally have a pacman hook that forcibly runs a backup and verifies said backup before doing a system-wide update.
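For anyone curious, that kind of hook is just a file dropped into /etc/pacman.d/hooks/. Roughly what mine looks like (the Exec path is a placeholder, point it at whatever backup script you actually use):

```ini
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Running backup and verifying it before the upgrade...
When = PreTransaction
Exec = /usr/local/bin/pre-upgrade-backup.sh
AbortOnFail
```

AbortOnFail is the important bit: if the backup script exits non-zero, pacman refuses to start the transaction.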
That one was some old documentation where Chinese folks documented a lot of quirks related to the X11 protocol. I paid about $6000 for a translator to translate that doc into English, and I used it to build my own GUI toolkit on Linux that I still use to this day.
How it really works:
mpf_t temperature;
If confused...
It's an arbitrary-precision floating-point number type provided by LibGMP, and you can find more information about mpf_t here.
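Quick sketch of typical usage, in case it helps (link with -lgmp):

```c
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpf_t temperature;

    /* request 256 bits of mantissa precision (vs ~53 bits for a double) */
    mpf_init2(temperature, 256);
    mpf_set_d(temperature, 98.6);

    /* arithmetic goes through mpf_* calls rather than operators */
    mpf_mul_ui(temperature, temperature, 2);

    /* %Ff is gmp_printf's conversion for mpf_t */
    gmp_printf("temperature = %.10Ff\n", temperature);

    mpf_clear(temperature);
    return 0;
}
```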
Oof, sorry. :( I had hoped that they'd sorted it out by then...
Lol, that's one way to put it. Basically a language convergence, which isn't a bad thing to be honest.
Yeah, MLIR is more or less an "IR with dialects". A lot of IR language specs share a lot in common with one another, so MLIR tries to standardize those similarities between IRs. Because of that, it reduces the amount of IR code developers have to worry about, and they can progressively expand the available dialects for MLIR as they develop a compiler like IREE.
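Toy example of what "dialects in one module" looks like in the textual IR (not from any real project, just the stock func and arith dialects):

```mlir
// func and arith are separate dialects coexisting in the same module
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```

A backend like IREE can then progressively lower those ops through its own dialects down to whatever target it supports.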
Yup, I've been writing a new shader language to replace GLSL and HLSL for Vulkan Compute purposes, but I eventually switched from SPIR-V IR to MLIR and use the IREE compiler, which accepts the MLIR and compiles it to any of CUDA, ROCm, SPIR-V, and so forth.
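As a rough idea of the workflow (exact flag names shift between IREE releases, so treat these as an approximation):

```sh
# compile the same MLIR module for different backends
iree-compile --iree-hal-target-backends=vulkan-spirv kernel.mlir -o kernel_vk.vmfb
iree-compile --iree-hal-target-backends=cuda kernel.mlir -o kernel_cuda.vmfb
```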
A lot of it was because of my unadulterated hatred toward our current Machine Learning Frameworks...
It's one of the projects I've been working on to outright replace PyTorch/TensorFlow and ban those two frameworks from my office forever. I got fed up not knowing exactly how much memory allocation, computational cost, and so forth I need when running or training neural net models. Plus, I want an easier way to split a model across lower-end GPUs that doesn't rely on Nvidia-only GPUs for CUDA code. I also wanted SPIR-V as a fallback compute kernel, because if CUDA/ROCm is too new for your GPU, you're SOL, but with SPIR-V, chances are any GPU made in the last 10 years that has a Vulkan driver would likely be supported.
One of the biggest pluses with MLIR is that you're also future-proofing your code, because that code could feasibly be recompiled for new devices like neural net accelerator cards, ASICs, FPGAs, and so forth.
Very nice. I was basically forking Python Lark and rewriting it in C, with some adjustments to the Earley parser in an experiment to parallelize the processing in Vulkan Compute.
I agree on the idea of avoiding having to make your own parser generator; that is precisely what I'm doing and it's hell. I assumed you'd probably want to pick up some understanding of how parsers differ when it comes to writing grammars. As for ease of use and requiring the least understanding, something like an Earley parser is probably the easiest: it's slower than other parsing algorithms, but it can handle ambiguous grammars, making it ideal for first-timers learning how to write a programming language.
Sure, until you can't with Flatpak. Flatpak doesn't safeguard against system binaries, and there are always risks associated with that.
Honestly, I think I'm going to move on from Programming.dev; it's filled with script kiddies like you. Good lord.
Fuck y'all. Good evening.