I read "happy ___ starts with ___" as stating that happiness was the eventual result of a process that started with ___.
It's probably not "provable" one way or the other, but I'd like to see more empirical studies in general within the software industry, and this seems like a fruitful subject for that.
Cool! Oracle, a company famous for making good-will decisions, and open to being "urged" into doing the right thing. 🙄
I suppose the open letter is a nice gesture, and I hope that the petition to cancel the trademark succeeds.
Extension modules can be, and are, written in Rust and C++. And PyPy has a compatibility layer to run extensions (such as numpy) that are written for CPython.
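For concreteness, here's a minimal sketch of a Rust extension module using the pyo3 crate (the function and module names are placeholders, and this uses the older GIL-ref style module signature; newer pyo3 releases spell it slightly differently):

```rust
use pyo3::prelude::*;

// A tiny function exposed to Python as example_ext.add(a, b).
#[pyfunction]
fn add(a: i64, b: i64) -> i64 {
    a + b
}

// The module initializer; the function name must match the module name,
// and the crate needs `crate-type = ["cdylib"]` in Cargo.toml.
#[pymodule]
fn example_ext(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(add, m)?)?;
    Ok(())
}
```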
The reason extension modules are typically in C is, of course, that the API is in C, but that's true of cffi as well (though you're right that cffi is more portable). And the reason the API is in C is more fundamental than "CPython is written in C".
For what it's worth, Ada and SPARK are listed separately in the Wikipedia article on dependent typing. Again, though, I'm not a language expert.
The reason C becomes relevant to Python users isn't typically because the interpreter is written in C, but because so many important libraries (especially numpy) are implemented in C.
Whatever you want to call them, my point is that most languages, including Rust, don't have a way to define new integer types that are constrained by user-provided bounds.
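For contrast, the usual Rust workaround is a newtype whose bounds live in a constructor rather than in the type itself; a minimal sketch (the Percent name and API here are purely illustrative):

```rust
// A hand-rolled "bounded integer": the 0..=100 constraint is enforced at
// run time by the constructor, not expressed in the type.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Percent(u8);

impl Percent {
    fn new(value: u8) -> Option<Self> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }
}

fn main() {
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(150).is_none());
    // Compare Ada, where `type Percent is range 0 .. 100;` puts the bounds
    // in the type declaration and the compiler inserts the checks.
}
```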
Dependent types, as far as I'm aware, aren't defined in terms of "compile time" versus "run time"; they're just types that depend on a value. It seems to me that constraining an integer type to a specific range of values is a clear example of that, but I'm not a type theory expert.
It sounds like you're talking about dependent typing, then, at least for integers? That's certainly a feature Rust lacks that seems like it would be nice, though I understand it's quite complicated to implement and would probably make Rust compile times much slower.
For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check.
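To make the analogy concrete, here's a small Rust sketch: the subscript check has a default safe path, an unsafe opt-out for performance, and an explicitly fallible alternative, just like the overflow trap (illustrative only):

```rust
fn main() {
    let v = [1, 2, 3];
    let i: usize = 5;

    // Default indexing is bounds-checked and panics on an OOB index:
    // let _ = v[i];

    // The check can be opted out of for speed, at the cost of UB if the
    // index really is out of bounds:
    // let _ = unsafe { *v.get_unchecked(i) };

    // The explicit, fallible alternative returns an Option instead:
    assert_eq!(v.get(i), None);
}
```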
That's exactly what I described above. By default, trapping on overflow/underflow is enabled for debug builds and disabled for release builds. As I said, I think this is a sensible behavior. But in addition to per-operation explicit handling, you can also turn the global trapping behavior on or off in your build profile.
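For reference, the build-profile switch looks roughly like this in Cargo.toml (sketch; the key is overflow-checks, and it can be set per profile):

```toml
# Enable overflow trapping even in optimized builds.
[profile.release]
overflow-checks = true
```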
It depends what kind of errors you're talking about. Suppose you're implementing retries in a network protocol. You can get errors pretty regularly, and the error handling will be a nontrivial amount of your runtime.
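A rough sketch of what I mean, with made-up names (send_request stands in for whatever transport call you're actually doing):

```rust
use std::time::Duration;

// Pretend network call that fails on the first couple of attempts.
fn send_request(attempt: u32) -> Result<String, String> {
    if attempt < 2 {
        Err(format!("timeout on attempt {attempt}"))
    } else {
        Ok("response".to_string())
    }
}

fn send_with_retries(max_attempts: u32) -> Result<String, String> {
    let mut last_err = String::from("no attempts made");
    for attempt in 0..max_attempts {
        match send_request(attempt) {
            Ok(resp) => return Ok(resp),
            // The error path runs on every failed attempt, so its cost is
            // part of normal control flow, not a rare cold path.
            Err(e) => {
                last_err = e;
                std::thread::sleep(Duration::from_millis(10));
            }
        }
    }
    Err(last_err)
}

fn main() {
    println!("{:?}", send_with_retries(5));
}
```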
By Ada getting it right, I assume you mean throwing an exception on any overflow? (Apparently this behavior was optional in older versions of GNAT.) Why is Ada's preferable to Rust's?
In Rust, integer overflow panics by default in debug mode but wraps silently in release mode. Optionally, though, you can specify wrapping, checked (returning an Option rather than panicking), or unchecked (unsafe) behavior for a specific operation, so that the optimization level doesn't affect the behavior. This makes sense to me; the unoptimized version is the same as Ada, the optimized version is not UB, and you can still control the behavior explicitly when necessary.
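Concretely, the per-operation methods look like this (a sketch using u8 just to make overflow easy to trigger):

```rust
fn main() {
    let x: u8 = 250;

    // Plain `+` panics on overflow in debug builds and wraps in release
    // builds (unless the profile's overflow-checks setting says otherwise):
    // let _ = x + 10;

    // Explicit per-operation behavior, independent of optimization level:
    assert_eq!(x.wrapping_add(10), 4);            // always wraps
    assert_eq!(x.checked_add(10), None);          // None on overflow
    assert_eq!(x.overflowing_add(10), (4, true)); // wraps and reports overflow
    assert_eq!(x.saturating_add(10), u8::MAX);    // clamps at the type's bounds

    // `unchecked_add` also exists, but it's `unsafe` because overflow there
    // is undefined behavior.
}
```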
There's also a massive tradeoff when the error condition actually occurs. If an exception does get thrown and caught, that is comparatively slow.
If you look at the proposal, this is specifically "static reflection", i.e. compile-time reflection. So it doesn't actually have any of the downsides you mention, as far as I can tell.