[โ€“] Valmond@lemmy.world 4 points 4 months ago (1 children)

100k lines of code doesn't mean anything.

You can make 1k lines of Python bog down your shiny new PC, just as easily as 1M lines can run perfectly fine.
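For instance, a handful of lines with the wrong data structure can dominate runtime. A minimal sketch of the point (the function names and sizes are illustrative, not from this thread):

```python
def dedupe_slow(items):
    """O(n^2): 'in' on a list scans the whole list for every element."""
    seen = []
    for x in items:
        if x not in seen:  # linear scan per element
            seen.append(x)
    return seen

def dedupe_fast(items):
    """O(n): 'in' on a set is an average O(1) hash lookup."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # constant-time lookup
            seen.add(x)
            out.append(x)
    return out

# With a million items, dedupe_slow can take minutes while dedupe_fast
# finishes in well under a second -- same line count, wildly different cost.
```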

[โ€“] sugar_in_your_tea@sh.itjust.works 2 points 4 months ago* (last edited 4 months ago)

Exactly. We have hundreds of thousands of lines of code that work reasonably well. I think we made the important decisions correctly, so performance issues in one area rarely impact others.

We rewrote ~1k lines of poorly performing Fortran into well-written Python, and it worked because we got the important parts right: CPU complexity dropped from O(n^3^) to O(n^2^ log n) and memory from O(n^4^) to O(n^3^). Runtime went from minutes to seconds on medium-sized data sets, and large data sets became possible to run at all (they used to OOM because of the O(n^4^) storage in RAM).

If you get the important parts right, Python is probably good enough, and you can get constant-factor speedups from there by moving hot spots to a compiled language (or using a JIT like numba); see the sketch below. Python wasn't why we could make it fast; it's just what we prototyped with so we could focus on the architecture, and we stopped optimizing when it was fast enough.
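A minimal sketch of that last step, assuming numba is installed. The pairwise-count function and data sizes here are hypothetical stand-ins, not the actual Fortran port from the comment:

```python
import numpy as np
from numba import njit  # pip install numba

@njit
def pairwise_count(xs, threshold):
    """Count pairs closer than `threshold`; O(n^2) by construction.
    @njit compiles this loop to machine code on first call."""
    n = xs.shape[0]
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(xs[i] - xs[j]) < threshold:
                count += 1
    return count

xs = np.random.rand(10_000)
# The first call pays a one-time compile cost; after that the loop runs
# at near-C speed -- a constant-factor win over the pure-Python version,
# while the big-O behavior stays whatever the algorithm gives you.
print(pairwise_count(xs, 0.001))
```

The design point is the same as in the comment: the JIT only buys a constant factor, so it pays off after, not instead of, fixing the algorithmic complexity.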