In the early days of the Internet, there was a cottage industry of burning Linux ISOs to CDs and selling them.
I work in Java, Golang, and Python, with Helm, CircleCI, bash scripts, Makefiles, Terraform, and Terragrunt for testing and deployment. Other teams handle the C++ and SQL (plus whatever dark magic QA uses).
I am well aware of how people learn, but they tend to learn through comprehension and understanding. Completing phrases without understanding the language (or the concept of language) is the realm of LLMs and Scrabble players.
About 10 years ago, I read a paper that suggested mitigating a rubber hose attack by priming your sys admins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf
Essentially, you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text. You then test them by giving them a sequence of text to complete, recording how quickly and accurately they respond. Repeat until the accuracy reaches an acceptable level.
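A rough sketch of what that challenge loop could look like (everything here is made up for illustration; the paper's actual protocol uses a game-like interface and more careful statistics):

    import random
    import time

    SECRET = "jkdfslkdjfskldfj"  # the trained nonsense sequence (made up)
    KEYS = "sdfjkl"              # alphabet the user was trained on

    def fragment(use_secret):
        """Return a 5-char fragment: either a slice of the trained
        sequence or random noise from the same alphabet."""
        if use_secret:
            i = random.randrange(len(SECRET) - 5)
            return SECRET[i:i + 5]
        return "".join(random.choices(KEYS, k=5))

    def challenge(ask, trials=40):
        """ask(prompt) shows a 4-char prompt and returns the user's
        guess for the 5th char. Only the genuinely trained user should
        be faster and more accurate on the trained fragments than on
        the random ones."""
        scores = {True: [], False: []}
        for _ in range(trials):
            trained = random.random() < 0.5
            frag = fragment(trained)
            t0 = time.monotonic()
            guess = ask(frag[:4])
            elapsed = time.monotonic() - t0
            scores[trained].append((guess == frag[4], elapsed))
        return scores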
Even if an attacker kidnaps the user and sends in a body double with your user's ID, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker tries to get the user on a video call, the extra time the user needs to read the prompt and dictate the response should introduce a detectable amount of lag.
The only remaining avenue for the attacker is, after dumping the body of the original user, to kidnap the family of another user and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.
Why should we keep leap seconds? Let noon drift by 1 minute per century (or whatever).
It looks like it targets JavaScript, the language that least needs it. What is the job security advantage of this tool over a minifier?
I am not a hiring manager (or, more likely, a recruiter or someone in HR), so I cannot speak to the value of having an MS listed on one's resume.
I am a senior developer with a master's degree and I am very grateful for the knowledge I got from that degree. Since I graduated, I have never needed to write a compiler, but I know how to implement a bunch of language features and it makes new languages easier to learn.
Could I have learned all of that without going to school? Definitely. It is all in white papers, software documentation, and textbooks, but for me, that is not the best way to learn. And from what I have been able to find, even the most advanced MOOCs only reach the advanced undergraduate level and don't cover grad-school concepts.
I always feel a little paranoid when I explicitly close transactions, connections, and files (for quick-running scripts, the OS will close the file when my process exits, and for long-running applications, the garbage collector will close it when the object is eventually collected). Then I read a blog post like this and remember that it is always better to explicitly free resources when I am done with them.
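In Python, for instance, the difference looks like this (file name made up):

    # Relying on the runtime: the file stays open until the object is
    # garbage-collected or the process exits, whichever comes first.
    f = open("results.log")
    data = f.read()

    # Explicit: the file is closed the moment the block exits, even if
    # read() raises, and the resource's lifetime is visible at a glance.
    with open("results.log") as f:
        data = f.read()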
- Encrypt the data at rest
- Encrypt the data in transit
Did you remember to plan for zero-downtime encryption key rotation? (There is a sketch of one approach after this list.)
- No shared accounts at any level of access
Do you know when account passwords expire? Have you thought about password rotation?
- Full logging of access and activity.
That sounds like a good practice until you have 20 (or even 2000) backend server requests per end user operation.
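Back to the key-rotation point: the usual answer is versioned keys, where you decrypt with any known key, encrypt with the newest, and re-encrypt records opportunistically. A minimal sketch using Python's cryptography package (keys here are freshly generated just for the example):

    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet(Fernet.generate_key())  # key currently in production
    new_key = Fernet(Fernet.generate_key())  # freshly provisioned key

    # Encrypts with new_key but still decrypts anything written under
    # old_key, so readers and writers never need downtime.
    keyring = MultiFernet([new_key, old_key])

    token = old_key.encrypt(b"customer row")  # pre-rotation ciphertext
    token = keyring.rotate(token)             # re-encrypt under new_key
    assert keyring.decrypt(token) == b"customer row"

Once every record has been rotated, the old key can be dropped from the keyring and destroyed.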
All of those are taken from my experience.
Security is like an invasive medical procedure: it is very painful in the short term but prevents dire complications in the long term.
Not at all in my org, as far as I know. We are a team of senior engineers, somewhat set in our ways, and I am not sure how good the Copilot plugin for Emacs is.
We are part of a large company and we had a mandate from up top to come up with ways to incorporate AI into our product. We prototyped a few, but could never get them better than "almost good enough to be useful". Other teams have presented promising prototypes of in-house AI assistants that we can incorporate into products.
My team pivoted to the inverse: seeing if we can make our product more useful to ML developers.
I know, but this thread is about projects that don't want to use GitHub as the center of discussion and use Discord instead. The Discussions tab needs to be enabled.
Senior developer tip: squash the evidence.