phase_change

joined 1 year ago
[–] phase_change@sh.itjust.works 3 points 2 months ago

The person isn’t talking about automation being difficult for a hosted website. They’re talking about a third-party system that doesn’t give you an easy way to automate, just a web GUI for uploading a cert. For example, our WAP interface and our on-premise ERP don’t offer a way to automate. Sure, we could probably write code to automate it and run the risk that it breaks after a vendor update. It’s easier to pay for a 12-month cert and do it manually.
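
To show what I mean, here’s a minimal sketch of what that automation might look like if the appliance hypothetically exposed an HTTP API for cert upload; most don’t, which is the whole problem. The URL, endpoint, token, and field names are all invented for illustration, and the file layout follows certbot’s deploy-hook convention.

```python
# Hypothetical certbot deploy hook: push a renewed cert to an appliance.
# Everything device-specific here (URL, token, field names) is invented.
from pathlib import Path

import requests

DEVICE_URL = "https://wap.example.internal/api/certificate"  # hypothetical
API_TOKEN = "changeme"  # hypothetical credential, e.g. from a secrets store

live = Path("/etc/letsencrypt/live/wap.example.internal")  # certbot layout

resp = requests.post(
    DEVICE_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    files={
        "certificate": live.joinpath("fullchain.pem").read_bytes(),
        "private_key": live.joinpath("privkey.pem").read_bytes(),
    },
    timeout=30,
)
resp.raise_for_status()  # fails loudly if the vendor changed the API
```

And that’s the fragile part: one firmware update that renames the endpoint, and renewals silently fail until something expires.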

[–] phase_change@sh.itjust.works 2 points 3 months ago

That’s my hope. Still, from where I live, all I can do is hope my monetary contributions are used to effect that.

[–] phase_change@sh.itjust.works 14 points 3 months ago (4 children)

This poll tracking shows Harris barely ahead in national polls. This millennium, Republicans have won the presidency in 2000, 2004, and 2016.

In 2000 and 2016, the Democratic candidate won the popular vote.

Winning the popular vote doesn’t mean shit. The electoral college is what matters.

That same NYT poll link lists 9 tossup states: Wisconsin, Michigan, Pennsylvania, Arizona, Georgia, Minnesota, North Carolina, Nevada, and Virginia.

You’ll notice all but the first three are in alphabetical order. That’s because all but the first three don’t have enough polling to make a prediction. Of those first three: a statistical tie in Wisconsin and Michigan with a Trump lead in Pennsylvania.

If you include Kennedy, Harris is ahead by 1% in Wisconsin and Pennsylvania but still tied in Michigan.

National polling trends are going in the direction I want, but they really don’t matter.

I write this from a state whose electoral college votes have never gone to a Democrat in my lifetime and won’t before my death. I’ll be voting for Harris, but mine is one of those national votes that won’t actually help my preferred candidate.

The only way I can help is via monetary donation.

And if you’re a Harris voter in a solidly blue state, your vote means fuck all, just like mine. Yes, it actually makes it to the electoral college, but, like mine, that’s a foregone conclusion. You should be donating money too and hoping it’s used wisely in those swing states.

[–] phase_change@sh.itjust.works 13 points 8 months ago* (last edited 8 months ago) (1 children)

Under the CMB method, it sounds like the calculation gives the same expansion rate everywhere. Under the Cepheid method, they get a different expansion rate, but it’s also the same in every direction. Apparently, this isn’t the first time the discrepancy has been seen. What’s new here is that they did the calculation for 1,000 Cepheid variable stars, so they’ve confirmed that an already known discrepancy isn’t down to something weird in the few they’d looked at in the past.

So, the conflict here likely comes down to our understanding of either the CMB or Cepheid variables.

[–] phase_change@sh.itjust.works 100 points 8 months ago (6 children)

Except it’s not that they’re finding the expansion rate is different in some directions. Instead, they have two completely different ways of calculating the rate of expansion. One uses the cosmic microwave background radiation left over from the Big Bang. The other uses Cepheid variable stars.

The problem is that the Cepheid calculation comes out much higher than the CMB one. Both show the universe is expanding, but they give radically different numbers for that rate of expansion.
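
For concreteness, these are roughly the numbers involved (quoted from memory, so treat them as approximate):

```latex
% Hubble's law: recession velocity grows linearly with distance.
v = H_0 \, d
% The two methods disagree on the constant itself:
H_0^{\text{CMB}} \approx 67 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}
\qquad \text{vs.} \qquad
H_0^{\text{Cepheid}} \approx 73 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}
```

A difference of a few km/s/Mpc sounds small, but as I understand it, the error bars on both measurements are tight enough that they shouldn’t miss each other.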

So, it’s not that the expansion isn’t spherical. It’s that we fundamentally don’t understand something well enough to nail down what that expansion rate is.

[–] phase_change@sh.itjust.works 1 points 1 year ago (1 children)

As a first book, I think Children of Time is much better than Shards of Earth. I enjoyed both series, though I’d say the third book in each was the weakest, with The Final Architecture having the slightly stronger third entry.

And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.

Ideally, you’ve got a learned individual using AI to process data more efficiently, but one who is smart enough to toss out the crap and review the output with a critical eye. I suspect the reality is that most individuals using AI will just pass its output along uncritically.

I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.

 

I’m a guy approaching 60, so I’ll start by saying my perception may be wrong. It could be that the protest songs from the late ’60s and early ’70s weren’t the songs I heard live on the radio at the time, but the successful ones that got replayed. More likely, it’s that music today is far more fragmented than the radio I grew up with, so I’m simply not exposed to the protest songs that still exist.

Whatever the reason, I feel that the zeitgeist of protest music is very different from the first decade of my life compared to the last.

I’m curious to know why. My conspiratorial thoughts say that it’s down to the money behind music promotion being very different over those intervening decades, but I suspect it’s much more nuanced.

So, why are there fewer protest songs? Alternatively, why am I not aware of recent ones?

[–] phase_change@sh.itjust.works 10 points 1 year ago (3 children)

Spock, Uhura, Chapel, heck, even M’Benga don’t make it a prequel, but a Lieutenant Kirk does?

Because most people aren’t technical enough to understand there are alternatives, particularly if those alternatives involve removing a scary label telling you not to.

 

So, I’ve been self-hosting for decades, but on physical hardware. I’ve had things like MythTV and an Asterisk VoIP system, but those have been abandoned for years. I’ve got a web server, but it’s serving static content that’s only viewed by bots and attackers.

My mail server, which has been active for more than two decades, is still in daily use.

All of this makes me weird in the self-hosted community.

About a month ago, I put in a beefy system for virtualization with the intent to start branching out my self-hosting. I primarily considered Proxmox and xcp-ng and went with xcp-ng, mainly because it seems to have more enterprise features. I’m early enough in my exploration that switching isn’t a problem.

For those of you further along with a home-lab hypervisor, what did you go with and why? Right now, I’m pretty agnostic: I’m comfortable with xcp-ng but have no problem switching. I’m especially interested in strongly negative views of one or the other, so long as you explain why.

Kids these days with their containers and their pipelines and their devops. Back in my day…

Don’t get me started about the internal devs at work. You’ve already got me triggered.

And, I can just imagine the posts they’re making about how the internal IT slows them down and causes issues with the development cycle.

 

TL;DR: old guy wants logs and more security in Docker setups. Doesn’t want to deal with the modern world.

I’m on the sh.itjust.works Lemmy instance. I don’t know how to reference another community’s thread so that the link works for everyone, so my apologies for pointing at sh.itjust.works, but my thoughts here are inspired by https://sh.itjust.works/post/54990 and my attempts to set up a Lemmy server.

I’m old school. I’m in my mid-50s. I was in academia as a student and then an employee from the mid-’80s through most of the ’90s. I’ve been in IT in the private sector since the late ’90s.

That means I was actively using IRC and Usenet before HTTP existed. I’ve managed publicly facing mail and web servers in my job since the ’90s, and I’ve run personal mail and web servers since the early ’00s. I even had a static HTML page that was the number one Google hit for an obscure financial search term for much of the 2000s. The referrer IPs and search terms in those logs could probably have been mined for data.

On the work side, I’ve seen multiple email account compromises. (I’d note zero when it was on-prem Lotus Notes; all of the compromises came after moving to O365. Those stopped for years once we moved to MFA, but this year we’ve seen two where the bad actors were able to MitM the MFA. That said, I don’t regret no longer supporting an on-prem Domino server: https://m.youtube.com/watch?v=Bk1dbsBWQ3k )

I’ve also seen a sophisticated vendor typosquatting email, combined with an internal email compromise, cost us significant cash.

Other than email compromise, I’m not aware of any other intrusions. (There are two kinds of companies: those that know they’ve been hacked and those that don’t.) I’m friends with some IT people at a company that was ransomwared, and I still believe they have a tighter security stack than we do.

I’m paranoid about security because, like Farmers, I’ve seen a thing or two. At work, we keep logs for a year, dumped into a SIEM designed so that bad actors are unlikely to get into it even if they take over AD or VMware. My home logging is less secure but still extensive. The idea is that even if I’m hit, I’ll have the logs to understand how, and how extensively.

I still have public websites at home, but they don’t contain any content that matters. The only traffic they see is attack attempts and indexers that will index them and then shove them down into oblivion. I’m fine with that.

I still run a mail server at home. It’s mostly used so all my unique email addresses (sh.itjust.works@foo.com) can get forwarded to my personal O365 instance. If I need to reply from a unique address, I use Alpine in an SSH session.

That’s a long prologue to explain my experience playing with a Lemmy instance this weekend. I’ve got an xcp-ng host in the home lab and used it to get a Lemmy Docker instance running. It’s not yet exposed to the outside world.

I’m new to Docker. I’m new to Lemmy. I’m new to Nginx. (See the “old school” in the title.) At work and at home, I deal with Apache. I’ve got custom mod_rewrite rules and mod_security in place to deal with many attacks, and I’m comfortable tweaking both when a website breaks because of some rule.

I’ve tried putting an Apache proxy in front of my xcp-ng Lemmy instance, but it won’t work because Lemmy assumes an initial contact via HTTP/1.1 that gets upgraded with a 101 status code. Apache can proxy either protocol but not the upgrade between them, and Lemmy isn’t happy if the initial connection is HTTP/2.0.
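
For anyone hitting the same wall, here’s a rough sketch of that initial handshake as I understand it. The host, port, endpoint path, and upgrade target (websockets) are my guesses; the point is only that the 101 has to come back over the same HTTP/1.1 connection, which is the part the proxy has to pass through untouched.

```python
# Rough sketch of the HTTP/1.1 Upgrade handshake. Host, port, and the
# endpoint path are placeholders for a local Lemmy backend, and the
# upgrade target (websocket) is my guess at what Lemmy uses.
import base64
import os
import socket

HOST, PORT = "lemmy.example.internal", 8536  # hypothetical backend

key = base64.b64encode(os.urandom(16)).decode()
request = (
    "GET /api/v3/ws HTTP/1.1\r\n"  # endpoint path is a guess
    f"Host: {HOST}\r\n"
    "Connection: Upgrade\r\n"
    "Upgrade: websocket\r\n"
    "Sec-WebSocket-Version: 13\r\n"
    f"Sec-WebSocket-Key: {key}\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request.encode())
    reply = sock.recv(4096).decode(errors="replace")

# Expect "HTTP/1.1 101 Switching Protocols"; a proxy that speaks
# HTTP/2.0 on this hop never produces that status line.
print(reply.splitlines()[0])
```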

I’m also uncomfortable with my lack of knowledge regarding Nginx. I don’t know how to recreate my mod_rewrite rules, and I don’t think there’s an equivalent to mod_security.

Worse, I don’t see an easy way to retain Docker logs. Yes, I can likely use volumes in a docker-compose.yml to retain them, but it’s far from clear which paths I’d need to mount.
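
As a stopgap, I could periodically snapshot `docker logs` output to the host, something like the sketch below (the container name and archive directory are placeholders):

```python
# Stopgap sketch: snapshot a container's logs to the host so they
# survive the container being rebuilt. Name and paths are placeholders.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

CONTAINER = "lemmy"                        # hypothetical container name
ARCHIVE = Path("/var/log/docker-archive")  # hypothetical archive path

ARCHIVE.mkdir(parents=True, exist_ok=True)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

# `docker logs` returns whatever the configured logging driver kept;
# with the default json-file driver that's the container's stdout/stderr.
result = subprocess.run(
    ["docker", "logs", "--timestamps", CONTAINER],
    capture_output=True,
    text=True,
    check=True,
)
(ARCHIVE / f"{CONTAINER}-{stamp}.log").write_text(result.stdout + result.stderr)
```

Run it from cron and you at least have something, but it’s duct tape, not a SIEM.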

I know all of these are solvable concerns with some effort, but I suspect few put in that effort.

How do all of you who run containers in a home lab sleep at night knowing all that log data is ephemeral unless you take special effort? How do you sleep knowing the sample configs you’re using have little security built in?

[–] phase_change@sh.itjust.works 0 points 1 year ago (1 children)

Yep. I’ve hosted my own mail server since the early oughts. One additional hurdle I’d add to your list is rDNS. If you can’t get that set up, you’ll have a hard time reaching many mail servers. Besides port blocking, that’s one of the many reasons it’s a non-starter on a consumer ISP.
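
A quick sketch of the forward-confirmed check that receiving servers effectively do (the IP below is a documentation-range placeholder; substitute your mail server’s static address):

```python
# Forward-confirmed rDNS check: PTR lookup on the IP, then an A lookup
# on the resulting name, which should include the same IP again.
import socket

MAIL_IP = "203.0.113.25"  # hypothetical static IP of the mail server

try:
    ptr_name, _, _ = socket.gethostbyaddr(MAIL_IP)      # PTR lookup
    forward_ips = socket.gethostbyname_ex(ptr_name)[2]  # A records for that name
    status = "OK" if MAIL_IP in forward_ips else "MISMATCH"
    print(f"{MAIL_IP} -> {ptr_name} -> {forward_ips}: {status}")
except (socket.herror, socket.gaierror):
    print(f"no usable PTR for {MAIL_IP}; expect delivery problems")
```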

I actually started on a static ISDN line back when rDNS wasn’t an issue for running a mail server. I moved to business-class DSL, and Ameritech actually delegated rDNS to me for my /29. When I moved to Comcast Business, they wouldn’t delegate the rDNS for the IPv4 block, though they did create rDNS entries for me and did delegate the rDNS for the IPv6 block. That said, the way they handle the /56 IPv6 block means only the first /64 is usable for rDNS.

But everything you list is something I’ve had to deal with over the years.

 

It’s not even June 12 for me, yet I suspect many subreddits went dark based on UTC.

I moved to Reddit during the Digg migration, so I got the default subscriptions from back in the day. Over the years, I’ve unsubscribed from things I felt were crap and added a number of subreddits.

Already, many have gone dark. My old.reddit.com homepage already looks much different than normal, and I know that a few subreddits still showing have announced they’ll go dark. I assume they’re US based and timing it locally.

I’ve spent more time in the Lemmy fediverse than on Reddit since joining, but I’ve spent time on both.

I’ll admit to cynical skepticism of the impact of the darkening. I still don’t think it will make a difference in Reddit policy, but I now believe it will have a larger impact on Reddit traffic than I imagined.

I still expect no change in Reddit’s attitude, or really in Reddit’s users.

 

I signed up and am currently logged in via an iPad. I wanted to browse and post on a computer. I’ve tried multiple browsers and incognito modes. With all of them, when signing into sh.itjust.works, I get nothing but the spinning button after clicking login.

I’m not sure if it’s a capacity issue or if Lemmy doesn’t allow the same user to be logged in via multiple browsers.

I’m a bit scared to log out and see if that’s the case.

Anyone have any insight?

 

Yes, I’m certain I could find answers to all these questions via research, but I’m coming here as part of the Reddit diaspora. My guess is that there’s a benefit to others like me in having this discussion.

I can vaguely understand the federation concept: my account is hosted on an individual Lemmy server, and other servers trust that one to validate my account. But what’s the network flow like? I’m posting this to the /asklemmy community on lemmy.ml, but I’m composing it in the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?
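
To make my mental model concrete, this is roughly the server-to-server hop I’m picturing. The payload shape and inbox URL are my guesses at the ActivityPub flow, not anything I’ve checked against Lemmy’s source.

```python
# My guess at the federation hop: after sh.itjust.works accepts my
# post, it delivers an activity to lemmy.ml's inbox. The payload and
# URL are illustrative; real deliveries are also HTTP-signed.
import requests

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://sh.itjust.works/u/phase_change",
    "to": ["https://lemmy.ml/c/asklemmy"],
    "object": {
        "type": "Page",
        "attributedTo": "https://sh.itjust.works/u/phase_change",
        "content": "What's the network flow like?",
    },
}

# My browser only ever talks to sh.itjust.works; this server-to-server
# POST is the part federation adds.
requests.post("https://lemmy.ml/inbox", json=activity, timeout=10)
```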

Assuming there’s a mass influx of redditors, what does it look like as things fail? I’m assuming some servers can keep up under the load and some can’t. If sh.itjust.works goes down under the load, can I still browse other servers? Or will those servers expect some token from sh.itjust.works, because my cookies say I’m still logged in, and I won’t even be able to do that?

Are there easy mechanisms to allow me to grab my post history?

I’m assuming most (all?) Lemmy servers are hosted in home labs. The idea of Lemmy excites me, but the growth pain that could be coming scares me. Is anybody using a CDN in front of their servers? That could be good, but with unconstrained growth it could be costly, which is very bad.

I can imagine lots of different worst-case scenarios, but I’m curious what those of you who run servers imagine for the best case. Manageable growth that just builds more vibrant communities, even if it can’t ever match the breadth and variety of Reddit?

Also, for those running servers, have any of you experienced issues during this growth? What scares you?
