Veritas

joined 2 years ago
[–] Veritas@lemmy.ml 1 points 1 year ago (1 children)

Why don't you write what you actually want in the opening post instead of making people guess?

[–] Veritas@lemmy.ml 3 points 1 year ago (1 children)

The context window is still too short for any full story. The model just forgets old messages and only keeps the newest context.
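The forgetting described above is just context truncation: a chat client drops the oldest messages once the conversation exceeds the model's window. A minimal sketch of that behavior, with hypothetical token counts:

```python
# Sketch of context-window truncation: keep only the newest messages
# that fit within a fixed token budget (token counts are hypothetical).

def truncate_context(messages, max_tokens):
    """Return the newest suffix of `messages` whose total token
    count fits within `max_tokens`; older messages are dropped."""
    kept = []
    total = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = msg["tokens"]
        if total + cost > max_tokens:
            break                        # everything older is "forgotten"
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = [
    {"text": "chapter 1", "tokens": 900},
    {"text": "chapter 2", "tokens": 900},
    {"text": "latest reply", "tokens": 200},
]
print([m["text"] for m in truncate_context(history, 1000)])  # → ['latest reply']
```

With a 1000-token budget only the newest reply survives, which is why long-running stories lose their early chapters.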

[–] Veritas@lemmy.ml 26 points 1 year ago* (last edited 1 year ago) (3 children)

Embarrassing, considering how uncreative and unoriginal GPT-4 is. It's an actual struggle to get ChatGPT to think outside the box. Claude 2, on the other hand, is much better at it.

But if this is truly the case, it goes to show how unimaginative the general population is.

[–] Veritas@lemmy.ml -2 points 1 year ago (3 children)

Apparently they don't learn.

[–] Veritas@lemmy.ml 4 points 1 year ago* (last edited 1 year ago) (1 children)

if that doesn't work, unironically kill yourself before you end up helping Russians put trans people in a mass grave somewhere.

MonsieurHedge@kbin.social

I'll try to forget all this. I won't take advice from someone who goes around telling people to kill themselves.

[–] Veritas@lemmy.ml 0 points 1 year ago (5 children)

Can you tell me what Russian propaganda I believe in? I try to watch only independent sources.

[–] Veritas@lemmy.ml 0 points 1 year ago (7 children)

But I don't want to exchange US propaganda for Russian propaganda. I would prefer a way to know exactly who funds each news source, but that's probably not something they make public, so I don't think there is a website where I can check that easily.

 
  • Putin offering arms to other countries. Russian version of NATO.
  • ReSex offering sex to recovering Ukrainian veterans thanks to US taxpayer money.
  • Disinformation operation by Human Rights Watch to justify an invasion of Haiti.
  • Winter problems are coming for Europe.
 

After watching the documentary "JFK to 9/11: Everything Is a Rich Man's Trick", I just learned about Operation Mockingbird, and I don't know what I can trust anymore. I already distrusted mainstream media because of how one-sided they became when covering the Ukraine-Russia conflict, but after learning the CIA bought their way into every relevant news source, I just don't know what I can believe anymore. Not every event is as relevant as JFK's assassination or 9/11, so they don't have people going to such lengths to fact-check every detail. And if media outlets make it a habit to mix truth with lies, how can anyone ever figure out what is true and what isn't in daily events?

14
submitted 1 year ago* (last edited 1 year ago) by Veritas@lemmy.ml to c/videos@lemmy.ml
 

Today I learned about the Business Plot of 1934, a political conspiracy in the United States to overthrow the government of President Franklin D. Roosevelt. Retired Marine Corps Major General Smedley Butler asserted that wealthy businessmen were plotting to create a fascist veterans' organization with Butler as its leader and use it in a coup d'état against Roosevelt. The plot was uncovered by the McCormack-Dickstein Committee in November 1934, but the businessmen involved denied the allegations, and no charges were ever filed against men considered too powerful to punish. This event highlights the influence of wealthy businessmen in American politics and the potential for corruption and abuse of power.

I learned about this watching Everything Is a Rich Man’s Trick - Full Documentary

[–] Veritas@lemmy.ml 7 points 1 year ago* (last edited 1 year ago) (3 children)

No, I can't. At least not on the client I use.

 

AI summary

In a thought-provoking exchange, tech maverick George Hotz engages in a wide-ranging discussion about the implications of AI intelligence. Addressing technical and ethical complexities, the conversation touches on AI control, human alignment, coordination, and potential risks. Debates arise on the pace of AGI development, its impact on society, and strategies for regulating power and achieving desired futures. Perspectives on power-seeking AI behavior diverge, with one side viewing it as a natural outcome of optimization and the other challenging its default existence. The role of AI in human interaction is also debated, with one stance advocating caution and manipulation awareness, while the opposing view portrays AI as a potential friend. Amidst differing viewpoints, the significance of AI's impact is undeniable, as the discourse navigates the intricate landscape of its ethical and technical implications.

Partial summaries

Get ready to meet George Hotz, the tech maverick of Silicon Valley known for his audacious exploits and enigmatic persona. Hotz has outsmarted major corporations, hacked iPhones, and conquered the PlayStation 3. He believes in the inevitable rise of AI intelligence and is determined to ensure its distribution is equitable. He's building super-fast AI with his startup, tinygrad, and fears concentrated power in a few unaligned entities, not AI's increasing intelligence. Connor Leahy, an AI safety advocate, shares concerns about misuse and S-risks but agrees with Hotz on many points. Their discussion dives into alignment and the potential catastrophic misuse of AI, delving into technical and ethical complexities.

The discussion revolves around two main points. Firstly, the speaker believes that solving the technical problem of AI control is challenging and sees a deadline for its resolution. Secondly, they address the idea of human alignment, emphasizing the extent of coordination among humans in the modern world and suggesting that coordination is a technology that can be developed further. They argue against the practicality of achieving widespread individual sovereignty due to existing power structures and the fear that prevents meaningful change, highlighting the complex nature of political realities and individual choices.

The discussion touches on the notion of maximizing trade surplus, addressing aesthetics, coordination, and societal systems. The interplay between individual sovereignty, fear-based domination, and the balance between work and leisure are debated. Different viewpoints on capitalism, governance, military power, and inefficiencies in various systems are explored. It becomes clear that the conversation centers on the complexity of human coordination, values, and the potential impact of AI.

The transcript delves into the concept of AI risk, focusing on the potential for AGI to emerge quickly (hard takeoff) or more gradually (soft takeoff). The debate centers around the danger of rapid AGI advancement and whether regulatory interventions are needed to control computational power. The discussion raises concerns about the potential for AGI to quickly outpace human intelligence and the associated risks, with viewpoints ranging from the need for immediate caution to the belief that the current pace of AGI development is manageable and far from existential risks. Additionally, concerns are voiced regarding AI's impact on society, including psychological operations and the potential for societal vulnerabilities.

The dialogue explores the vulnerability of AI systems to memetic manipulation and the potential risks they pose. It debates the challenges of alignment and exploitability and delves into the notion that multiple diverse and inconsistent AI systems could contribute to societal stability by countering extreme actions. The conversation discusses the uncertainty of AI's impact on society, contrasting the concepts of AI stability with the current stability in the world, ultimately raising questions about the role of AI in society's dynamics.

The discussion centers on strategies to achieve desired futures while considering personal preferences and world stability. One proposal involves regulating the total compute power accessible to one entity to prevent undue concentration, while another considers open-sourcing software to maintain balance. The conversation underscores the challenge of aligning AI with individual values and navigating the unpredictable complexities of influencing global outcomes.

The discussion revolves around the potential risks associated with powerful technology, such as AI and destructive devices. The focus is on coordinating actions to prevent catastrophic outcomes. While acknowledging the risks of both widespread distribution and centralized control, the conversation highlights the need for effective coordination and the challenges of achieving it.

The discussion delves into the potential power-seeking behavior of advanced AI systems. While one perspective sees power seeking as a natural outcome of optimization, the other side argues that it might not be a default behavior and could be influenced by how the AI's goals are defined. There's also debate about how AI might interact with humans, with one side emphasizing caution and potential manipulation, while the other side sees AI as a friend that can be treated well and reasoned with. Despite differing viewpoints, both parties acknowledge the significance of AI's impact and the complexity of its ethical and technical implications.

 

I'm looking for an automated way to find which instance blocks another instance. Instead of manually checking the blocklists of multiple instances until I find the one that blocks the instance I don't want to see, I'm wondering if there is a more efficient, automated method.
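One automated approach is to query each instance's public federation info directly. The sketch below assumes each Lemmy instance exposes `GET /api/v3/federated_instances` returning a `federated_instances` object with a `blocked` list of `{"domain": ...}` entries; the endpoint name and response shape are assumptions and may differ across Lemmy versions:

```python
# Sketch: search a set of Lemmy instances for ones that block a target domain.
# Endpoint path and JSON shape are assumptions; verify against your Lemmy version.

import json
import urllib.request

def blocks_target(federated: dict, target: str) -> bool:
    """Check whether a federated_instances payload lists `target` as blocked."""
    blocked = federated.get("federated_instances", {}).get("blocked", [])
    return any(entry.get("domain") == target for entry in blocked)

def find_blockers(instances, target):
    """Return the instances whose public blocklist contains `target`."""
    blockers = []
    for host in instances:
        url = f"https://{host}/api/v3/federated_instances"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except OSError:
            continue  # unreachable instance or bad response; skip it
        if blocks_target(data, target):
            blockers.append(host)
    return blockers
```

Usage would look like `find_blockers(["lemmy.ml", "kbin.social"], "example.com")`, looping the check over however many instances you care about instead of inspecting each blocklist page by hand.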

 

AI summary:

In the past decade, AI has rapidly evolved, achieving feats like surpassing human abilities in games, image recognition, and speech. Progress is driven by increased computing power, more data, and improved algorithms. Moore's Law has lowered the cost of computing, enabling larger models. Companies invest heavily in training AI models, while AI developers tap into vast amounts of data to improve accuracy. Algorithmic advancements make better use of resources, compensating for limitations. Experts predict AI progress will persist due to growing compute, efficient data use, and algorithmic innovation. Concerns arise about misuse and potential havoc in fields like cybersecurity and biology as AI knowledge becomes more accessible.

 

It's been a while since it stopped working. Does anyone know if they are still looking for another domain or have shut down permanently? I've found myself going back to Reddit more often, since the community I frequented the most was on that instance.
