Artificial Intelligence


Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Please follow the community guidelines to keep discussion vibrant and respectful.

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!


Artificial intelligence (AI) is a pivotal catalyst for global innovation, with the United States at the forefront of developing this transformative technology amid its ongoing great-power rivalry with China. However, a notable concern has emerged: the absence of an explicit conception of AI supremacy, which threatens to undermine the United States' long-term AI strategy. The notion of AI supremacy has traditionally been difficult to define, paralleling disputes over whether competition over AI is a “race.” This report thus aims to accomplish two objectives: first, to define AI supremacy and anchor the concept in the realities of the AI competition thus far; and second, to revise US AI strategy in accordance with a more comprehensive understanding of AI supremacy.

The AI race, unsurprisingly, has drawn in actors from the Middle East. The United Arab Emirates (UAE) and Saudi Arabia, especially, are pursuing indigenous AI ecosystems, each throwing its capital behind stated national AI aims in a bid for the regional upper hand. This report attempts to steer the conversation on the global AI race toward a comprehensive conception of AI supremacy anchored in the realities of international affairs and US-China great-power competition.


Disable JavaScript for the best experience.


Abstract

Artificial Intelligence (AI) has become a disruptive technology, promising to grant a significant economic and strategic advantage to nations that harness its power. China, with its recent push towards AI adoption, is challenging the U.S.'s position as the global leader in the field. Given AI's massive potential, as well as the fierce geopolitical tensions between China and the U.S., several recent policies have been put in place to discourage AI scientists from migrating to, or collaborating with, the other nation. Nevertheless, the extent of talent migration and cross-border collaboration is not fully understood. Here, we analyze a dataset of over 350,000 AI scientists and 5,000,000 AI papers. We find that since 2000, China and the U.S. have led the field in terms of impact, novelty, productivity, and workforce. Most AI scientists who move to China come from the U.S., and most who move to the U.S. come from China, highlighting a notable bidirectional talent migration. Moreover, the vast majority of those moving in either direction have Asian ancestry. Upon moving, these scientists continue to collaborate frequently with colleagues in their country of origin. Although the number of collaborations between the two countries has increased since the dawn of the millennium, such collaborations remain relatively rare. A matching experiment reveals that the two countries have always been more impactful when collaborating than when each works without the other. These findings suggest that instead of suppressing cross-border migration and collaboration between the two nations, science could benefit from promoting such activities.
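As a rough illustration of what a "matching experiment" means here, the sketch below pairs each collaborative paper with a non-collaborative paper from the same publication year and compares citation counts across pairs. This is a toy C++ sketch with invented numbers, not the paper's method or data; a real study would also match on venue, topic, and team size.

```cpp
// Toy matching experiment: compare citations of US-China collaborative
// papers against year-matched non-collaborative controls. Illustrative only.
#include <iostream>
#include <map>
#include <vector>

struct Paper {
    int year;
    bool collaborative;  // has both US-based and China-based authors
    int citations;
};

int main() {
    // Invented data standing in for the real bibliometric dataset.
    std::vector<Paper> papers = {
        {2010, true, 40}, {2010, false, 25}, {2015, true, 80},
        {2015, false, 60}, {2020, true, 30}, {2020, false, 22},
    };

    // Bucket controls by year so each collaborative paper is compared
    // against a paper sharing the same confounder value.
    std::map<int, std::vector<int>> controls;
    for (const auto& p : papers)
        if (!p.collaborative) controls[p.year].push_back(p.citations);

    double diff_sum = 0.0;
    int matched = 0;
    for (const auto& p : papers) {
        if (!p.collaborative) continue;
        auto it = controls.find(p.year);
        if (it == controls.end() || it->second.empty()) continue;
        diff_sum += p.citations - it->second.back();
        it->second.pop_back();  // consume the matched control
        ++matched;
    }

    if (matched > 0)
        std::cout << "mean citation advantage of collaboration: "
                  << diff_sum / matched << "\n";
}
```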

submitted 2 days ago* (last edited 2 days ago) by Joker@sh.itjust.works to c/ai_@lemmy.world
 
 
  1. Cracking the 50-year “grand challenge” of protein structure prediction.
  2. Showing the human brain in unprecedented detail, to support health research.
  3. Saving lives with accurate flood forecasting.
  4. Spotting wildfires earlier to help firefighters stop them faster.
  5. Predicting weather faster and with more accuracy.
  6. Advancing the frontier of mathematical reasoning.
  7. Using quantum computing to accurately predict chemical reactivity and kinetics.
  8. Accelerating materials science and the potential for more sustainable solar cells, batteries and superconductors.
  9. Taking a meaningful step toward nuclear fusion — and abundant clean energy.

The economics of how tech jobs get created and how layoffs happen is worth understanding if you're not sure where AI fits in.

00:00 Previously on @InternetOfBugs vs AI
01:50 Caveats: US Only, not Gaming
02:33 Building vs Maintaining
03:46 How Projects get funded
05:08 Hiring process
07:42 How the last decade or so has been unusual
10:26 How AI might change that
11:19 Enter the Stock Market
12:47 How Layoffs get decided on
15:58 How to ride out the apparent downturn
21:37 Bad advice from people who have never experienced a downturn
24:38 Resources on How to look for a job

#internetOfBugs


The artificial intelligence (AI) industry is facing a critical diversity crisis, with women severely underrepresented across all seniority levels. This data brief quantifies the multifaceted underrepresentation of women in the global and European Union (EU) AI talent pool, highlighting the pressing need for targeted interventions to increase female participation in this rapidly expanding field.

Our analysis of data on nearly 1.6 million AI professionals worldwide reveals stark gender imbalances. Women comprise only 22% of AI talent globally, with even lower representation at senior levels, occupying less than 14% of senior executive roles in AI. Within the EU, the disparity is equally concerning. Europe has closed 75% of its overall gender gap, with Sweden and Germany among the top five European economies on that measure. However, our data reveals a stark contrast in the AI sector: Germany and Sweden have some of the lowest female representation in their AI workforces in the EU, at 20.3% and 22.4% respectively. This discrepancy raises serious questions about the unique barriers women face in the AI field.


Imagine using artificial intelligence to compare two seemingly unrelated creations — biological tissue and Beethoven’s “Symphony No. 9.” At first glance, a living system and a musical masterpiece might appear to have no connection. However, a novel AI method developed by Markus J. Buehler, the McAfee Professor of Engineering and professor of civil and environmental engineering and mechanical engineering at MIT, bridges this gap, uncovering shared patterns of complexity and order.


[PDF] Report.

This is kind of interesting to read.


Abstract

Large language models have attracted considerable attention in the research community recently, especially with the introduction of practical tools such as ChatGPT and GitHub Copilot. Their ability to solve complex programming tasks has been demonstrated in several studies and commercial solutions, increasing interest in using them for software development in different fields. High-performance computing is one such field, where parallel programming techniques have long been used to exploit the raw computing power of contemporary multicore and manycore processors. In this paper, we evaluate ChatGPT and GitHub Copilot for OpenMP-based code parallelization using a proposed methodology. We used nine benchmark applications representing typical parallel programming workloads and compared their OpenMP-based parallel solutions, produced manually and with ChatGPT and GitHub Copilot, in terms of obtained speedup, applied optimizations, and quality of the solution. ChatGPT 3.5 and GitHub Copilot installed with Visual Studio Code 1.88 were used. We conclude that both tools can produce correct parallel code in most cases. Performance-wise, however, ChatGPT can match manually produced and optimized parallel code only in simpler cases, as it lacks a deeper understanding of the code and its context. The results are much better with GitHub Copilot, where much less effort is needed to obtain a correct and performant parallel solution.
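For readers unfamiliar with OpenMP, here is a minimal, self-contained example of the kind of loop parallelization the paper asks these tools to produce (an illustrative sketch, not code taken from the paper or its benchmarks):

```cpp
// Illustrative OpenMP loop parallelization (not from the paper).
// Build with: g++ -fopenmp dot.cpp && ./a.out
#include <cstdio>
#include <vector>

int main() {
    const long n = 10000000;
    std::vector<double> a(n, 1.5), b(n, 2.0);
    double sum = 0.0;

    // The reduction clause gives each thread a private partial sum and
    // combines them at the end, avoiding a data race on `sum`.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < n; ++i)
        sum += a[i] * b[i];

    std::printf("dot product = %f\n", sum);
}
```

Without the `-fopenmp` flag the pragma is simply ignored and the loop runs serially, which is part of why this incremental style of parallelization lends itself to the benchmark-and-compare methodology described above.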


First of all, let me explain what "hapax legomena" are: words (and, by extension, concepts) that occur just once throughout an entire corpus of text. An example is the word "hebenon", which occurs just once in Shakespeare's Hamlet; "hebenon" is therefore a hapax legomenon. The "hapax legomenon" concept itself is a kind of hapax legomenon, IMO.

According to Wikipedia, hapax legomena are generally discarded from NLP as they hold "little value for computational techniques". By extension, the same applies to LLMs, I guess.
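To make that concrete, here is a minimal C++ sketch (toy corpus; the cutoff is just an assumed typical setting) of how a frequency count identifies hapax legomena, and why a minimum-frequency cutoff is exactly the preprocessing step that throws them away:

```cpp
// Identify hapax legomena (words occurring exactly once) in a toy corpus.
// Requires C++17 for structured bindings: g++ -std=c++17 hapax.cpp
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>

int main() {
    // Toy stand-in for a real corpus; "hebenon" occurs exactly once.
    const std::string corpus =
        "with juice of cursed hebenon in a vial and in the porches of my ears";

    std::unordered_map<std::string, int> freq;
    std::istringstream words(corpus);
    for (std::string w; words >> w;) ++freq[w];

    for (const auto& [word, count] : freq)
        if (count == 1)  // a typical min-frequency cutoff (>= 2) would drop these
            std::cout << "hapax: " << word << '\n';
}
```

This is the kind of cutoff the Wikipedia claim refers to; whether and how a given LLM's tokenizer and training pipeline discard rare strings varies by model.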

While "hapax legomena" originally refers to words/tokens, I'm extending it to entire concepts, described by these extremely unknown words.

I am a curious mind, actively seeking knowledge, constantly trying to learn a myriad of "random" topics across the many fields of human knowledge, especially rare/unknown concepts (that's how I learnt about "hapax legomena", for example). I use three LLMs on a daily basis (GPT-3, Llama and Gemini), expecting to get to know words, historical/mythological figures and concepts unknown to me, lost in the vastness of human knowledge, but I now know, according to Wikipedia, that general LLMs won't point me to anything "obscure" enough.

This leads me to wonder: are there LLMs and/or NLP models/datasets that do not discard hapax legomena? Are there LLMs that favor less frequent data over more frequent data?


You heard me. I'm curious: I know there are all those dumbass deepnude programs, but has anyone actually tried to make a model that takes images of nude humans and puts clothing on them? I guess they don't have to be nude, but that does remove a lot of variables in the generation.

I think it would be an interesting little tool to try out new looks you never would really mess with otherwise.
