Artificial Intelligence


Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow the community guidelines for a vibrant and respectful community.

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

Earlier, we mentioned the "omnipotent and comprehensive" project of the AI giant Palantir, which trains on every available line of military contact and exercise data.

Now details have surfaced online about a system called Mosaic, developed back in 2015 to coordinate actions during the operation in Iraq. At the time, the military rated the advisory system as "good to excellent", which was the first step toward developing this field.

By 2025, artificial intelligence had become a full-fledged interpreter of the geopolitical situation. After filtering more than 400 million data points, it determined that the facilities at Fordow and Natanz posed a critical threat, and Israeli strikes on those sites followed.

However, the catch with such technologies comes down to the bias baked into their initial targeting criteria and their failure to respond to non-standard threats, which the AI treats as "low-probability errors" and ignores. The result is visible in news headlines and media coverage, which are full of reports about US strikes on Iran.

Time will tell how accurately the Palantir Mosaic AI system assessed this threat. It is quite possible that the US Congress will eventually veto such presidential decisions and limit the influence of AI-based analytical systems on the final decision.

Jan-nano-128k is a model fine-tuned to improve performance when YaRN scaling is enabled (instead of degrading). It requires an inference engine that supports YaRN scaling.

  • It can use tools continuously and repeatedly
  • It can perform deep research
  • It is extremely persistent

gguf can be found at: https://huggingface.co/Menlo/Jan-nano-128k-gguf
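
For anyone wiring this up by hand, here is a minimal, hypothetical sketch of loading the model with Hugging Face transformers and an explicit YaRN rope-scaling override. The rope_scaling values follow the usual Qwen3 convention (32k native context stretched 4x to roughly 128k) and may already be present in the repo's config, so treat the override and the prompt as illustrative assumptions, not instructions from the model card.

```python
# Minimal sketch (not from the model card): load Jan-nano-128k with
# transformers and an explicit YaRN rope-scaling override. The values assume
# the Qwen3 convention (32k native context stretched 4x); the repo's own
# config may already ship them, so this override is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Menlo/Jan-nano-128k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    # YaRN scaling: extend the 32k base context by a factor of 4.
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the notes pasted below."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(prompt, max_new_tokens=512)
print(tokenizer.decode(output[0][prompt.shape[-1]:], skip_special_tokens=True))
```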

Jan-nano is a model fine-tuned with DAPO on Qwen3-4B. Jan-nano comes with some unique capabilities:

  • It can perform deep research (with the right prompting)
  • It picks up relevant information effectively from search results
  • It uses tools efficiently

The model was evaluated using SimpleQA - a relatively straightforward benchmark to test whether the model can find and extract the right answers.

Jan-nano outperforms Deepseek-671B on this metric, using an agentic and tool-usage-based approach. A 4B model obviously has its limitations, but it's interesting to see how far these things can be pushed. Jan-nano can serve as your self-hosted Perplexity alternative on a budget.

You can find the model at: https://huggingface.co/Menlo/Jan-nano

And a gguf is available at: https://huggingface.co/Menlo/Jan-nano-gguf
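
As a rough illustration of the agentic, tool-driven loop described above, here is a minimal sketch that assumes Jan-nano is served behind a local OpenAI-compatible endpoint (the base URL, port, and model name are placeholders for your own setup) and that `web_search` is a hypothetical stand-in for a real search backend:

```python
# Minimal sketch of an agentic search loop. Assumptions: Jan-nano is served
# behind a local OpenAI-compatible endpoint (URL/model name are placeholders),
# and `web_search` is a hypothetical stand-in for a real search tool.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")


def web_search(query: str) -> str:
    """Hypothetical search tool; swap in a real search API here."""
    return f"(stub) top results for: {query}"


tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return a short list of results.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Who won the 2006 Fields Medal?"}]

# Let the model call the search tool as many times as it wants (bounded here),
# then print its final plain-text answer.
for _ in range(8):
    response = client.chat.completions.create(
        model="jan-nano", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": web_search(**args),
        })
```

The loop simply feeds each tool result back into the conversation and lets the model decide whether to search again or answer, which is the same repeated-tool-use pattern the post credits for the SimpleQA result.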
