• DeepSeek and Taiwan
  • DeepSeek my beloved
  • Apple's latest AI research challenges the hype around Artificial General Intelligence (AGI), revealing that today’s top models fail at reasoning tasks once complexity increases. By designing new logic puzzles insulated from training data contamination, Apple evaluated models like Claude Thinking, DeepSeek-R1, and o3-mini. The findings were stark: model accuracy dropped to 0% on harder tasks, even when given clear step-by-step instructions. This suggests that current AI systems rely heavily on pattern matching and memorization, rather than actual understanding or reasoning.

    The research outlines three performance phases—easy puzzles were solved decently, medium ones showed minimal improvement, and difficult problems led to complete failure. Neither more compute nor prompt engineering could close this gap. According to Apple, this means that the metrics used today may dangerously overstate AI’s capabilities, giving a false impression of progress toward AGI. In reality, we may still be far from machines that can truly think.

    #AppleAI #AGIRealityCheck #ArtificialIntelligence #AIResearch #MachineLearningLimits
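
    As a rough illustration of the kind of evaluation described above (a minimal sketch, not Apple's actual harness), the snippet below generates Tower of Hanoi prompts of increasing size, one of the controllable-complexity puzzles reported in the paper, and replays the model's proposed moves against a simulator so accuracy can be read off per complexity level. Here query_model is a hypothetical placeholder for whatever model API is being tested.

    # Minimal sketch, not Apple's evaluation code: score Tower of Hanoi answers
    # as puzzle size grows. query_model() is a hypothetical stand-in for the
    # model API under test.
    from typing import List, Tuple

    def is_valid_solution(n_disks: int, moves: List[Tuple[int, int]]) -> bool:
        """Replay (source_peg, target_peg) moves and check they solve the puzzle."""
        pegs = [list(range(n_disks, 0, -1)), [], []]  # peg 0 holds all disks, largest at bottom
        for src, dst in moves:
            if not pegs[src]:
                return False                           # illegal: moving from an empty peg
            if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
                return False                           # illegal: larger disk onto a smaller one
            pegs[dst].append(pegs[src].pop())
        return pegs[2] == list(range(n_disks, 0, -1))  # solved when every disk sits on peg 2

    def query_model(prompt: str) -> List[Tuple[int, int]]:
        """Hypothetical placeholder: call the model being evaluated and parse its move list."""
        raise NotImplementedError

    def accuracy_by_complexity(max_disks: int, trials: int = 10) -> dict:
        """Fraction of trials with a valid move list, reported per disk count."""
        results = {}
        for n in range(1, max_disks + 1):
            prompt = f"Solve Tower of Hanoi with {n} disks; answer as (from_peg, to_peg) moves."
            solved = sum(is_valid_solution(n, query_model(prompt)) for _ in range(trials))
            results[n] = solved / trials
        return results

    Reported this way, the collapse the post describes would show up directly as accuracy falling to 0% beyond some disk count, rather than being averaged away into a single benchmark score.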
  • Apple’s latest AI study is stirring the pot by exposing serious cracks in the perceived reasoning power of today’s top language models. Researchers put major players like DeepSeek-R1 and OpenAI’s o3 to the test using classic logic puzzles, revealing that while these models handle easy tasks and short chains of logic, they falter badly as complexity increases. It is not that they lack knowledge; rather, they fail to plan ahead when it counts most.

    The team observed a dramatic “reasoning collapse” once tasks became too intricate, suggesting these models are excellent imitators, not problem-solvers. Despite having plenty of memory and token budget left, the models would abandon their reasoning mid-task or repeat patterns without adapting. Apple’s paper warns that today’s “reasoning models” may be more illusion than innovation, highlighting the gap between surface-level competence and true cognitive ability.

    #AIresearch #AppleAI #OpenAI #DeepSeek #ArtificialIntelligence
  • VIETNAM IS FULLY CAPABLE OF CREATING ITS OWN DEEPSEEK OR GPT

    BKAV CEO Nguyễn Tử Quảng said that AI has become a global trend thanks to advances in Machine Learning, Deep Learning, Big Data, and cloud computing, which are being integrated into many fields such as healthcare, finance, education, manufacturing, transportation, cybersecurity, and more.

    In Vietnam, Mr. Quảng asserted that Vietnamese people are skilled at GenAI and believes that Vietnam is fully capable of creating its own DeepSeek or GPT.

    Source: Markettimes