DeepSeek's impact on AI market

Published Jan. 28, 2025, 23:58 | KBS News 9


[Anchor]

When the former Soviet Union launched Sputnik, the world's first artificial satellite, the shock felt by the United States was immense, and some are now comparing the impact DeepSeek has had on the U.S. to that moment.

Reporter Kim Ji-sook has investigated the characteristics of DeepSeek, summarized as low-cost and high-performance, and the ripple effects it may have on the global artificial intelligence industry.

[Report]

This is the IMF's English report forecasting the global economy for this year.

When ChatGPT and DeepSeek were each asked to translate and summarize it in Korean, ChatGPT began responding after 2 seconds, while DeepSeek took 3 seconds to read the document before answering.

The two differed little in speed and accuracy; what has drawn attention to DeepSeek is its cost-effectiveness.

According to DeepSeek, the training cost of its general-purpose model V3 was about 8 billion won, only 10% of the reported investment in Meta's Llama 3.

[Brian Jacobsen/Chief Economist, Annex Wealth Management: "They were able to really do this on what some people would call a shoestring budget."]
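Taken at face value, the figures above allow a simple back-of-the-envelope comparison. This sketch only restates the report's numbers; the won-to-dollar rate used below is an illustrative assumption, not from the report:

```python
# Back-of-the-envelope check of the reported training-cost gap.
# Source figures: V3 training cost ~8 billion won, said to be
# about 10% of the investment in Meta's Llama 3.
# ASSUMPTION: the won/USD rate (1,440) is illustrative only.

V3_COST_WON = 8_000_000_000   # ~8 billion won, per DeepSeek's report
COST_RATIO = 0.10             # V3 cost as a share of Llama 3's
WON_PER_USD = 1_440           # assumed early-2025 exchange rate

implied_llama3_won = V3_COST_WON / COST_RATIO
v3_usd_millions = V3_COST_WON / WON_PER_USD / 1e6

print(f"Implied Llama 3 cost: {implied_llama3_won / 1e9:.0f} billion won")
print(f"V3 training cost: about ${v3_usd_millions:.1f} million")
```

Under those assumptions, the claim implies a Llama 3 budget of roughly 80 billion won, with V3 trained for only a few million dollars.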

Due to the U.S. export controls on high-performance GPUs, the chips used are also different.

While U.S. AI companies use NVIDIA's high-performance H100 GPUs, DeepSeek claims to have developed its model in just two months using more than 2,000 of the lower-performance H800 chips.

According to the performance tests in DeepSeek's technical report, its accuracy on language benchmarks, in the 80% range, slightly edged out ChatGPT and other models; on math benchmarks, its accuracy in the 90% range clearly exceeded the roughly 70% posted by U.S. AI models.

This has led to comparisons with the Cold War space race between the former Soviet Union and the U.S., with some calling it "AI's Sputnik moment."

However, there are also calls for cross-validation of DeepSeek's performance and costs.

[Ahn Seok-hoon/Head of Investment Content Team, Kiwoom Securities: "China has relatively lower labor costs and there are various conditions that have not been revealed..."]

The newly released DeepSeek R1 model is reported to outperform OpenAI's o1 on some tasks, and it is also open-source, so developers and others can use it freely.

However, limitations have been noted in how it answers questions related to China.

With assessments suggesting this could reshape the global AI landscape, South Korean semiconductor companies are also watching the repercussions closely.

This is KBS News, Kim Ji-sook.


