bnby
No personal profile
10 Following · 0 Followers · 0 Topics · 0 Badges

Posts
bnby
01-28
No-brainer to bet on a startup vs. the established players? That's deep shit later on.
Does DeepSeek Spell Doomsday For Nvidia And Other AI Stocks? Here's What To Know
bnby
2024-11-21
Well, don't forget that the price is ultimately determined by how much the buyer requires, or by whether it satisfies their requirements.
Sorry, the original content has been removed
bnby
2024-11-08
Hmm... that hedge fund is buying blindly on uncertain hope if it keeps increasing its shares after the recent problems SMCI is facing.
Sorry, the original content has been removed
bnby
2024-10-22
Then why sell or export to China to begin with, when China has its own domestic problems in its economy? It doesn't make sense to rely on the China market at this time, since China is not the only market.
Sorry, the original content has been removed
Does DeepSeek Spell Doomsday For Nvidia And Other AI Stocks? Here's What To Know
Dow Jones · 2025-01-27 22:39

The Chinese AI service has Wall Street worried that it will be cheaper than expected to develop models. But as chip stocks sink, some analysts see a silver lining.

What if companies don't need to spend nearly as much as expected to develop artificial-intelligence models?

That's the big question on the minds of investors Monday, given newfound attention on DeepSeek, a Chinese AI app that has climbed to the top of the U.S. App Store. The company reportedly was able to build a model that functions like OpenAI's ChatGPT without spending to the same degree.

Wall Street is nervous about what DeepSeek's success means for companies like Nvidia Corp. (NVDA), Broadcom Inc. (AVGO), Marvell Technology Inc. (MRVL) and others that have seen their stocks run up on expectations their businesses would benefit from lofty, AI-fueled capital-expenditure budgets in the years to come.

"If DeepSeek's innovations are adopted broadly, an argument can be made that model training costs could come down significantly even at U.S. hyperscalers, potentially raising questions about the need for 1-million XPU/GPU clusters as projected by some," Raymond James analyst Srini Pajjuri wrote in a note to clients over the weekend.

In a post titled "The Short Case for Nvidia Stock," former quant investor and current Web3 entrepreneur Jeffrey Emanuel said DeepSeek's success "suggests the entire industry has been massively over-provisioning compute resources."

He added that "markets eventually find a way around artificial bottlenecks that generate super-normal profits," meaning that Nvidia may face "a much rockier path to maintaining its current growth trajectory and margins than its valuation implies."

But it's also worth digging into the numbers that have Wall Street so worried. Specifically, there's consternation about a paper that suggested DeepSeek's creator needed to spend $5.6 million to build the model. By contrast, large technology companies in the U.S. are shelling out tens of billions a year on capital expenditures and earmarking much of that for AI infrastructure.

The $5 million number, though, is highly misleading, according to Bernstein analyst Stacy Rasgon. "Did DeepSeek really 'build OpenAI for $5M?' Of course not," he wrote in a note to clients over the weekend.

That number corresponds to DeepSeek-V3, a "mixture-of-experts" model that "through a number of optimizations and clever techniques can provide similar or better performance vs other large foundational models but requires a small fraction of the compute resources to train," according to Rasgon.

But the $5 million figure "does not include all the other costs associated with prior research and experiments on architectures, algorithms, or data," he continued. And this type of model is designed "to significantly reduce cost to train and run, given that only a portion of the parameter set is active at any one time."

Meanwhile, DeepSeek also has an R1 model that "seems to be causing most of the angst" given its comparisons to OpenAI's o1 model, according to Rasgon. "DeepSeek's R1 paper did not quantify the additional resources that were required to develop the R1 model (presumably they were substantial as well)," Rasgon wrote.

That said, he thinks it's "absolutely true that DeepSeek's pricing blows away anything from the competition, with the company pricing their models anywhere from 20-40x cheaper than equivalent models from OpenAI."

But he doesn't buy that this is a "doomsday" situation for semiconductor companies: "We are still going to need, and get, a lot of chips."

Cantor Fitzgerald's C.J. Muse also saw a silver lining. "Innovation is driving down cost of adoption and making AI ubiquitous," he wrote. "We see this progress as positive in the need for more and more compute over time (not less)."

Raymond James' Pajjuri made a similar point. "A more logical implication is that DeepSeek will drive even more urgency among U.S. hyperscalers to leverage their key advantage (access to GPUs) to distance themselves from cheaper alternatives," he wrote.

Further, while the DeepSeek fears are centered on training costs, he thinks investors should also think about inferencing. Training is the process of showing a model data that will teach it to draw conclusions, and inferencing is the process of putting that model to work based on new data.

Pajjuri argued that "as training costs decline, more AI use cases could emerge, driving significant growth in inferencing," including for models like DeepSeek's R1 and OpenAI's o1.

Emanuel, though, wrote that DeepSeek is said to be "nearly 50x more compute efficient" than popular U.S. models on the training side, and perhaps even more so when it comes to inference.
hyperscalers to leverage their key advantage (access to GPUs) to distance themselves from cheaper alternatives," he wrote.</p><p>Further, while the DeepSeek fears are centered on training costs, he thinks investors should also think about inferencing. Training is the process of showing a model data that will teach it to draw conclusions, and inferencing is the process of putting that model to work based on new data.</p><p>Pajjuri argued that "as training costs decline, more AI use cases could emerge, driving significant growth in inferencing," including for models like DeepSeek's R1 and OpenAI's o1.</p><p>Emanuel, though, wrote that DeepSeek is said to be "nearly 50x more compute efficient" than popular U.S. models on the training side, and perhaps even more so when it comes to inference.</p></body></html>","collect":0,"html":"<!DOCTYPE html>\n<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no\"/>\n<meta name=\"format-detection\" content=\"telephone=no,email=no,address=no\" />\n<title>Does DeepSeek Spell Doomsday For Nvidia And Other AI Stocks? 