Kailash Gopalakrishnan
New York City Metropolitan Area
Contact Info
5K followers
500+ connections
About
Activity
-
🐉 Compiler Engineers Wanted! 🖥️ ✨✨✨🚀 Join EnChargeAI! 🚀✨✨✨ 📍 Germany: Stuttgart/Munich/Remote 🌍 💻 Build AI Compiler Stacks 🔧 🔝 Cutting-Edge…
Liked by Kailash Gopalakrishnan
-
I often get asked about how the TPU chip came to be at Google? There were actually three attempts to build an AI accelerator chip at Google that…
Liked by Kailash Gopalakrishnan
-
5 Harsh Truths I know at 46, I Wish I knew at 26 👇 1.Failure Happens, and That’s Okay No matter what you do, you’re going to fail at something…
Liked by Kailash Gopalakrishnan
Experience & Education
-
EnCharge AI
***** ********** ******* (***)
-
***
*** ******
-
******** **********
****** ** ********** (**.*.) ********** ***********
-
-
******** **********
******'* ****** ********** *** *********** ***********
-
Other similar profiles
-
Yasaman Khazaeni
Needham, MA
-
Ramis Movassagh
Quantum Researcher @ Google | Ph.D. Mathematics MIT Twitter/X:@Ramis_Movassagh
Los Angeles, CA
-
Aly Megahed
Menlo Park, CA
-
Adrien Gaidon
Los Altos, CA
-
Alborz Geramifard
Menlo Park, CA
-
Pankaj Jha
AI/ML | Supercomputing | Computer Science | Aerospace Engineering | IIT | Penn State
San Francisco Bay Area
-
Jeffrey Welser
San Jose, CA
-
Huan Wang
San Francisco Bay Area
-
Nick Bronn
Yorktown Heights, NY
-
Wan-Yen Lo
Mountain View, CA
-
John Gunnels
Somers, NY
-
Pradeep Dubey
Intel Senior Fellow at Intel Labs
Cupertino, CA
-
Lin Li
Sunnyvale, CA
-
Azalia Mirhoseini
San Francisco Bay Area
-
Jiaxin Zhang
Mountain View, CA
-
Jen Wang
Greater Boston
-
Ashish Vaswani
San Francisco, CA
-
Andrew Parker, PhD
Los Angeles, CA
-
John Blair, Ph.D.
New York City Metropolitan Area
-
Aris Gkoulalas-Divanis
Explore more posts
-
Stefan Brenner
Level up Chip Design Innovation with MATLAB and Silicon Catalyst! #MathWorks News & Stories brings you an exciting collaboration between MathWorks and Silicon Catalyst. This article explores how MATLAB and Simulink empower chip-based semiconductor startups to develop cutting-edge solutions in medicine and wireless technology. Key Takeaways: >> Richard Curtin, Managing Partner at Silicon Catalyst, highlights the growing demand for "specialized design tools to overcome the complexities of modern chip design." >> Raphael Guimond, Antenna Designer at SPARK Microsystems, emphasizes how MATLAB's capabilities streamline the antenna design process, allowing engineers to "focus on innovation rather than repetitive tasks." ➡️ Dive deeper and discover how MATLAB can: >> Shorten design cycles through its powerful simulation and modeling tools. ⏱️ >> Optimize performance with efficient algorithms and code generation capabilities. >> Facilitate collaboration with a robust platform for data sharing and version control. Don't miss out! Read the full story here: https://lnkd.in/eAv7JhYV #engineering #semiconductor #wireless #innovation #MATLAB #medicine #Simulink #Probius #SPARK #Microsystems #SiliconCatalyst
4
-
George Z. Lin
UIUC/Georgia State create LlamaF, an architecture with FPGA-based acceleration designed to improve the inference performance of large language models (LLMs) such as TinyLlama 1.1B on embedded systems. As LLMs transform natural language processing across multiple sectors, their implementation on resource-constrained devices presents challenges due to significant memory and computational requirements. LlamaF addresses these challenges through post-training quantization and an architecture specifically optimized for embedded FPGAs. By employing a group-wise quantization strategy, it reduces the model size from 4.4GB to 1.1GB, which lowers off-chip memory bandwidth needs while maintaining predictive performance. The architecture includes a fully pipelined accelerator for group-wise quantized matrix-vector multiplication (GQMV), allowing for asynchronous computation during weight transfers and resulting in notable performance enhancements. Testing on the Xilinx ZCU102 platform highlights LlamaF's capabilities, achieving a speedup in inference speed ranging from 14.3 to 15.8 times and a 6.1 times improvement in power efficiency compared to executing the model solely on the ZCU102 processing system. This performance enhancement is largely attributed to the efficient execution of matrix operations, which are a significant portion of the runtime during inference. The design of LlamaF integrates advanced software techniques, including task-level scheduling that overlaps off-chip parameter transfers with kernel execution, thereby improving throughput and reducing latency. The hardware design is organized into three primary stages: pre-processing, dot-product, and accumulation, which together facilitate efficient data flow and computation for LLM inference. LlamaF signifies a notable advancement in the acceleration of LLMs on embedded FPGAs, enabling more effective deployment of these sophisticated models in environments with limited resources. Arxiv: https://lnkd.in/exhriYyj
5
1 Comment
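For readers who want a concrete picture of the group-wise quantization and GQMV kernel described in the post above, here is a minimal NumPy sketch of the general idea (an illustration only, not the LlamaF implementation; the int8 target, group size of 64, and function names are assumptions):

```python
import numpy as np

def quantize_groupwise(row: np.ndarray, group_size: int = 64):
    """Quantize one weight row to int8 with one float scale per group."""
    w = row.reshape(-1, group_size)                         # (groups, group_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0   # per-group scale
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def gqmv(q_rows, scale_rows, x):
    """Group-wise quantized matrix-vector multiply: rescale per group, then dot."""
    out = []
    for q, s in zip(q_rows, scale_rows):
        w = (q.astype(np.float32) * s).reshape(-1)          # dequantize on the fly
        out.append(w @ x)
    return np.array(out)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 256)).astype(np.float32)        # toy 8x256 weight matrix
x = rng.standard_normal(256).astype(np.float32)

q_rows, scale_rows = zip(*(quantize_groupwise(r) for r in W))
print(np.max(np.abs(gqmv(q_rows, scale_rows, x) - W @ x)))  # small rounding error
```

Storing int8 values plus one small float scale per group is roughly what takes an fp32 model from 4.4GB down to about 1.1GB, at the cost of the per-group rounding error printed above.
-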
PS Lee
How Digital Twin Solutions for Data Centers Can Make AI Greener Summary: As AI technologies continue to revolutionize industries, the energy demands placed on data centers have surged, creating an urgent need for more sustainable solutions. Digital twin technology can help data centers manage energy consumption more efficiently and transition towards renewable energy sources. The Role of Digital Twin Technology in Data Centers Digital twin technology serves as a virtual replica of physical systems, allowing data centers to simulate, analyze, and optimize their operations in real-time. This technology offers several key benefits: Energy Efficiency Testing: Digital twins enable data center managers to simulate and test new energy strategies, such as the integration of renewable energy, without disrupting ongoing operations. This allows for informed decision-making that prioritizes sustainability. Enhanced Resource Allocation: As AI increases power densities, digital twins help analyze and optimize cooling strategies and resource distribution. By dynamically adjusting to renewable energy availability, data centers can maximize the use of clean energy sources. Data-Driven Decisions: Insights from digital twins allow for improved tracking of energy usage, capacity planning, and risk management, leading to more sustainable operations and better Power Usage Effectiveness (PUE) scores. Supporting Sustainability Metrics: Digital twins capture crucial sustainability data, aiding data centers in reporting environmental impacts and achieving compliance with emerging regulations. This supports a smoother transition to greener practices. Overcoming Challenges with Digital Twin Technology The transition to renewable energy in data centers is fraught with challenges, including the complexities of sourcing sustainable energy and overcoming public opposition to new facilities. Digital twin technology offers a strategic approach to these challenges by allowing data centers to simulate various scenarios, anticipate potential issues, and optimize their energy mix. Conclusion The rapid growth of AI and the increasing focus on sustainability require data centers to rethink their operations. Digital twin technology is a powerful tool that enables data centers to meet the demands of AI while maintaining a commitment to sustainability. By adopting such innovative technologies, data center operators can turn challenges into opportunities, leading the charge towards a greener, more responsible digital future. #Sustainability #DataCenters #ArtificialIntelligence #DigitalTwin #RenewableEnergy #EnergyEfficiency #CadenceDesignSystems #PowerUsageEffectiveness #GreenTech
6
1 Comment
-
Joshua Schoen
The Cerebras IPO, which aims to break up the NVIDIA monopoly, might be successful in the short term as an IPO pop, but it will likely fail long term. NVIDIA's dominance doesn't come just from its chip but from the CUDA software that is the de facto standard and can take advantage of chip architecture improvements. No wonder 87% of Cerebras' revenue is from the Middle East. The people buying it aren't the engineers using it. https://lnkd.in/ePRqdFpt
4
3 Comments
-
Rafael Brown
Reuters: "Intel foundry business to make custom chip for Amazon, chipmaker's shares jump" (Max A. Cherney) (September 16, 2024) --Intel shares jump 8% after hours --Intel CEO Gelsinger details Amazon deal in memo --Chip maker's memo also outlines cost cuts "Intel's foundry, or contract manufacturing business, has signed up Amazon's cloud services unit as a customer for making custom artificial intelligence chips, the companies said on Monday, a deal that gives the chip maker a vote of confidence. Intel's shares rose roughly 8% in extended trading after CEO Pat Gelsinger released a memo to employees announcing Intel had secured the Amazon unit as a multibillion-dollar customer, paying Santa Clara, California-based Intel for design services and manufacturing. The memo also outlined Intel's planned cost cuts. Amazon's AWS cloud computing division already designs several chips for use in its data centers and has hired Intel to package at least one version. Intel will produce an "artificial intelligence fabric chip" for AWS and use the chip maker's 18A process, the most advanced version available for outside customers, the companies said. Last month, it reported disastrous second quarter earnings. "The board and I agreed that we have a lot of work ahead to drive greater efficiency, improve our profitability and enhance our market competitiveness," Gelsinger wrote in the memo. Among steps the board has decided to take, Intel is selling a stake in its programmable chip business Altera. It also said it would pause construction at its chip factory project in Germany for two years, a move Reuters had previously reported. The company plans to pause its project in Poland as well. Intel said there are no changes to its plans to expand manufacturing in the U.S. Intel plans to keep its manufacturing business, or foundry, inside the company, confirming earlier Reuters reporting. The foundry business is crucial to Gelsinger's turnaround plan for the company, which he outlined in 2021. Until Amazon, Intel had struggled to find marquee customers that it could discuss publicly. But in the memo, Gelsinger said the foundry business would have greater independence. Intel plans to establish it as an independent subsidiary, with an operating board that will oversee the foundry operation. The foundry unit separated its financial performance from the design business earlier this year. The company is also taking several steps to prioritize the core technology behind its CPUs, and is reorganizing several divisions, including its automotive and edge businesses. On Monday Intel also said it was awarded up to $3 billion in direct funding from the U.S. CHIPS and Science Act, as part of the Secure Enclave program. The company said it plans to send notices in the middle of October to the roughly 15,000 employees it said in August it would lay off." Reuters: https://lnkd.in/gtiqE9Ea #semiconductors #intel #amazon
3
1 Comment
-
Karan Sharma
🌟 Transforming Technology: The Role of FLOPS and Gold in Chip Innovation In today's rapidly evolving tech landscape, the measurement of computational power plays a critical role in determining the capabilities of our devices. One such metric, FLOPS (Floating Point Operations Per Second), stands at the forefront of assessing the processing power of chips, particularly those integral to deep learning, AI advancements, and high-performance computing. FLOPS and Chip Innovation FLOPS quantifies how many floating-point calculations a chip can execute per second. This metric is essential not only for evaluating the speed and efficiency of processors but also for driving innovations in fields like quantum computing, 3D modeling, and high-speed video processing. As demands for more complex computations grow, so too does the importance of FLOPS in defining technological progress. Nvidia's Breakthroughs Leading the charge in GPU technology, Nvidia has been at the forefront of achieving groundbreaking FLOPS capabilities. This relentless pursuit of faster speeds and higher efficiencies has propelled Nvidia to surpass traditional giants, such as Apple, in market prominence. Nvidia's GPUs are increasingly vital for powering AI applications and other data-intensive tasks, making their advancements in FLOPS a cornerstone of modern computing. Gold's Crucial Role Behind the scenes of these technological advancements lies another essential element: gold. Gold's unique properties—excellent electrical conductivity and malleability—make it indispensable in the production of high-performance chips. In the intricate process of manufacturing high-FLOPS chips, materials like tantalum, tungsten, tin, and gold play critical roles, with gold especially valued for its ability to enable fast and efficient circuitry. Gold's malleability enables the production of incredibly thin wires, approximately 80 kilometers long from just 31.103 grams of gold, with a thickness of 0.000018 centimeters. Market Dynamics and Future Outlook As the global chip market expands exponentially, the demand for gold in tech manufacturing is also on the rise. This surge in demand not only highlights gold's fundamental role in advancing technology but also impacts its market dynamics. The integration of gold into chip production underscores its criticality in shaping the future of computing, ensuring that advancements in processing power continue to drive innovation across industries. In conclusion, FLOPS and the utilization of materials like gold represent more than just technological metrics—they embody the relentless pursuit of innovation and efficiency in computing. As we look towards the future, the intersection of FLOPS, GPU advancements, and materials science promises to redefine what's possible in the realm of technology. 🎨 Share your thoughts and passion for innovation in the comments! 🚀 #Technology #Innovation #AI #GPU #FLOPS #Nvidia #Gold #QuantumComputing #LinkedIn
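To put a number on the FLOPS metric the post describes, here is a small Python sketch that times a dense matrix multiplication and reports the achieved floating-point rate (a rough illustration; the matrix sizes are arbitrary, and 2*M*N*K is the conventional operation count for a matmul):

```python
import time
import numpy as np

# A dense (M x K) @ (K x N) matmul performs about 2*M*N*K floating-point ops:
# one multiply and one add per term of each inner product.
M = N = K = 2048
A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

flops = 2 * M * N * K
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s achieved")
```

Comparing the measured figure with a device's advertised peak FLOP/s gives a rough sense of how much of the hardware's theoretical throughput a given workload actually reaches.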
-
Rupert Baines
Really interesting to see this: how Europe is moving up. It is interesting to contrast this qualitative survey with the #siliconcatalyst analysis that the UK is sixth in Europe in funding value for silicon investment. And I will put in a plug for our #UKTIN Forward Capabilities paper on recommendations for the UK semiconductor sector: https://lnkd.in/eNNAsCAz
8
-
Christian Said
Exciting times ahead for the AI industry! Cerebras Systems, a leading AI chip developer, has announced plans for an IPO. This bold move positions them to take on industry giants like Nvidia, shaking up the competitive landscape. Cerebras has already made waves with their innovative chip architecture, designed specifically to accelerate AI workloads. Their Wafer-Scale Engine, the largest chip ever built, offers unprecedented performance for AI models, and their IPO could further fuel their ambitious roadmap. With AI applications rapidly expanding across industries, the demand for high-performance hardware is skyrocketing. Cerebras aims to meet this demand by providing cutting-edge solutions that push the boundaries of what's possible in AI computation. As they gear up for this significant milestone, Cerebras' vision and technology could reshape the AI hardware market. The IPO will not only bolster their financial strength but also enhance their ability to innovate and scale. Kudos to the Cerebras team for their pioneering efforts! It will be thrilling to see how their journey unfolds and how they stack up against Nvidia and other established players. #AI #TechInnovation #Cerebras #IPO #AIChips #Nvidia #AIHardware #FutureOfAI
3
2 Comments
-
Scott Sutherland
Goldman Sachs and Citi predict more gains for Nvidia supplier Hynix after its 90% rally, citing sky-high potential for AI. "The current share valuation isn't fully reflecting the potential of high-bandwidth memory chips." "The market is treating HBM's valuation the same as traditional memory chips, but HBM is almost twice as profitable." Demand for HBM may not be fully reflected yet, as the world is unfamiliar with the potential of AI, citing the earlier introduction of smartphones: "Because this is a market that didn't exist in the past, we have never seen how far it can go." Nineteen analysts have raised forecasts in the past month alone; current valuations don't reflect the potential of HBM, per Infinity.
-
Jihoon Jeong
The Silicon Kingdom's Crisis: A Call for a Semiconductor Revolution As we witness the rise of TSMC's 'Silicon Kingdom', controlling 56% of the global foundry market and over 90% of cutting-edge processes below 5nm, the semiconductor industry faces a critical juncture. This dominance, coupled with Taiwan's geopolitical instability, poses significant risks to global tech innovation and security. While semiconductor demand soars, driven by AI and high-performance computing, our over-reliance on a single company in an unstable region threatens innovation and economic growth. The 2021 chip shortage was a stark reminder of this vulnerability. Samsung Electronics and Intel, once formidable competitors, now struggle to keep pace. Both face technical challenges and disappointed shareholders, with stock prices reflecting these difficulties. However, a game-changing development has emerged: Intel plans to spin out its foundry business. This move aligns perfectly with my proposed paradigm shift - a joint venture between Samsung Electronics and Intel Foundry's businesses. By combining Samsung's advanced process technology with Intel's design capabilities and newly independent foundry, they could create a powerful TSMC competitor. This venture would diversify production geographically, mitigate geopolitical risks, and provide a neutral alternative for customers. The venture could also become a collaboration platform across the tech ecosystem, potentially partnering with NVIDIA, Google, and Apple to strengthen its position further. Challenges lie ahead – from cultural integration to regulatory approvals. But these obstacles also present opportunities for innovation and synergy. The ball is now in the court of Samsung's Chairman Lee Jae-yong and Intel's CEO Pat Gelsinger. We urge these leaders to seize this moment and make the bold decisions necessary to reshape the semiconductor landscape. This isn't just about corporate success; it's about securing the future of global technology and innovation. The proposed Samsung-Intel joint venture could be the key to unlocking a new era in semiconductor production and ensuring a stable foundation for our AI-driven future. Read my in-depth analysis here: https://lnkd.in/gtFQHR9N What are your thoughts on this potential collaboration? How can we ensure a more stable and innovative semiconductor industry? #Semiconductors #TechInnovation #GlobalSupplyChain #AI #FutureOfTech
16
-
Bharadwaj Pudipeddi
Bummed I couldn't attend Hotchips just when AI looms bright and red all around and over us, but I managed to glance through the proceedings, and Sophia Song gave me an excellent summary. So there are lots of AI hardware startups announcing products! Especially, for inferencing and particularly for the Llama ecosystem. I often read reactions from the media that no one can displace the King from the Throne especially now with low precisions and B200s ripping over 10K TPS on llama-70B. And some think software is a Sisyphean task. Clearly, a major challenge for training, but not so much for inference. Here is what I think. I loved all the AI h/w presentations. I think the world is looking for highly capable inferencing for generative AI on the edge (as in your PC and maybe even in your phone) - which I am sure many will agree is a burning problem sooner rather than later. I think the current designed-for-cnn NPUs only go so far before the customers get frustrated due to their lack of muscle to run language models. I also think h/w that specializes in high per-session TPS will find a place in both the data center and edge. Now, this is potentially contentious. There isn't a gen inferencing problem in the data center side, some might say, that cannot be batched and dispatched into a 5ms per token train. But I think there is always room for high interactivity. The current GPUs just cannot do that, so we only see as far as we can climb. The agentic workflows aren't yet widely in deployment (kind of a mess actually, still evolving), but there would be lots of value in there. Groq has been punching the scoreboards, and now Sambanova and Cerebras join the leaderboard. All for the Llama sized models, but they are setting the tone here on speed and innovation. I definitely love the work on lower precisions. Especially, as some clever precisions don't even require heavy lifting in quantization. This is the most important factor for energy efficiency particularly for larger models. And also, I like the idea of having inference h/w that doesn't have to run the model on HBM. I am not a sworn HBM-enemy. It is, after all, one of the most energy efficient external memories out there. But we need alternatives that burn even less power, fail less often, and are overall faster by 10x. The litmus test for all non-HBM solutions seems to be the size of the model. The pipelining and other forms of running model inferencing are completely justified as long as speed and economics work out. Heck, we don't complain when we run a training job on 1000s of nodes, each node with GPUs on high-speed links, and each GPU on high-speed backend networks to other nodes, running in synchronized execution where one failure brings everything to a halt. If that is ok - well, connecting a few hundred chips to run Llama-405B is all right provided it actually makes *sense*! (performance, tco, failure recovery time etc.) ps: two back2back posts - now back to hibernation!
55
10 Comments
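The post's distinction between batched throughput and per-session speed can be made concrete with a back-of-envelope bound (a sketch using hypothetical numbers, not figures from the post): in single-stream decoding, every generated token must stream the full set of weights from memory, so per-session tokens per second is roughly capped by memory bandwidth divided by model size, while batching amortizes those weight reads across many sessions.

```python
def max_tokens_per_second(params_billion: float, bytes_per_param: float,
                          mem_bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode speed when weight reads dominate.

    Each generated token reads every weight once (batch size 1, no reuse),
    so tokens/s <= bandwidth / model bytes; real systems land below this.
    """
    model_bytes = params_billion * 1e9 * bytes_per_param
    return mem_bandwidth_gb_s * 1e9 / model_bytes

# Hypothetical numbers: a 70B-parameter model at 8 bits per weight on a
# device with roughly 3 TB/s of memory bandwidth.
print(f"{max_tokens_per_second(70, 1.0, 3000):.0f} tokens/s per-session bound")
# Halving the bytes per weight (e.g. 4-bit formats) roughly doubles the bound,
# which is one reason low-precision inference matters for interactivity.
```

This is also why aggregate TPS across a large batch can be orders of magnitude higher than any single session's speed: the same weight reads are shared by every request in the batch.
-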
William Kilmer
Intel is a 56-year-old company that has been on a two-decade decline. It's unfortunate that they've lost Pat Gelsinger, who was only part way through his turnaround plan. Years ago, Intel went through an existential crisis in the PC processor market, very frankly faced the reality that they had to turn the battleship, and won. This battle is much, much tougher. The company has missed multiple market opportunities: mobile processors, graphics processors and subsequently AI processing, as well as falling behind in foundry technology. Incumbent disadvantages are real. In Intel's case, it's often been summarized in their approach to innovation: > The market’s not big enough to move the needle > It couldn’t ever be as profitable as our main business > Our competencies are too important to give away, or… > We don’t have the right competencies to do it (stick to our knitting was a common phrase) It's hard for any one person or team to erase decades of poor decisions and missed markets. Interestingly, the US government has made a big, nearly $8B bet on Intel. Let's hope it pays off. #intel #semiconductors #chips #AI #mobile #processors #strategy #turnaround #williamkilmer https://lnkd.in/e6Yih6ty
18
1 Comment
-
Daniel Tu
OpenAI scales back foundry ambition - At the same time, it is developing AI inference chips, with Broadcom helping with chip design and securing TSMC for manufacturing. OpenAI also plans to diversify chip supply by adding AI chips from AMD. "OpenAI, the fast-growing company behind ChatGPT, has examined a range of options to diversify chip supply and reduce costs. OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing. The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorized to discuss private matters. The company's strategy, detailed here for the first time, highlights how the Silicon Valley startup is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs like larger rivals Amazon, Meta, Google and Microsoft. As one of the largest buyers of chips, OpenAI's decision to source from a diverse array of chipmakers while developing its customized chip could have broader tech sector implications." #openai #broadcom #nvidia #amd #chips #design #manufacturing https://lnkd.in/gfp-4t4m
6
-
Jessie Chen
#Semiconductor companies related to #AI infrastructure are hot, in both the public market and the private early-stage market. At-scale AI applications require new infrastructure, which leads to opportunities at the silicon and hardware level. Thus far this year, VC-backed chip startups have raised nearly $5.3 billion in just 175 deals, per Crunchbase data. https://lnkd.in/gC2FZwC6 Previous articles from us already cover the reasons behind these trends: The WHY behind Impressive IPO of AI Infra Startup 👉 AI Compute's Bottleneck Lies in #Connectivity 👉 Advanced Packaging and Si #Photonics https://lnkd.in/eN35P5YY Next AI Infrastructures for New AI Decade 👉 Power Hunger Issue https://lnkd.in/gdcbB6S8 AI Infrastructure Hardware and Software Accrue the Most Value in AI Stack 👉 Big funding from VCs or tech titans going into the software infrastructure of the generative AI stack in the past year only increased the demand for AI servers and GPUs – hardware is more a bottleneck than software now.... https://lnkd.in/gYK7aaBP
2
-
Kelvin Mu
Sorry for the delay in last week's AI news - just got back from Taiwan and all of its Jensenity! 🔙 𝐓𝐡𝐞 𝐁𝐚𝐜𝐤𝐰𝐚𝐫𝐝 𝐏𝐚𝐬𝐬: 𝐑𝐞𝐯𝐞𝐫𝐬𝐢𝐧𝐠 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐭𝐡𝐞 𝐀𝐈 𝐖𝐞𝐞𝐤 Week 42 | June 3-9 JENSENITY at Computex Industry: 🚨AMD unveiled its latest AI processors at the Computex trade show. It also detailed plans to compete with Nvidia (https://lnkd.in/ggvTrCre) 🚨 NVIDIA is said to be prepping AI PC chip with next-gen Arm cores and its Blackwell GPU architecture (https://lnkd.in/gS6Qv2rm?) Financing and M&A: 🚨 Text-to-video platform Pika raises $80M led by Spark Capital with participation from Greycroft, Lightspeed, and others (https://pika.art/blog) 🚨 Twelve Labs, a leader in the multi-modal foundational model space, raised $50m led by New Enterprise Associates (NEA) and nVentures. The platform is being used by companies in the media & entertainment, advertising and automotive sectors (https://lnkd.in/gvwRkxAr) 🚨 Tektonic AI raises $10m led by Point72 and Madrona to build AI agents to help automate business operations. CEO Nic Surpatanu previously held leadership roles at Tanium, UiPath and Microsoft (https://lnkd.in/gGfxiNW6?). 🚨 Hoop, a next-generation task management platform, raises $5M seed round from Index Ventures 🚨 Greptile raises $4m seed round for code understanding (https://lnkd.in/gDF8MD-C?) 🚨 Cartwheel, a generative 3d animation tool raises $5.6m seed led by Accel and KV Research & Development: 🚨 Qwen2 from Alibaba launched. According to internal released benchmarks, the language model demonstrates superior performance over Llama3-70B and Mixtral 8x22B (https://lnkd.in/gEJi2shn?) 🚨LlamaCare: A Large Medical Language Model for Enhancing Healthcare Knowledge Sharing (https://lnkd.in/g44Q8Pyv?) Other Interesting Resources: 🚨 A good article on robotics in the new wave of LLMs and foundational models. Highly recommend a quick read through for anyone interested in robotics (https://lnkd.in/gn5gbrTR) 🚨Interesting article from John Luttig from Founders Fund on open-source vs. closed-source 🚨Interesting article from Anthropic on Claude's character based on Anthropic's constitutional AI (https://lnkd.in/gK43G7kF?) 🚨Jensen keynote address at Computex: https://lnkd.in/g_eBy-he
76
-
William (Bill) Kemp
"A study published in Opto-Electronic Science discusses high-intensity spatial-mode steerable frequency up-converter toward on-chip integration. Integrated photonic devices consisting of micro-lasers, amplifiers, optical waveguides, frequency converters, and modulators on a single chip, enabling control over photon's spatial modes, frequencies, angular momenta, and phases, are essential for preparing high-dimensional quantum entangled states, high-capacity photon information processing, all-optical communication, and miniaturization of photonic computing. However, current nonlinear waveguide devices, integrating spatial modes and photon frequency conversions, heavily rely on external optical path control and spatial light modulators, failing to meet the crucial requirement of on-chip integration for photonic devices." #photonics
1
Others named Kailash Gopalakrishnan
-
Kailash Gopalakrishnan
Projektingenieur Konstruktion Lichttechnik (Development Engineer - Lighting)
Germany
-
Kailash Gopalakrishnan
Senior Consultant at IBM
Chennai
-
Kailash Gopalakrishnan
Attended SAILORS MARITIME ACADEMY
Chennai
3 others named Kailash Gopalakrishnan are on LinkedIn