
Chip design ushers in huge changes

Date: 2022-11-01

Dennard scaling has disappeared, Amdahl's Law has reached its limits, and Moore's Law has become increasingly difficult and costly to follow, especially as power and performance advantages have diminished. While none of this has reduced the opportunity for faster, lower-power chips, it has significantly changed their design and manufacturing dynamics.


Beyond different process nodes and half-nodes, chip development companies - traditional chip companies, automotive OEMs, fabless companies and IDMs, and large system companies - are now grappling with more options and more unique challenges as they seek the best solutions for their specific applications. They are all putting more demands on the EDA ecosystem, which is racing to keep up with these changes, including various types of advanced packaging, chiplets, and the need for integrated and customized hardware and software.


"While heterogeneous integration predates Denard scaling or the flattening of Moore's Law by several years, chip designers and system architects are now embracing this paradigm to keep their pursuit of PPA goals - without the rule of thumb and its derivatives," said Saugat Sen, vice president and head of R&D at Cadence. "While there are many architectural and design challenges in this era, addressing thermal issues has become a top priority. For some time, the efficiency of design and implementation has been intricately tied to the closed-loop integration of multi-physics field analysis. more-than-Moore has created a compelling case for enabling analysis of the microcosm to go beyond the architecture of system design, from chip to package and beyond, and even more so in systems companies at the forefront of design innovation."


Defining the power and energy requirements of system processors is becoming increasingly difficult.


The power consumption and total energy use of computing is a huge issue, and it is getting bigger because of geopolitical developments, rising energy costs and environmental concerns, said Christian Jacobi, IBM researcher and chief technology officer for IBM Systems Architecture and Design. "At the same time, since Moore's Law and Dennard scaling are essentially over, as architects we want to keep adding features, functionality, performance and more cores to each chip without increasing our energy footprint. As a result, we must manage energy in the chip more intelligently, from optimizing power and performance at any point in time, to taking advantage of quieter periods when not all computing resources are fully utilized, to reducing power consumption across the chip assembly."


IBM's solution for its Z Systems is to integrate AI into the processor chip. "We can access the data where it already resides," Jacobi said. "If the data is in the processor chip and in the cache of the processor chip - because that's where every other business process does its calculations on that data, such as a bank transaction or a credit card transaction - I don't need to take that data and move it somewhere else, to a different device or to an adapter that connects to I/O over a network or through a PCI interface. Instead, I have localized AI engines where I can access that data. I don't have to move it half a meter or a meter or a kilometer to a different device. That significantly reduces the energy footprint of performing AI. The actual computation itself, the additions and multiplications, still consumes power."
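Jacobi's data-movement argument can be seen in a back-of-the-envelope calculation. The sketch below compares a small inference run against data that is already in on-chip cache with the same run after shipping that data to an external device first. All of the per-operation and per-byte energy figures are assumed, order-of-magnitude placeholders for illustration, not IBM numbers.

```cpp
// Back-of-the-envelope comparison of data-movement energy vs. local compute.
// The per-operation energy figures below are rough, assumed orders of
// magnitude, not measured values for any specific product.
#include <cstdio>

int main() {
    // Assumed energy costs (picojoules). Real values vary widely by process,
    // interconnect and memory technology.
    const double pj_per_mac          = 1.0;     // one multiply-accumulate on-chip
    const double pj_per_byte_cache   = 10.0;    // read a byte from on-chip cache
    const double pj_per_byte_offchip = 1000.0;  // move a byte over PCIe/network to another device

    // A small inference workload: 1M multiply-accumulates over 256 KB of data.
    const double macs  = 1e6;
    const double bytes = 256.0 * 1024.0;

    double local  = macs * pj_per_mac + bytes * pj_per_byte_cache;
    double remote = macs * pj_per_mac + bytes * pj_per_byte_offchip;

    printf("on-chip AI engine : %.1f uJ\n", local  / 1e6);
    printf("off-chip adapter  : %.1f uJ\n", remote / 1e6);
    printf("movement overhead : %.1fx\n", remote / local);
    return 0;
}
```

Even with generous assumptions, moving the data dominates the energy budget, which is why co-locating the AI engines with the cached data pays off.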


This means complexity for the rest of the ecosystem, because not every chip or package will do things the same way. "There are still many changes that need to be made to support the ecosystem and product complexity," said Chris Mueth, senior manager of new markets and digital twin program manager at Keysight Technologies. "Product complexity is the main driver because everyone wants more miniaturization. Everyone wants more functionality in the products they have. So more integration is needed. While it looks like we're approaching asymptotic conditions, I don't think we're dead yet."


In fact, there are at least a few more process nodes on the Moore's Law roadmap, and all three leading foundries - Samsung, TSMC and Intel - have roadmaps that extend into the 1.x nanometer range. "That's very important because we have to make transistors smaller for two reasons: one is speed and the other is heat," Mueth said. "When you're clocking countless transistors on a chip, you're generating a lot of heat. The solution to this problem is to shrink everything, but at some point we'll reach an asymptotic peak."


Steven Woo, fellow and distinguished inventor at Rambus, agrees. "Now that Dennard scaling has essentially stopped, you really can't rely on reducing power anymore," he says. "So if you want to continue to get performance, and you want to continue to increase computational density, you're going to have to find ways to get the heat out of the box, and there are only a few ways to do that."


In electric vehicles, for example, this means that ECUs must be designed within a very limited power budget for the entire electrical system. Traditionally, hardware designers have addressed such constraints by adding multiple power modes, in which parts of the design can be turned off or slowed down based on monitoring of what the system is doing.


"What we're seeing more of in AI that might work across all domains is that software engineers really understand the tradeoffs between system performance and system accuracy," Woo said. "If they're constrained in some way in terms of bandwidth, energy or whatever, they make it a software problem. If they need more bandwidth, they can reduce the precision of the numbers and train specifically for reduced precision or sparsity. In AI, there's a whole integrated view between the software side of things and the hardware side. In the last 20 years, in the same way, programmers have been forced to become more aware of the architecture, given cache sizes and processor architectures. In the future, programmers will have to become more aware of issues like power limitations in their systems and try to use tools and APIs that allow them to trade performance for power consumption. This evolution will happen. It will take time. It took about a generation of programmers to really understand that when you write programs, you can no longer be so abstract about what the architecture looks like. Over the next 20 years, it's probably going to move in that direction."


This is especially true in the automotive space, where chips need to operate reliably over time and need to be updated as algorithms and communication protocols change.


"One of the big trends we're seeing is the monitoring of health conditions," said Roland Jancke, head of engineering design methods at Fraunhofer IIS' Adaptive Systems Division. "If you are no longer able to control the behavior or the chip at design time, then you need to monitor and switch to spare parts or some other backup during operation. With automotive electronics, you need to consider everything that can happen during operation. But if you say, 'Let's develop this based on the likelihood of part failure,' and then you put in some spare parts, then you're going to go over your budget."


The key, Jancke says, is being able to fail over to another system in an emergency, but that can be a very complex process. Like many of the changes underway in chip design, it requires breaking down some of the traditional silos so that system, semiconductor, packaging and software engineers can work together on heterogeneous architectures.


"Heterogeneous architecture is not a new concept," said Vik Karvat, senior vice president of product, marketing and planning at Movellus. "It has been developed and expanded across many verticals, including mobile, automotive and artificial intelligence. The difference now is that heterogeneous computing elements are larger and more powerful, as exemplified by NVIDIA's Hopper + Grace solution and Intel's Sapphire Rapids and Falcon Shore platforms. However, as these elements become larger and data center computing needs and density targets continue their geometric growth curve, heterogeneous monoliths will transition to a heterogeneous small-chip approach to continue scaling. This will require a concerted effort from systems, semis and packaging companies."


Where we are


In 1974, Robert Dennard co-authored a paper on MOSFET scaling, which observed that as transistors become smaller, their power density stays constant. This held until around 2005, when leakage power started to become an issue. "That was really the engine behind Moore's Law," said Roddy Urquhart, Codasip's senior director of technical marketing. "Dennard scaling and Moore's Law allowed you to take advantage of a new generation of silicon geometries that essentially doubled the number of transistors and increased clock rates by about 40 percent per generation. Interestingly, around that time, Intel was planning to launch the Pentium 5 processor and wanted it to reach 5 to 7 GHz, but they ended up having to cancel it because of thermal issues."
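Dennard's observation can be summarized as dynamic power P ≈ C·V²·f. A small numeric sketch, assuming the textbook 0.7x linear shrink per generation, shows why power density stayed flat while transistor counts doubled and clocks rose roughly 40 percent:

```cpp
// Numeric sketch of classic Dennard scaling: shrink linear dimensions, voltage
// and capacitance by k ~ 0.7 per generation, raise frequency by 1/k, and the
// dynamic power density (C * V^2 * f per unit area) stays constant.
#include <cstdio>

int main() {
    const double k = 0.7;          // linear scaling factor per generation (~1/sqrt(2))

    double cap_ratio  = k;         // capacitance scales with dimensions
    double volt_ratio = k;         // supply voltage scales with dimensions
    double freq_ratio = 1.0 / k;   // gates get faster, so clock rate rises ~40%
    double area_ratio = k * k;     // transistor area shrinks by ~2x

    // Dynamic power per transistor: C * V^2 * f
    double power_ratio   = cap_ratio * volt_ratio * volt_ratio * freq_ratio;
    double density_ratio = power_ratio / area_ratio;

    printf("transistors per area : %.2fx\n", 1.0 / area_ratio);  // ~2x
    printf("clock frequency      : %.2fx\n", freq_ratio);        // ~1.4x
    printf("power per transistor : %.2fx\n", power_ratio);       // ~0.5x
    printf("power density        : %.2fx\n", density_ratio);     // ~1.0x (constant)
    return 0;
}
```

Once supply voltage stopped scaling (set volt_ratio to 1.0 above), the same arithmetic shows power density climbing by roughly 40 percent per generation, which is the thermal wall behind the canceled 5 to 7 GHz parts and the frequency ceiling discussed next.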


Another limiting factor in processor design was the upper limit of the CMOS clock frequency, as shown below.


[Figure: the upper limit of CMOS clock frequency over time]


Because of these limitations, there was a noticeable shift to multi-core designs, starting with mobile devices. "The first step was having processors dedicated to specific functions, such as GPUs for phone graphics, or dedicated microcontrollers handling things like Wi-Fi or Bluetooth," Urquhart said. "Second, there were multi-core systems, initially dual-core. Today, there are four-core systems that can run things like Android. When running something like an operating system, some operations can be parallelized while others are inherently sequential, and that's where Amdahl's Law applies."
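Amdahl's Law puts a hard ceiling on what those extra cores can deliver. A short sketch, assuming an illustrative workload that is 90 percent parallelizable:

```cpp
// Amdahl's Law: the serial fraction of a workload caps the achievable speedup,
// no matter how many cores are thrown at it.
#include <cstdio>

// Speedup for a workload whose parallel fraction p runs on n cores.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.90;  // assume 90% of the work parallelizes (illustrative)
    int core_counts[] = {1, 2, 4, 8, 64, 1024};
    for (int cores : core_counts)
        printf("%5d cores -> %.2fx speedup\n", cores, amdahl(p, cores));
    // The limit as core count grows is 1 / (1 - p) = 10x for this workload.
    return 0;
}
```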


Emerging challenges such as AI/ML can take advantage of data parallelism, with specialized architectures created to solve very specific problems. There are other opportunities in embedded devices. For example, Urquhart describes some of the research Codasip has been doing to benchmark conventional three-stage pipeline, 32-bit RISC-V cores running Google's TensorFlow Lite for Microcontrollers. The company is then creating custom RISC-V instructions to accelerate neural networks using very limited computational resources.
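To make the custom-instruction idea concrete, the sketch below models a hypothetical fused multiply-accumulate with int8 saturation in plain software. On a small core, one such instruction could replace the separate multiply, add and clamping operations executed per weight today, which is where the cycle savings for tiny neural networks come from. The operation is invented here for illustration; it is not a Codasip feature or a standard RISC-V instruction.

```cpp
// Software model of a hypothetical fused "multiply-accumulate with int8
// saturation" custom instruction, invented for illustration only.
#include <cstdint>
#include <cstdio>

// The fused operation: acc += w * x, clamped to the int8 range.
int32_t mac_sat8(int32_t acc, int8_t w, int8_t x) {
    int32_t r = acc + static_cast<int32_t>(w) * static_cast<int32_t>(x);
    if (r > 127)  r = 127;    // saturate high
    if (r < -128) r = -128;   // saturate low
    return r;
}

int main() {
    int8_t weights[4] = {12, -7, 3, 100};
    int8_t inputs[4]  = {4, 9, -2, 3};

    int32_t acc = 0;
    for (int i = 0; i < 4; ++i)
        acc = mac_sat8(acc, weights[i], inputs[i]);  // one "instruction" per tap

    printf("saturated accumulator = %d\n", acc);
    return 0;
}
```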


Urquhart said this works well for IoT devices, which need to do simple sensing or simple video processing. "For applications like augmented reality or autonomous driving, you're dealing with much larger amounts of video data. The way to handle it is to take advantage of the inherent parallelism of the data. There are a lot of examples of this. Google is said to be using its tensor processing unit for image recognition in its server farms. The TPU is a systolic array, so it processes matrices very efficiently. This is one approach the industry is taking."
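The systolic-array idea Urquhart refers to can be modeled in a few lines. In the sketch below, operands are skewed so that A[i][k] and B[k][j] meet at processing element (i,j) on cycle i + j + k, and each element simply multiplies and accumulates. It is a teaching model of the dataflow, not a description of any particular TPU generation.

```cpp
// Simplified cycle-by-cycle model of an output-stationary systolic array
// computing C = A x B. Each processing element (PE) holds one element of C
// and performs at most one multiply-accumulate per cycle.
#include <cstdio>

const int M = 2, K = 3, N = 2;  // small example dimensions

int main() {
    double A[M][K] = {{1, 2, 3}, {4, 5, 6}};
    double B[K][N] = {{7, 8}, {9, 10}, {11, 12}};
    double C[M][N] = {};

    // With operand skewing, all pairs have flowed through after M+N+K-2 cycles.
    int cycles = M + N + K - 2;
    for (int t = 0; t < cycles; ++t) {
        for (int i = 0; i < M; ++i) {
            for (int j = 0; j < N; ++j) {
                int k = t - i - j;                // operand pair arriving at PE(i,j) this cycle
                if (k >= 0 && k < K)
                    C[i][j] += A[i][k] * B[k][j]; // one MAC per PE per cycle
            }
        }
    }

    for (int i = 0; i < M; ++i)
        printf("C[%d] = { %.0f, %.0f }\n", i, C[i][0], C[i][1]);
    return 0;
}
```

The appeal of the structure is that operands are reused as they flow between neighboring elements instead of being re-fetched from memory, so nearly every element does useful work on every cycle.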


Where we are headed


To make progress in computational performance, one approach is to take relatively traditional cores and augment them with additional instructions or additional processing units, so that certain things can be sped up while retaining a certain amount of general-purpose functionality. "Otherwise, you're going to have to use the specialized arrays that some people talk about for AI/ML purposes," Urquhart said. "There are also novel approaches. Ten years ago you wouldn't have thought of using analog for matrix processing, but companies like Mythic have found that for inference purposes they don't need ultra-high precision. So they've been using analog arrays for matrix processing."


This illustrates today's proliferation of custom silicon approaches, which is having a pull effect on what the EDA ecosystem needs to deliver. EDA has been racing to provide solutions to the architecture, design and verification problems that design teams encounter.


"Rather than pushing these things, EDA is committed to making people's inventions possible," notes Simon Davidmann, CEO of Imperas Software, noting that EDA also tries to help those who push the boundaries of everything. "Often, those who push the boundaries tend to figure out their own way of doing things. Then EDA provides help to make it more cost effective, scalable and shareable. Then the industry can move forward, and it's not filled with proprietary one-off approaches. If there is a market for EDA, it tends to evolve. If someone has a crazy idea to do something, but no one else wants to do it, EDA won't touch it. They will have to build it themselves. And if they're pioneering a new way of doing things, which is a common problem.


It also demonstrates the capabilities of EDA engineers. "The people who build semiconductors and architectures are very smart, but EDA companies have to be equally smart to understand what they need and then help them do it - and do it in a more generic, cost-effective way," Davidmann said. "However, there is a tension between inventors inventing and EDA providing them with help. EDA works very hard to get close to the leaders and build effective solutions for them. The change or end of these laws will affect EDA, because the industry must keep striving to do better and do more. As architectures change, engineering teams need new technologies. EDA listens to the challenges and battles customers face and then tries to provide them with solutions. The end of these laws keeps the EDA world on its toes. You have to be very agile to fit into the EDA industry, or your technology will become irrelevant. EDA's goal is to solve the world's design implementation problems, do it better, and help customers do it."


Even with the challenges that the end of these traditional laws poses for chip design and EDA, there are still many possibilities. "It's really a question of the scope of optimization that can be addressed and how the system is partitioned," said Neil Hand, director of digital verification technology strategy at Siemens EDA.


Until recently, most designs relied on an initial best-effort system decomposition, followed by local implementation optimization of each part of the system. "While this was effective, it left a lot of optimization potential on the table," Hand said. "The key to unlocking this potential will be new and/or enhanced tools and methods that enable a model-based cybertronic systems engineering (MBCSE) approach to design, including informed functional assignment within the system. These tools and methods will allow system designers to perform system analysis and tradeoffs during the design process and monitor the design as it evolves."


While the concept is not new and has been successfully applied in other design disciplines, it needs to be adapted to electronics-centric systems and to traditional EDA tool users so the two can work together. "In addition to these new tools and methods, the EDA industry needs to work with the broader industry to create an ecosystem that can share system design data and create virtual, vertically integrated systems companies. So it's not just about tools and methods, but also about effective data and metadata sharing."


This includes workflow. Mueth of Keysight observes that workflows are the new frontier in the evolution of the EDA industry.


"Many of the technologies in EDA technology have matured to a large extent," Mueth says. "While everyone is making progress and solving problems incrementally, the biggest bottleneck right now is the workflows around these complex systems. You have to think about the entire product development cycle because that's the task at hand. Let's say you have a team that has put together a number of workflows across multiple functions and they're all working on designing the concept. Let's validate it. Let's transfer it to production. So it goes from concept to design and design verification, and then to prototype DVT testing. That's validation in the hardware space. Then comes pilot production, where you can do some limited runs and figure out how to make it really effective for manufacturing. Then comes manufacturing. This means that there are six major steps in the product development process. Today's workflow consists of many manual processes. The trick is to remove those, link everything to share IP, introduce digital threads, and include interoperability of data and tools. This has to be part of the ecosystem, but things are so complex that you can't manage this manually anymore."


It remains to be seen how this will apply to a market that is increasingly turning to custom designs. "We are already seeing many applications where software-based SoC or server-side solutions are no longer sufficient or competitive," said Stuart Clubb, director of CSD marketing for Siemens EDA. "Custom hardware accelerators are making inroads as a solution to provide lower power and higher performance, with the ability to adapt the hardware to specific application requirements."


Many of these accelerators are highly algorithmic by nature, and their design and verification in RTL remains a challenge in terms of time and engineering resources. System companies are building their own SoCs to meet their specific needs for the task at hand. In contrast, chip companies need to respond with a broad range of products, often containing variations of the same accelerator for different markets, Clubb said.


That's where high-level synthesis (HLS) and high-level verification (HLV) are growing in popularity.


Clubb explained that the combined use of HLS/HLV can significantly reduce design and verification time compared to traditional RTL while providing a more competitive solution in the accelerator space. He expects this market demand and application to continue to grow in a wide range of vertical markets, from battery-sensitive edge applications all the way to solutions in server farms. "System architects and chip designers need to build smarter, more specialized hardware to take advantage of available process nodes and transistors, but be mindful of the physical limitations we are seeing now as Amdahl's Law and Dennard scaling begin to bend and break," he said.
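For readers unfamiliar with HLS, the starting point is typically plain algorithmic C++ like the sketch below, a small FIR filter. An HLS tool synthesizes the loop into pipelined or unrolled hardware under user directives (the directive syntax is tool-specific and omitted here), and the same C++ can double as the reference model for high-level verification. This is a generic illustration, not a vendor example.

```cpp
// Algorithmic C++ of the kind an HLS flow typically starts from: a fixed-size
// FIR filter written as plain sequential code.
#include <cstdio>

const int TAPS = 8;

// One output sample of an 8-tap FIR filter. In an HLS flow this loop is the
// candidate for unrolling (parallel multipliers) or pipelining (one MAC/cycle).
int fir(const int coeff[TAPS], const int sample[TAPS]) {
    int acc = 0;
    for (int i = 0; i < TAPS; ++i)
        acc += coeff[i] * sample[i];
    return acc;
}

int main() {
    int coeff[TAPS]  = {1, 2, 3, 4, 4, 3, 2, 1};
    int sample[TAPS] = {5, 0, -3, 7, 2, 2, 1, 0};
    // The same function serves as the verification reference model (the HLV
    // side), so design and testbench both stay at the algorithmic level.
    printf("fir output = %d\n", fir(coeff, sample));
    return 0;
}
```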


Urquhart also noted that some of the major improvements in computing performance stemmed from the ASIC revolution of the 1990s.


"Then, in the early 2000s, more general-purpose compute units took over because they were then able to do the heavy lifting with EDA tools that included synthesis," he said. "With the transition to SoCs over the last decade, and other interesting things like creating small chips and packaging systems together, one of the key enablers - especially in SoCs - has been processor The availability of IP. But we've seen the limitations of it. Even Arm has an extremely broad product portfolio, from application processors to embedded processes. If you're going to get more performance, you're going to have to have further specialization. That means you will have to have a broader community involved in the design, or more likely fine-tune or customize the processor core. This becomes an EDA problem. There are many opportunities for processor design automation, and by automating the design, we will have to have a broader community designing or modifying the processor. In the past, it was either employees of microprocessor companies like Intel and AMD, or process or IP companies like Arm, Synopsys and Cadence, but we will have to open up to a broader community."


Conclusion


As chipmakers move from monolithic solutions to multi-chip solutions, new fundamental challenges emerge that require innovative solutions. "Semiconductor suppliers will face OCV issues and turnaround time issues for photomask-sized chips," said Movellus' Karvat. "At the package level there will be staging and power challenges, and we need to figure out how to make multi-chip solutions behave like monolithic solutions from a performance, verification and reliability perspective. EDA plays a pivotal role in this."


But it will require a substantial shift in semiconductor design, and IBM's Jacobi believes the semiconductor ecosystem doesn't yet fully understand what the end of Dennard scaling really means. "It will drive innovation, and it will drive other changes. Architects will contribute more by figuring out how things should work in a world where we can't tap the value that Moore's Law and Dennard scaling generated over the last 20 or 30 years. That trend is changing, and the role of the architect is becoming more important than ever."

