
Chip Design Magazine April-May 2010



April/May 2010

INTEGRATED IP GOES VERTICAL

PROTOTYPING OPTIONS FOR HARDWARE/SOFTWARE DEVELOPMENT

MAKING ABSTRACTION PRACTICAL

AVOID THAT EMBARRASSING CALL TO THE FIRMWARE VENDOR

Also in this issue:

Lead Story:

www.chipdesignmag.com

Affiliate Sponsors:

chipdesignmag.com: community portals, blogs, videos, trends-surveys, e-letters (IP, PLDs, and Chip Designer), resource catalog, back issues, ...

DESIGN MEETS AUTOMATION

SEE INSERT FOR DETAILS ON THE MAIN EVENT FOR ELECTRONIC DESIGN


In technical cooperation with:

Sponsored by:

FOR MORE DETAILS, VISIT: www.dac.com

KEYNOTES

TUESDAY, JUNE 15

WEDNESDAY, JUNE 16

THURSDAY, JUNE 17

Exciting Events on the Exhibit Floor
MONDAY - WEDNESDAY, JUNE 14-16, 9:00AM - 6:00PM

Technical Program Highlights

ECHOES OF DAC’S PAST: FROM PREDICTION TO REALIZATION, AND WATTS NEXT?

Bernard Meyerson

FROM CONTRACT TO COLLABORATION: DELIVERING A NEW APPROACH TO FOUNDRY

Douglas Grose

DESIGNING THE MOTOROLA DROID

Iqbal Arshad


ANAHEIM CONVENTION CENTER

JUNE 13-18 • Anaheim, CA USA

Panels
Tuesday, June 15

Wednesday, June 16

Thursday, June 17

Embedded/SOC Enablement Day - Thursday, June 17: “Embedded Systems Meet Hardware”

Tutorials
Monday, June 14

Friday, June 18

User Track Sessions
Posters and presentations by and for users of EDA tools.

Tuesday, June 15

Wednesday, June 16

Thursday, June 17

Management Day
Tuesday, June 15


www.chipdesignmag.com

Publisher & Sales Director: Karen Popp, (415) 255-0390 x19, [email protected]

EDITORIAL STAFF

Editor-in-Chief: John Blyler, (503) 614-1082, [email protected]
Consulting Editor: Ed Sperling
Managing Editor: Jim Kobylecky
Coordinating Regional Editor: Pallab Chatterjee
Associate Editor—China: Jane Lin-Li
Executive Editor – iDesign: Clive "Max" Maxfield
Contributing Editors: Cheryl Ajluni, Dave Bursky, Brian Fuller, Ann Steffora Mutschler, Craig Szydlowski

Editorial Board: Tom Anderson, Product Marketing Director, Cadence • Cheryl Ajluni, Technical Consultant, Custom Media Solutions • Karen Bartleson, Standards Program Manager, Synopsys • Chuck Byers, Director Communications, TSMC • Lisa Hebert, PR Manager, Agilent • Kathryn Kranen, CEO, Jasper Design Automation • Tom Moxon, Consultant, Moxon Design • Walter Ng, Senior Director, Design Services, Chartered Semiconductor • Scott Sandler, CEO, Novas Software • Steve Schulz, President, SI2 • Adam Traidman, Chip Estimate

CREATIVE/PRODUCTION

Graphic Designers: Keith Kelly & Brandon Solem
Production Coordinator: Spryte Heithecker

SALES STAFF

Advertising and Reprints: Karen Popp, (415) 255-0390 x19, [email protected]
Audience Development / Circulation: Jenna Johnson • [email protected]
President: Vince Ridley, (415) 255-0390 x18, [email protected]
Vice President, Marketing & Product Development: Karen Murray • [email protected]
Vice President, Business Development: Melissa Sterling • [email protected]
Vice President, Sales, Embedded Systems Media Group: Clair Bright • [email protected]

TO SUBSCRIBE OR UPDATE YOUR PROFILE: www.chipdesignmag.com/subscribe

SPECIAL THANKS TO OUR SPONSORS

Chip Design is sent free to design engineers and engineering managers in the U.S. and Canada developing advanced semiconductor designs. Price for international subscriptions is $125, US is $95 and Canada is $105.

Chip Design is published bimonthly by Extension Media LLC, 1786 18th Street, San Francisco, CA 94107. Copyright © 2009 by Extension Media LLC. All rights reserved. Printed in the U.S.

IN THIS ISSUE

Cover Story—Focus Report

14 Integrated IP Goes Vertical
By Ed Sperling

Departments

4 Chip Design Online
6 Editor's Note: EDA Tool Vendor - A Rose by any other Name? By John Blyler, Editor-in-Chief
10 In the News: People in the News By Jim Kobylecky
12 SoCs Move Beyond Digital and Memory Blocks By John Blyler
33 Dot.Org: Creatively Supporting the EDA Community By John Darringer, President of the IEEE Council on EDA
35 Top View: NoC Technology Offers Smaller, Faster, and More Efficient Solutions By K. Charles Janac, Arteris Holdings
36 No Respins: MEMS Is Poised to Cross the Chasm By Dr. Joost van Kuijk, Coventor

Features

16 Prototyping Options for Hardware/Software Development
How to choose the right prototype for pre-silicon software development.
By Frank Schirrmeister, Synopsys

19 Making Abstraction Practical
A Vision for a TLM-to-RTL Flow
By Lauro Rizzatti, EVE-USA

23 Avoid That Embarrassing Call to the Firmware Vendor (and Other Tricks of Low-Power Verification)
Simulation-based hardware/software co-verification lets you address power problems early and well.
By Marc Bryan and Barry Pangrle, Mentor Graphics

26 Low-Power Tradeoffs Start at the System Level
To succeed at the end, your design methodology must weigh all design constraints from the very beginning.
By Steve Svoboda, Cadence Design Systems

29 Take A New Approach to the Power-Optimization of Algorithms and Functions
By Rishiyur S. Nikhil, Bluespec, Inc.

READ ONLINE: www.chipdesignmag.com


Six powerful Virtex®-6 FPGAs, up to 24 million ASIC gates, clock speeds to 710 MHz: this new board races ahead of last-generation solutions. The Dini Group has implemented new Xilinx V6 technology in an easy-to-use PCIe-hosted or standalone board that features:

• 4 DDR3 SODIMMs, up to 4GB per socket

• Hosted in a 4-lane PCIe, Gen 1 slot

• 4 Serial-ATA ports for high speed data transfer

• Easy configuration via PCIe, USB, or GbE

• Three independent low-skew global clock networks

The higher gate count FPGAs, with 700 MHz LVDS chip-to-chip interconnects, provide easier logic partitioning. The on-board Marvell Dual Sheeva processor provides multiple high-speed interfaces optimized for data throughput. Both CPUs are capable of 2 GFLOPS and can be dedicated to customer applications.

Order this board stuffed with 6 SX475Ts—that’s 12,096 multipliers and more than 21 million ASIC logic gates—an ideal platform for your DSP-based algorithmic acceleration and HPC applications.

Don’t spin your wheels with last year’s FPGAs; call Dini Group today and run your bigger designs even faster.

www.dinigroup.com • 7469 Draper Avenue • La Jolla, CA 92037 • (858) 454-3419 • e-mail: [email protected]

DNV6F6PCIe


CHIP DESIGN ONLINE
www.chipdesignmag.com

Blogs: www.chipdesignmag.com/blogs

JB’S CIRCUIT
John finds blossoms and thorns woven into today’s EDA.

COLLABORATIVE ADVANTAGE
Steve Schulz explores the boundaries that work against cooperation in EDA standards.

NPD MANAGEMENT CORNER
Jeff Jorvig takes us inside new product development.

EDA THOUGHTS
Daniel Payne takes Single Event Upset for a Prius test drive.

WIZARDS OF MICROWAVE
Marc Petersen and Colin Warwick play Flip-Chip.

PALLAB’S PLACE
Pallab Chatterjee finds a welcome focus on innovative customer solutions.

TUNING IN TO JIM
Jim Lipman sees a positive direction for semiconductor development.

Look for Women in Electronic Design, a new blog series that will feature many of the engineers and companies who are the future of EDA.

KOBY’S KAOS
In defense of arrogance.

VISIT www.chipdesignmag.com TODAY

PORTALS

Visit our growing technology communities with other chip architects, engineers and managers.

System-Level Design portal:

www.chipdesignmag.com/sld

Low-Power Design portal:

www.chipdesignmag.com/lpd

Be sure to visit our growing selection of community blogs including Editor’s Note by Ed Sperling, ESL Edge by Jon McDonald, A View From the Top by Frank Schirrmeister, The Vipster by Vipin Tiwari, VC Corner with Jim Hogan and Peter L. Levin, Voltedge by Bhanu Kapoor, Power Play with Arvind Narayanan, Embedded Logic by Markus Levy, and Absolute Power with Cary Chin and Darin Hauer.

BUT WAIT, THERE’S MORE

New and familiar bloggers are coming to the individual resource pages of EECatalog at http://eecatalog.com/

ASIC-ASSP Prototyping Survey Report

Report ID: CS051608

Price: $299

EDA Tools-Trends Survey

Report ID: CDT063009

Price: $299

IDESIGN: www.chipdesignmag.com/idesign

Power Architecture ISA 2.06 Stride N Prefetch Engines to Boost Application's Performance
Three top engineers of the IBM India System and Technology Labs take us inside superscalar POWER design.

Billions of Cycles for Billions of Gates
How does a design team execute one-billion cycles on a one-billion gate design? Discover what happens when hardware-assisted verification comes into play.

e-Newsletters: www.chipdesignmag.com/enewsletters

Track the latest industry news and views with our e-newsletters.

CHIP DESIGNER

• The 5-minute Guide to Chip-level Audio Test
• Thanks for the Memories, But…
• IC Design Flow Must Evolve For Challenges of 28nm and Below

IP DESIGNER & INTEGRATOR

• Designers – Start Your Characterization Engines
• SoC Designers Must Have Tangible Quality Metrics for IP

PROGRAMMABLE LOGIC DEVICE DESIGNER

• Today’s FPGAs Offer High-Performance DSP Capabilities
• The Academy Award Goes to … Verification Engineers of all Stripes

CHIP DESIGNER PLUS

• Still Room for Startups?
• Typing words with your brain
• How to Stay Current When You’re Out of Work
• The Latest in Wearable Media Technology


PORTALS

www.SLDCommunity.com

www.LPDCommunity.com

Participate in these Growing Online Communities for System-Level and Low-Power Designers


EDITOR'S NOTE

EDA Tool Vendor – A Rose by any other Name?

By John Blyler, Editor-in-Chief

What is happening in the EDA industry? Irmgard Lafrentz, President and Founder of Globalpress, posed this question to me in a recent phone call. She did a good job of capturing the essence of the conversation in a recent blog: Something’s happening in EDA, and this time it’s good! (http://globalink.globalpresspr.com/blog/2010/04/somethings-happening-in-eda-and-this-time-its-good.html) I want to explore this question in a bit more detail.

First, let’s consider the media side of this question. With the collapse--but not total annihilation--of the print business model as a primary means for funding the development of meaningful content, EDA companies are finding fewer and fewer venues for their technology and product announcements. Couple that challenge with the necessity of reaching a global audience with their message. This means that EDA companies must look for coverage beyond the traditional sources--hence the push into online and social media outlets. (Yes – there are other reasons for the push, too.)

Secondly, many EDA companies are making a serious push into vertical markets with their technology and products, markets like medical, industrial, automotive and communications.

This “shift” away from being seen as an “EDA tool vendor” to instead being perceived as a “system solution provider” is both evolutionary and essential for business survival. EDA companies can no longer focus solely on the design and manufacture of the chips. Instead, they must consider the chips in relationship to the package and board--and even in terms of both hardware and software. This is one reason why IP has become so critical in the EDA tool chain.

This move away from the nomenclature of the “EDA tool vendor” can be seen in the restructuring of most of the big technical trade shows, too--like DAC and ESC.

So, if you don’t want to be known as an “EDA tool vendor,” then what is the correct phrase? System Solution Provider? That sounds rather PR-ish. But what phrase will capture the essence of today’s EDA company? I’m not sure. What do you think?

++++++++++++++

Selected responses:

None of the EDA vendors today will be able to become a solutions provider for any domain any time soon. While they have a large portfolio of products, this portfolio is absolutely incomplete and has gaping holes when looked at EDA vendor by EDA vendor. Any one vendor who wants to really succeed as a solutions provider will have to address the COT aspect and really cooperate with other EDA vendors to jointly integrate different vendor products into one solution. Something CAD teams do today, and do well. They are currently the real EDA solution providers in the IC industry.

Will such cooperation happen? With the current back-stabbing, throat-cutting attitude in the EDA industry, I really wonder.

Is this just a problem of scale, i.e. is the EDA industry too small? There are plenty of other examples in the SW domain where collaborations, networks, and seamless integration between multiple vendors are reality.

Once EDA realizes this as well, they have a chance to become solution providers. -- T



Let Our Eyes Catch Yours

PHY SerDes TX/RX PLL/DLL Analog Blocks

4423 Fortran Court, Suite 170, San Jose, CA 95134 • 408.942.9300

www.Mixel.com

Mobile • MIPI • MIPI/MDDI Unified Solution • MDDI

Timing • Frequency Synthesis • Fractional-N • Spread-spectrum • De-skewing • CDR

Storage & Networking • PCI-Express • DisplayPort • XAUI • Fibre-Channel • SATA • GPON

Interface • LVDS • CE-ATA • CardBus • PATA

• DDR2 • SSTL • HSTL • PCI-X

Come visit us at DAC Booth # 416


GENERAL CHAIR’S WELCOME

LET’S MEET AT DAC!

Dear Colleague: The 47th edition of the Design Automation Conference in Anaheim is just around the corner, and I look forward to welcoming you there. As the central “meeting place” for electronic design and design automation, where the industry puts on its grand annual show, there are many facets to DAC. It’s the place new contacts are made, where deals are sealed, where theory meets practice, where colleagues across the industry network, where the seeds of great new ideas are sown – and much more. DAC is our annual signpost that points the way to the future.

As organizers of the event, we work with DAC’s sponsors and hundreds of volunteers to make it worth your time to attend. This year, in addition to reinforcing traditional strengths, we have added a number of exciting new elements. Here’s a sample of what you can see at DAC:

• The keynote lineup features three distinguished and accomplished industry luminaries: Doug Grose, CEO of GLOBALFOUNDRIES, will address the central role of the foundry in electronic design on Tuesday. Bernie Meyerson, Vice President for Innovation at IBM Corporation, will discuss his vision for next-generation IT infrastructure for EDA and the move towards cloud computing on Wednesday, and Iqbal Arshad, Corporate Vice President of Innovation Products at Motorola, will overview his experiences in driving the Motorola Droid from concept to product on Thursday.

• A vibrant exhibition showcases nearly 200 companies, including all of the largest EDA vendors and a significant foundry presence. The Exhibitor Forum theater features focused technical presentations from exhibitors, while IC Design Central offers an exhibit area and a presentation stage that bring together the entire ecosystem for SOC enablement, including IP providers, design services providers, and foundries.

• A special Embedded/SOC Enablement Day on Thursday is designed to further advance DAC’s partner eco-system, attracting a mix of chip creators, ecosystem suppliers, and research-focused participants.

• A robust technical program includes an exciting array of panels and special sessions that complement a carefully selected subset of the contributed research papers.

• The User Track program, specifically designed by and for EDA tool users, features presentations and poster sessions that highlight outstanding solutions to critical design and methodology challenges, and case studies of innovative tool use. In its second year, it is 50% larger than last year’s acclaimed program.

• An excellent slate of tutorials covers topics such as low-power design, ESL, and software development for the EDA professional.

• Management Day includes invited presentations and networking opportunities for decision-makers in the industry, and highlights issues at the intersection of business and technology.

• An impressive constellation of fourteen colocated events and six DAC workshops complements the DAC program: this includes established conferences and symposia such as AHS, DFM&Y, DSNOC, HOST, HLDVT, NANOARCH, SASP, and SLIP, as well as meetings on emerging topics such as bio-design automation, mobile/cloud computing, and smart grids.

As you can see, there’s tons of good stuff in store – come join us in Anaheim!

Sachin S. Sapatnekar
General Chair, 47th DAC

TECHNICAL PROGRAM HIGHLIGHTS

The technical program for DAC 2010 features exceptional-quality technical papers, panels, special sessions, WACI (Wild and Crazy Ideas), full-day tutorials and the User Track. The program is tailored for researchers and developers in the electronic design and design automation industry, design engineers, and management. It highlights the advancements and emerging trends in the design of electronic circuits and systems.

The core of the technical program consists of 148 peer-reviewed papers selected from 607 submissions (a 24% acceptance ratio). Organized in 35 technical sessions, these papers cover a broad set of topics ranging from system-level design, low-power design, physical design and manufacturing, embedded systems, logic and high level synthesis, simulation, verification, test and emerging technologies.

Popular submission themes included:

1. Power Analysis and Low-Power Design

(83 submissions, 5 sessions)

2. Physical Design and Manufacturability

(72 submissions, 4 sessions)

3. System-Level Design and Analysis

(69 submissions, 4 sessions)

Some of the novel ideas presented in these papers include cutting-edge research in property checking, global routing, variation characterization, silicon mismatch, cache design for routers, rewiring, logic optimization with don’t cares, Boolean matching, and low energy processor design. The papers reflect the increasing importance of system-level design, low-power design and analysis, and physical design and manufacturability.


KEYNOTES

FROM CONTRACT TO COLLABORATION: DELIVERING A NEW APPROACH TO FOUNDRY
Douglas Grose, Chief Executive Officer, GLOBALFOUNDRIES, Sunnyvale, CA

The list of challenges facing the semiconductor industry is daunting. Chip design continues to increase in complexity, driven by product requirements that demand exponentially more performance, functionality and power efficiency, integrated into a smaller area. In parallel, manufacturing technology is facing increased challenges in materials, cost and shorter product lifecycles. This confluence of factors puts the industry at a crossroads and the foundry industry at center stage.

Chip design companies need to redefine relationships with their manufacturing partners, and foundries must create a new model that brings manufacturing and design into an integrated and collaborative process. This presentation will explore the challenges of bringing the next generation of chip innovation to market through leveraging an integrated global ecosystem of talent and technology. The world’s top design companies want more than a contract manufacturer; they want a level of collaboration and flexibility supported by a robust partner ecosystem of leading providers in the EDA, IP and design services sectors.

TUESDAY, JUNE 15

ECHOES OF DAC’S PAST: FROM PREDICTION TO REALIZATION, AND WATTS NEXT?
Bernard S. Meyerson, IBM Fellow, Vice President-Innovation, IBM Corp., Yorktown Hts., NY

Over the last five years the semiconductor industry has acknowledged, but struggled to deal with, the end of classical device scaling in silicon technology. This has had ramifications across all aspects of the technology spectrum, as a steady stream of innovations, ever more fundamental, have been required to drive accustomed generational improvements in Information Technology (IT). Adding to this challenge on the demand side there has been an accelerating and seemingly insatiable need for IT resources, driven by the emergence of the ‘Internet of Things’. With such heavy and growing IT demands, key metrics such as system power, cost/performance, and application specific benchmarks have become a core focus of emerging solutions. It is these same metrics and constraints that also require advances in the efficiency and optimization of IT. In this talk, I will review how our industry is dealing with each of these challenges, and explore emerging compute paradigms, such as Cloud Computing, that are impacting EDA directly.

WEDNESDAY, JUNE 16

DESIGNING THE MOTOROLA DROID
Iqbal Arshad, Corporate VP, Innovation Products, Motorola Mobile Devices, Inc., Schaumburg, IL

As mobile internet usage skyrockets and more sophisticated mobile applications are being developed, the device formerly known as the cell phone is at a major technological inflection point. To meet this challenge, we must design devices and services that enable a transformation in the way we work, socially interact, use the web and utilize computing power. A key ingredient to making this happen is the synthesis of new hardware that is tightly coupled with a new software experience or business opportunity. Similarly, when launching new high-technology products, the success of the product largely depends on how well the target consumer is educated about the availability and capability of the new device. This talk will discuss how designing the Droid helped Motorola to address this shift in the marketplace.

THURSDAY, JUNE 17


USER TRACK

The DAC User Track brings together IC designers from across the globe. The User Track program offers a unique opportunity to pick up the latest tips and tricks from industry-expert IC designers, and features over 110 presentations on a wide variety of topics. Designers from Intel, IBM, Samsung, TI, Toshiba, Qualcomm, AMD, Freescale and other leading IC companies will present their experiences on building effective design flows, design methods, and tool usage. Come to DAC to attend the User Track - there is no other way to improve your ‘design IQ’ in just a short amount of time!

The list of topics is longer and more diverse than ever, and there is something in the User Track for every designer. Low-power design is one of the core topics addressed at the system-level, RTL, and during the place-and-route stages of design. The User Track features talks on efficient design for low-power and on power delivery on the chip and through the package. In other talks, designers will address dealing with the significant variability at 32nm and below. You will find interesting presentations on variation-robust design methods, with ways to quickly converge on high-yield designs. Timing closure is another theme that is addressed by speakers from several perspectives, both front-end and back-end. Designers will present innovative ways for partitioning, budgeting and retiming. Also featured are presentations on several timing-driven ECO physical optimization methods, system-level case studies that use formal verification, and much more. Please check out the program on the new dac.com website for the full details on the presentations.

The Design Automation Conference and the User Track bring together thousands of like-minded professionals, making this event an opportunity you cannot miss. The User Track runs for three packed days as a parallel track within the DAC technical program. Learn from expert designers in person, and find out the truth about design tools. Stroll through the DAC trade show, attend keynotes and cutting-edge technical sessions, or just talk to colleagues from other companies. Whether it’s for the full three days or just a single day, DAC has it all. And since DAC 2010 is held right next to Disneyland, this is a great opportunity to bring your family along.

USER TRACK SESSIONS

• Timing is Everything

• Front-End Design Experiences

• Taming Back-End Verification and DFM

• Case Studies in Formal Verification

• Cornered: Dealing with Variability

• Front-End Testing and Verification

• Power Delivery from Package to Chip

• Advances in System-Level Design and Synthesis

• User Track Poster Sessions

MANAGEMENT DAY Tuesday, June 15

Management Day 2010 is focused on issues at the intersection of business and technology, and is specifically directed to managers and decision-makers. Three sessions make up this year’s event. Two sessions will feature managers representing IDMs, fab-lite ASIC providers, and fabless companies, as well as senior managers designing today’s most complex nanometer chips, and will discuss the latest solutions and their economic impact. The third session will be a panel that involves the presenters and the audience in a brainstorming discussion.

EMBEDDED/SOC ENABLEMENT DAY Thursday, June 17

The Embedded/SoC Enablement Day is dedicated to bringing industry stakeholders together in one room to shed light on where SoC design is headed. The event comprises presentations from leading SoC enabling sectors including embedded processors, embedded systems, EDA, FPGA, IP, foundry, and design services. Presenters will focus on the optimization of embedded and application-domain-specific operating systems, system architectures for future SoCs, application-specific architectures based on embedded processors, and technical/business decision making processes by program developers. This program consists of three sessions, and provides an opportunity to foster discussions that address all aspects of the SoC development ecosystem.

TUTORIALS

The DAC program includes seven tutorials on timely subjects, including four design topics: 3-D integrated circuits, analog mixed-signal design, system-level design, and low-power design. This year also features tutorials on two special topics. The first is an overview of software engineering that includes introductions to agile, lean, scrum, and other software best practices—topics that will offer immediate and practical value to students, EDA developers, and SOC firmware engineers. The second is a tutorial on the importance of effective marketing and should appeal to a broad range of DAC attendees who wish to better understand this aspect of business success. As in the past, the goal of the DAC tutorials is to provide practical, usable, and up-to-date knowledge that attendees can immediately apply in their jobs or studies.

MONDAY TUTORIALS

• ESL Design and Virtual Prototyping of MPSOCs

• Low-Power from A to Z

• Marketing of Technology - The Last Critical Step

FRIDAY TUTORIALS

• 3-D: New Dimensions in IC Design

• Advancing the State-of-the-Art in Analog Circuit Optimizers

• Best Practices for Writing Better Software

• SystemC for Holistic System Design with Digital Hardware, Analog Hardware, and Software

Detailed conference and exhibition information is now available online: www.dac.com.

Register today!

QUESTIONS? Call +1-303-530-4333

Sponsored by:


PANELS

This year’s DAC panels cover nearly every aspect of the design flow. The panel sessions start off with a look to the future by a wide range of leaders from the semiconductor industry. The other seven panels have something for everyone. Panels will explore the future of TSV/3D technology, the current state of high-level synthesis, different approaches to addressing process variability, the future of low-power design methodologies, and how to bridge pre-silicon verification and post-silicon validation. One panel will also take a look at what is needed for an always-connected car. Finally, if you’ve wondered what cloud computing is all about, a panel will explore how cloud computing fits in with the EDA industry.

TUESDAY, JUNE 15

• EDA Challenges and Options: Investing For the Future

• Bridging Pre-Silicon Verification and Post-Silicon Validation

• Who Solves the Variability Problem?

WEDNESDAY, JUNE 16

• 3-D Stacked Die: Now or the Future?

• Does IC Design Have a Future in the Clouds?

• What’s Cool for the Future of Ultra Low-Power Designs?

THURSDAY, JUNE 17

• Designing the Always-Connected Car of the Future

• Joint User Track Panel - (Session 8UB) - What Will Make Your Next Design Experience a Much Better One?

• What Input Language is the Best Choice for High-Level Synthesis (HLS)?

SPECIAL SESSIONS

Special sessions will deal with a wide variety of themes, such as progress in networks-on-chip research, virtualization for mobile embedded devices, challenges in analog modeling, an introduction to cyber-physical systems, design for reliability, designing resilient systems from unreliable components, a holistic view on energy management (from cell phones to power grids), and post-silicon validation. Leading research and industry experts will present their views on these topics.

TUESDAY, JUNE 15

• Post-Silicon Validation or Avoiding the $50 Million Paperweight

• Virtualization in the Embedded Systems: Where Do We Go?

• Joint DAC/IWBDA Special Session - Engineering Biology: Fundamentals and Applications

WEDNESDAY, JUNE 16

• A Decade of NOC Research - Where Do We Stand?

• The Analog Model Crisis - How Can We Solve It?

• Design Closure for Reliability

THURSDAY, JUNE 17

• WACI: Wild and Crazy Ideas

• Cyber-Physical Systems Demystified

• Computing Without Guarantees

• Smart Power: From your Cell Phone to your Home

WORKSHOPS

SUNDAY, JUNE 13

• DAC Workshop on Synergies between Design Automation & Smart Grid

• Multiprocessor System-On-Chip (MPSOC): Programmability, Run-Time Support and Hardware Platforms for High Performance Applications at DAC

• DAC Workshop on Diagnostic Services in Network-On-Chips (DSNOC) - 4th Edition

MONDAY, JUNE 14

• IWBDA: International Workshop on Bio-Design Automation at DAC

• DAC Workshop on “Mobile and Cloud Computing”

• DAC Workshop: More Than Core Competence...What it Takes for Your Career to Survive, and Thrive! Hosted by Women in Electronic Design (WWED)

COLOCATED EVENTS

FRIDAY, JUNE 11

• IEEE International High-Level Design Validation and Test Workshop (HLDVT 2010)

SUNDAY, JUNE 13

• International Symposium on Hardware-Oriented Security and Trust (HOST)

• 8th IEEE Symposium on Application Specific Processors (SASP 2010)

• Design for Manufacturability Coalition Workshop - “A New Era for DFM”

• IEEE/ACM 12th International Workshop on System-Level Interconnect Prediction (SLIP)

• North American SystemC Users Group (NASCUG 13 Meeting)

• System and SOC Debug Integration and Applications

MONDAY, JUNE 14

• 4th IEEE International Workshop on Design for Manufacturability & Yield (DFM&Y)

• Choosing Advanced Verification Methods: So Many Possibilities, So Little Time

• Advances in Process Design Kits Workshop

TUESDAY, JUNE 15

• ACM Research Competition

• NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2010)

THURSDAY, JUNE 17

• IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’10)

FRIDAY, JUNE 18

• 19th International Workshop on Logic & Synthesis (IWLS)


EXHIBITOR LIST (AS OF APRIL 12, 2010)
Orange text denotes a new exhibitor.

Accelicon Technologies, Inc. ACCIT - New Systems Research ACE Associated Compiler Experts bv Agilent Technologies Agnisys, Inc. Aldec, Inc. Altair Engineering Altos Design Automation Amiq Consulting S.R.L. AnaGlobe Technology, Inc. Analog Bits Inc. Apache Design Solutions, Inc. Applied Simulation Technology Artwork Conversion Software, Inc. ASIC Analytic, LLC ATEEDA Atoptech Atrenta Inc. austriamicrosystems AutoESL Design Technologies, Inc. Avant Technology Inc. Avery Design Systems, Inc. Axiom Design Automation BEEcube, Inc. Berkeley Design Automation, Inc. BigC Blue Pearl Software Bluespec, Inc. Breker Verification Systems Cadence Design Systems, Inc. Calypto Design Systems Cambridge Analog Technologies CAST, Inc. ChipEstimate.com Ciranova, Inc. CISC Semiconductor Design+Consulting GmbH ClioSoft, Inc. CMP CoFluent Design Concept Engineering GmbH Coupling Wave Solutions CST of America, Inc. DAC Pavilion Dassault Systemes Americas Corp. DATE 2011 Denali Software, Inc. Design and Reuse Dini Group DOCEA Power Dorado Design Automation, Inc. Duolog Technologies Ltd. E-System Design EDA Cafe-IB Systems EDXACT SA Entasys Inc. Enterpoint Ltd. EVE-USA, Inc. Exhibitor Forum ExpertIO, Inc. Extension Media LLC Extreme DA FishTail Design Automation, Inc. Forte Design Systems Gary Stringham & Associates, LLC GateRocket, Inc. GiDEL Global Foundries Gradient Design Automation

Helic, Inc. Hewlett-Packard Co. HiPEAC IBM Corp. IC Manage, Inc. ICDC Partner Pavilion & Stage IMEC - Europractice Imera Systems, Inc. Infotech Enterprises iNoCs Interra Systems, Inc. Jasper Design Automation, Inc. Jspeed Design Automation, Inc. JTAG Technologies Laflin Limited Legend Design Technology, Inc. Library Technologies, Inc. Lynguent, Inc. Magillem Design Services Magma Design Automation, Inc. Magwel NV MathWorks, Inc. (The) Menta Mentor Graphics Corp. Mephisto Design Automation Methodics LLC Micro Magic, Inc. Micrologic Design Automation, Inc. Mirabilis Design Inc. Mixel, Inc. MOSIS MunEDA GmbH Nangate NextOp Software, Inc. Nusym Technology, Inc. Oasys Design Systems, Inc. OneSpin Solutions GmbH OptEM Engineering Inc. OVM World Physware, Inc. PLDA POLYTEDA Software Corp. Progate Group Corp. Prolific, Inc. Pulsic Inc. R3 Logic Inc. Rapid Bridge, LLC Real Intent, Inc. Reed Business Information RTC Group - EDA Tech Forum Runtime Design Automation Sagantec Sapient Systems Satin IP Technologies Seloco, Inc. Semifore, Inc. Si2 Sigrity, Inc. Silicon Design Solutions Silicon Frontline Technology SKILLCAD Inc. Solido Design Automation Sonnet Software, Inc. Springer SpringSoft, Inc. StarNet Communications Synapse Design Synchronicity - see Dassault Systèmes

Synfora, Inc. Synopsys, Inc. Synopsys, Inc. - Standards Booth Synopsys-ARM-Common Platform Innovation SynTest Technologies, Inc. Tanner EDA Target Compiler Technologies NV Teklatech Tela Innovations Tiempo TOOL Corp. True Circuits, Inc. TSMC TSMC Open Innovation Forum, Apache TSMC Open Innovation Forum, Cadence TSMC Open Innovation Forum, eSilicon TSMC Open Innovation Forum, Helic, Inc. TSMC Open Innovation Forum, Integrand TSMC Open Innovation Forum, Lorentz TSMC Open Innovation Forum, Magma TSMC Open Innovation Forum, Mentor TSMC Open Innovation Forum, MoSys TSMC Open Innovation Forum, Solido TSMC Open Innovation Forum, SpringSoft TSMC Open Innovation Forum, Synopsys TSMC Open Innovation Forum, Tela Innovations TSMC Open Innovation Forum, Virage Logic TSSI - Test Systems Strategies, Inc. Tuscany Design Automation, Inc. UMIC Research Centre Uniquify, Inc. Univa UD Vennsa Technologies, Inc. Verific Design Automation Veritools, Inc. WinterLogic Inc. X-FAB Semiconductor Foundries XJTAG XYALIS Z Circuit Automation Zocalo Tech, Inc.



EXHIBITOR FORUM DAC is continuing the popular Exhibitor Forum again this year. The Exhibitor Forum provides a theater on the exhibit floor where exhibitors present focused, practical technical content to attendees. The presentations are selected by an all-user Exhibitor Forum Committee chaired by Magdy Abadir of Freescale Semiconductor, Inc. Each session is devoted entirely to a specific domain (e.g., verification or system-level design) and consists of presentations from three companies.

The Exhibitor Forum is in Hall B in Booth 1684. Topics include: System-Level Design/Embedded Software, Physical Design and Sign-Off, Verification, Power Management/Signal Integrity, Analog/Mixed-Signal and RF, Design for Manufacturability, Intellectual Property Cores, Design for Test and Manufacturing Test, Package Design, and Silicon Validation and Debug.

DAC PAVILION

The popular DAC Pavilion is located in Hall C in Booth 694. The DAC Pavilion will feature 17 presentations on business and technical issues.

MONDAY, JUNE 14

• Gary Smith on EDA: Trends and What’s Hot at DAC

• The Multiplier Effect: Developing Multi-Core, Multi-OS Applications

• Career Outlook: Job Market 2010

• Outsourcing...!@#$*&!!?

• EDA Heritage - Meet Verilog Inventor Dr. Moorby and Formal Verification Pioneer Prof. Bryant

• A Conversation with the 2010 Marie Pistilli Award Winner

TUESDAY, JUNE 15

• Hogan’s Heroes: What Design and Lithography Nightmares will 22nm Bring?

• Everyone Loves a Teardown (ARM)

• Is the FPGA Tool Opportunity an Oasis or a Mirage?

• 28nm and Below: SOC Design Ecosystem at a Crossroad

• Hot and SPICEy: Users Review Different Flavors of SPICE and Fast SPICE

WEDNESDAY, JUNE 16

• Lucio’s Litmus Test: Is Your Start-Up Ready for the 21st Century?

• IP Commercialization: Beyond the Code

• Everyone Loves a Teardown (Virage Logic)

• High-School Panel: You Don’t Know Jack!

• Analog Interoperability: What’s the ROI?

• SOC Verification: Are We There Yet?

THE IC DESIGN CENTRAL PARTNER PAVILION—PUTTING MORE DESIGN INTO DAC

The IC Design Central Partner Pavilion, located in Hall B, stage #1710, brings together vendors supplying products and services that address many of the critical design functions necessary to produce working silicon on time and on budget. Companies from all areas of the design and product development process—EDA, Foundry, IP, Design Services, Assembly/Package, Test, and System Interconnect—must cooperate to offer integrated front-to-back solutions that ensure first-time-successful silicon and predictable time-to-market. Visit the ICDC Partner Pavilion and find design flows and solutions needed to create today’s challenging designs.

The ICDC Partner Pavilion is a combination of exhibit booths and 30-minute presentations by each participating vendor. The combination of product displays in the exhibits and technical product presentations in the ICDC Theater offers attendees an in-depth look into flows and methodologies from vendors featuring a variety of products and services for the entire design ecosystem.

CURRENT PARTICIPATING ICDC EXHIBITORS INCLUDE:

EXHIBIT-ONLY PASS

Register for an exhibit-only pass and receive admission to all days of the exhibition, all Keynotes, all DAC Pavilion and Exhibitor Forum sessions, the IC Design Central Partner Pavilion, plus the Tuesday night DAC party and a T-shirt, all for $50 when you register by May 17.

EXHIBITION HOURS
MONDAY, JUNE 14 - WEDNESDAY, JUNE 16, 9:00am - 6:00pm

Altair Engineering

Amiq Consulting S.R.L.

ASIC Analytic, LLC

Avant Technology Inc.

BEEcube, Inc.

Cambridge Analog Technologies

CoFluent Design

CISC Semiconductor Design & Consulting

Enterpoint Ltd.

ExpertIO, Inc.

Gary Stringham & Associates, LLC

IBM Corp.

iNoCs

Progate Group Corp.

R3 Logic Inc.

TSSI - Test Systems Strategies, Inc.

X-FAB Semiconductor Foundries

Zocalo Tech, Inc.

EXHIBITION

The 47th DAC exhibition is located in Halls B and C of the Anaheim Convention Center.

Visit the DAC exhibition for an in-depth view of new products and services from nearly 200 vendors spanning all aspects of the electronic design process, including EDA tools, IP cores, embedded system and system-level tools, as well as silicon foundry and design services.

Sponsored by:


REGISTRATION OPTIONS: Internet registration is open through June 18. Mail/fax registrations are accepted through June 8.

FULL CONFERENCE REGISTRATION includes: access to all three days of the Technical Sessions, User Track Sessions, Embedded/SOC Enablement Day, access to the Exhibition, Monday through Wednesday, the 47 Years of DAC DVD Proceedings and the Tuesday Night Party.

STUDENT FULL CONFERENCE REGISTRATION (IEEE OR ACM MEMBER)

A special student rate applies to individuals who are members of ACM or IEEE and are currently enrolled in school. Students must provide a valid ACM or IEEE student membership number and a valid student ID. ACM/IEEE Student registration includes: all three days of the Technical Conference, Embedded/SOC Enablement Day, access to the Exhibition, Monday through Wednesday, the 47 Years of DAC DVD Proceedings and the Tuesday Night Party.

ONE/TWO-DAY REGISTRATION includes: the day(s) you select for the Technical Conference, access to the Exhibition and User Track (UT) Sessions, Monday through Wednesday, and the “47 Years of DAC” DVD Proceedings.

EXHIBIT-ONLY REGISTRATION allows admittance to the Exhibition, Monday through Wednesday and includes the Tuesday Night Party.

USER TRACK SESSIONS registration includes entrance to the Exhibition, Monday through Wednesday and all Keynotes. User Track Sessions are included in the Full Conference registration and the One-/Two-day registration on the day(s) attending the technical conference.

MANAGEMENT DAY registration for this event includes entrance to the Exhibition, Monday through Wednesday, and all Keynotes.

TUTORIALS are offered on Monday, June 14 and Friday, June 18. There is one quarter-day tutorial, two half-day tutorials, and four full-day tutorials. The full-day tutorial registration fee includes: continental breakfast, lunch, refreshments and tutorial notes. The half-day tutorial registration fee includes: continental breakfast, refreshments and tutorial notes. The quarter-day tutorial registration fee includes: refreshments and tutorial notes.

EMBEDDED/SOC ENABLEMENT DAY is a day-long track of sessions dedicated to bringing industry stakeholders together in one room to shed light on where SOC design is headed. The day is comprised of presentations from leading SOC enabling sectors, including embedded processors, embedded systems, EDA, FPGA, IP, foundry, and design services.

REGISTRATION RATES (Advance rate: received by May 17 • Late/On-site rate: received after May 17)
Three-rate entries list ACM/IEEE Member / Non-member / ACM/IEEE Student rates.

Full Conference: Advance $475 / $595 / $230; Late/On-site $570 / $695 / $295
One-Day Only (Tue., Wed., or Thurs.): $325 advance, $325 late/on-site
Two-Day Only (Tue., Wed., Thurs.): $525 advance, $525 late/on-site
Exhibit-only, access all days (Mon.-Wed.): $50 advance, $95 late/on-site
Monday Exhibit-only: FREE
Management Day (Tuesday): $95
Embedded/SOC Enablement Day (Thursday): $95
Tutorials, Full-day: $300 / $400 / $200
Tutorials, Half-day: $180 / $240 / $120
Tutorials, Quarter-day: $100 / $130 / $80
User Track Sessions: $185 advance, $240 late/on-site

WORKSHOP REGISTRATION (Advance / Late-On-site)

Sunday, June 13:
• DAC Workshop on Diagnostic Services in Network-on-Chips (DSNOC) - 4th Edition: $150 / $195
• Multiprocessor System-On-Chip (MPSOC): Programmability, Run-Time Support and Hardware Platforms for High Performance Applications at DAC: $150 / $195
• DAC Workshop on Synergies between Design Automation & Smart Grid: $150 / $195

Monday, June 14:
• DAC Workshop: More Than Core Competence...What it Takes for Your Career to Survive, and Thrive! Hosted by Women in Electronic Design (WWED): FREE up to 100 attendees
• DAC Workshop on “Mobile and Cloud Computing”: $150 / $195

Monday, June 14 & Tuesday, June 15:
• International Workshop on Bio-Design Automation at DAC (IWBDA): $230 / $305

WORKSHOPS/COLOCATED EVENT registration also includes entrance to the Exhibition, Monday through Wednesday.

Visit the DAC website for online registration, complete conference and exhibition details, travel and hotel reservations and information on visiting Anaheim at www.dac.com.

CANCELLATION/REFUND POLICY: Written requests for cancellations must be received in the DAC office by Monday, May 17, 2010, and are subject to a $25.00 processing fee. Cancellations received after May 17, 2010 will NOT be honored and all registration fees will be forfeited. No faxed or mailed registrations will be accepted after June 8, 2010.

Telephone registrations are not accepted!

Faxed or mailed registrations without payment will be discarded.

COLOCATED EVENT REGISTRATION (Advance / Late-Onsite; rates listed as ACM/IEEE Member / Non-member / ACM/IEEE Student Member / Student Non-member)

• IEEE International High-Level Design Validation and Test Workshop 2010 (HLDVT), Fri., June 11 - Sat., June 12: Advance $350 / $450 / $250 / $250; Late/Onsite $450 / $575 / $300 / $300
• IEEE/ACM 12th International Workshop on System-Level Interconnect Prediction (SLIP), Sun., June 13: Advance $250 / $320 / $200 / $200; Late/Onsite $300 / $370 / $250 / $250
• Design for Manufacturability Coalition Workshop, Sun., June 13: FREE
• IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), Sun., June 13 - Mon., June 14: Advance $300 / $375 / $150 / $190; Late/Onsite $360 / $450 / $180 / $225
• 8th IEEE Symposium on Application Specific Processors (SASP 2010): Advance $315 / $410 / $190 / $190; Late/Onsite $420 / $530 / $245 / $245
• 4th IEEE International Workshop on Design for Manufacturability & Yield (DFM&Y), Mon., June 14: Advance $150 / $200 / $100 / $150; Late/Onsite $200 / $250 / $130 / $200
• Advances in Process Design Kits Workshop, Mon., June 14: FREE
• NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2010), Tue., June 15 - Fri., June 18: Advance $560 / $560 / $410 / $410; Late/Onsite $690 / $690 / $510 / $510
• IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’10), Thurs., June 17 & Fri., June 18: Advance $285 / $375 / $225 / $300; Late/Onsite $350 / $440 / $240 / $315

Register Online by May 17 and Save!


Visit Our Website!

More of everything you’ve come to expect from Chip Design:

News

Technology Trends

Design Centers

Blogs

iDesign

Focus Report

Commentary

Technical Papers

Resource Catalogs and Guides

Email Newsletters

ChipDesignMag.com

Dedicated to the information needs of the IC design market


IN THE NEWS: People in the News
By Jim Kobylecky, Managing Editor

SANDEEP VIJ NAMED CEO OF MIPS TECHNOLOGIES
MIPS Technologies, Inc. has appointed Sandeep Vij as president, chief executive officer and director. Mr. Vij brings to MIPS Technologies more than 20 years of senior-level management and marketing experience in the semiconductor industry. Prior to joining MIPS Technologies, he was vice president and general manager of the Broadband and Consumer Division of Cavium Networks. Mr. Vij also held senior roles with Xilinx Inc. and Altera. He is a graduate of General Electric’s Edison Engineering Program and Advanced Courses in Engineering and holds an MSEE from Stanford University and a BSEE from San Jose State University.

THOMAS KAILATH WINS BBVA FOUNDATION AWARD FOR CHIP MINIATURIZATION
A BBVA Foundation Frontiers of Knowledge Award went to engineer and mathematician Thomas Kailath (Pune, India, 1935), the Hitachi America Professor of Engineering at Stanford University, for mathematical developments enabling the production of increasingly small chips. Kailath has invented methods to pattern integrated circuits with components finer even than the lightwaves used in their production. "I was able to see the opportunities and enter new fields because I learned to use my students as intelligence amplifiers," says Kailath. "So I regard this prize as a tribute also to them, to their brilliance and dedication." No comparable award scheme reserves a category for Information Technologies. At 400,000 euros, it is the largest monetary award in the ICT field.

XMOS APPOINTS CHARLES COTTON INTERIM CEO
XMOS has appointed Charles Cotton as a board member and interim Chief Executive Officer. Cotton's deep experience in leading and managing semiconductor and software companies includes his roles as Executive Chairman of GlobespanVirata Inc. and CEO of Virata Corp. before its merger with Globespan. He is currently a Director of semiconductor companies Solarflare and Staccato and two other companies in California. His UK directorships include Cambridge Enterprise, the University of Cambridge organization responsible for spin-outs, licensing and consulting activities. Cotton was a Board member of digital maps supplier Tele Atlas prior to its acquisition by TomTom.

ESILICON NAMES AJAY LALWANI VP OF STRATEGIC SOURCING
eSilicon has appointed Ajay Lalwani as vice president, strategic sourcing. In this role, Lalwani is responsible for expanding eSilicon's capabilities in strategic sourcing across eSilicon's entire global semiconductor supply chain. Additionally, Lalwani will develop and manage strategic alliances with key IP, EDA, photomask, wafer, assembly and test suppliers. He has over 22 years of experience in the industry and has gained extensive insights into how business strategy, sales, marketing and organizational development impact supply chains. Lalwani has a BSEE and MBA from Santa Clara University.

CHIL SEMICONDUCTOR BOARD ADDS INTEL VP THOMAS MACDONALD
Thomas R. Macdonald has joined the board of CHiL Semiconductor Corporation. Mr. Macdonald has held a range of general management and high-level strategic microprocessor and platform marketing management positions since joining Intel in 1988. Mr. Macdonald received his bachelor's degree in mechanical engineering from Stanford University and his MBA from the Kellogg Graduate School of Management, Northwestern University.

DR. LEON O. CHUA RECEIVES ISQED QUALITY AWARD
The International Society for Quality Electronic Design announced the winner of the prestigious 2010 ISQED Quality Award (IQ-Award), Dr. Leon O. Chua of the Electrical Engineering and Computer Sciences Department, University of California, Berkeley. Dr. Chua is well known as a pioneer in three major research areas: nonlinear circuits, chaos and cellular neural networks. His work in these areas has been recognized internationally by major awards, including 12 honorary doctorates from major universities in Europe and Japan and seven USA patents.


TRIDENT MICROSYSTEMS SELECTS DR. J. DUANE NORTHCUTT AS CTO
Trident Microsystems, Inc. has named Dr. J. Duane Northcutt as Chief Technology Officer (CTO). Prior to joining Trident, Dr. Northcutt was with Silicon Image, Inc. and was a Distinguished Engineer at Sun Microsystems. Earlier, he was a member of the research faculty at Carnegie Mellon University's School of Computer Science. He currently holds over twenty patents. Dr. Northcutt received both a Master of Science degree and a Ph.D. in computer and electrical engineering from Carnegie Mellon University.

ROB ROY JOINS ATRENTA EXECUTIVE TEAM
Atrenta Inc. has selected Dr. Rob Roy to be chief of business development. Dr. Roy was a co-founder and chief technology strategist of Mobilian Corporation. His previous experience includes engineering and management positions at Intel, NEC, GE, and AT&T Bell Labs. Dr. Roy has published over 50 research papers in prestigious journals and international conferences, including three highly prestigious Best Paper Awards. He holds 15 patents. He earned his M.S. and Ph.D. degrees in Electrical & Computer Engineering from the University of Illinois at Urbana-Champaign.

MAGMA NAMES ALOK MEHROTRA MANAGING DIRECTOR OF INDIA
Magma Design Automation Inc. has appointed Alok Mehrotra as managing director of the company's India operations. More than 30 percent of Magma's worldwide workforce operates in the company's Bangalore, Mumbai and Noida facilities. Mehrotra had previously worked for the company as director of Asia-Pacific Sales from 2001 to 2005, when he established operations in India, Singapore, Malaysia and Australia. Mehrotra holds an MBA from Santa Clara University; an M.S. in electrical engineering from the State University of New York at Stony Brook; and a B.S. degree in electronics & communication engineering from Manipal University in Karnataka, India.

VERILAB PROMOTES JL GRAY TO VICE PRESIDENT
Verilab Ltd. has promoted JL Gray to the position of vice president, reporting directly to chief executive officer Tommy Kelly. JL has contributed to the EDA industry as Verilab’s representative on the Accellera Verification IP Technical Subcommittee. He is also the author of “Cool Verification,” a blog about hardware verification from a consultant’s perspective. He has worked extensively on the application of social media to the EDA industry as a means of fostering collaboration in the wider engineering community. JL has a BSEE from Purdue University in West Lafayette, Indiana.

CYCLEO HIRES FRANÇOIS SFORZA AS VP OF SALES
Cycleo SAS has named François Sforza to the position of Cycleo VP of Sales. Sforza brings more than 20 years of sales and marketing management experience in wireless and semiconductor companies. Prior to joining Cycleo, he was Regional Manager for Europe at Wipro Technologies, providing Global Solution Services to the Telecom, Semiconductor, Consumer and Automotive industries. François Hede, Cycleo’s CEO, said that “François has a proven track record in the Wireless and Semiconductor markets, as well as developing an ecosystem with partners & end-users to help make our customers successful.”

MAGMA APPOINTS NORIAKI KIKUCHI PRESIDENT OF MAGMA KK
Magma Design Automation Inc. has named Noriaki Kikuchi president of Magma KK, Magma’s Japanese subsidiary. Kikuchi has more than 30 years of experience in electronic design automation and other technology industries. With a history of sales and senior management positions showing increasing responsibilities, Kikuchi most recently was president of Japan operations for Brion Technologies Inc., a subsidiary of ASML. Previously he was president of Tera Systems Japan, and held senior sales and field operations positions with Synopsys Japan and Seiko Instruments Inc. Kikuchi holds a Bachelor of Arts degree in management from Chuo University in Tokyo.


BEHIND THE NUMBERS

SoCs Move beyond Digital and Memory Blocks
By John Blyler, Editor-in-Chief

What is the functional make-up of today’s System-on-Chip (SoC) designs? Unlike the past, when SoCs were dominated by digital logic and memory cores, today’s devices contain a more balanced set of functional blocks. This viewpoint is supported by the findings from a recent survey of the Chip Design magazine readership (see Figure). The response was surprising in that Digital Logic design edged out Analog and Mixed Signal (AMS) by a slight margin – 6 percent. Further, embedded processor cores edged out memory in terms of the type of functional blocks utilized in SoCs.

The shrinking cost, improved performance and lower power of embedded processors, married with the strong growth in consumer and mobile devices, is one reason for the increased presence of embedded cores. A separate survey question concerning the type of processor IP used in SoC designs revealed ARC as the leader, followed by ARM, MIPS, Intel and others.

Interestingly, when survey respondents were asked about third-party IP usage for their SoC designs, they selected AMS blocks as dominant, followed at a distant second by embedded processor, memory and digital logic IP. These trends complement the move toward more embedded processor designs and the increasing need for connectivity and sensors in the growing market for consumer and mobile devices.

[Figure: IC Functional Block usage. Analog and Mixed Signal: 59 responses (20%); Digital Logic: 77 (26%); Embedded Processors: 49 (17%); Input-Output: 40 (14%); Memory: 47 (16%); RF/Wireless: 22 (7%); Blank: 18 (6%). Total responses = 312 (multiple choice); total respondents = 116.]

Dual Core Embedded Processors Bring Benefits And ChallengesBy John BlylerThe embedded processor market has now fully embraced the

multicore world with the recent introduction of the dual core

option for Intel’s Atom devices. Dual-core embedded proces-

sors offer designers many new benefits while presenting new

challenges. How will the multicore option affect low power

designs, virtualization, and single-threaded legacy software?

Will these devices lead to more connectivity? Is the embedded

processor market looking like the ASSP market of the future?

To answer these questions, Low-Power Engineering talked

with Jonathan Luse, Director of Marketing for the Low-Power
Embedded Products Division of Intel.

LPE: How does dual-core affect power consumption?

Luse: It’s best to think of the Atom as roughly split into two

vectors: performance and power. The performance vector is

a little less power constrained and a little more performance

oriented, but still low power compared to Intel’s Core family

of processors. The other major vector is low power. At the

winter Embedded World Conference in Nürnberg, Germany,

we introduced our entry performance level processors, which

included the dual-core option at about 13 watts thermal
design power (TDP), compared to 5.5 watts for the single-core kit at
1.6 GHz. This was designed to have a little more tolerance for

power, with the expectation that Input/Output (IO) interface

and performance would be increased over time.

To read more, please visit: lpdcommunity.com


Integrated IP Goes Vertical
FOCUS REPORT
By Ed Sperling, Contributing Editor

The consolidation of intellectual property from small

developers to large players with integrated IP blocks is

accelerating. Large IP companies are now developing

integrated suites that are pre-tested for specific vertical

markets, and new companies are sprouting up to make it

easier to put even broader collections of IP together in

meaningful ways.

It’s difficult to tell whether the trend is being driven more

by the IP vendors or pulled through by chip developers

looking to cut costs—or whether it builds upon the stamp

of approval by foundries for certain pieces of IP. The net

effect, however, is the creation of subsystems and partial

platforms that are one step below reference platforms.

“A reference design suggests a complete solution,” said Eric

Schorn, vice president of marketing for ARM’s processor

division. “Customers don’t want us to go that far. But we

are moving in a segment-oriented fashion. That’s the reason

we bought a graphics processor company. We are making a

processor along with a graphics socket for mobile phones

and set-top boxes.”

The company isn’t alone in recognizing the opportunity for

putting together more pieces of IP in very specific ways.

Virage Logic’s recent acquisitions of ARC and NXP’s IP

unit have positioned it to lead with integrated subsystems

in markets such as high-performance audio and video.

“You have to have a reference platform these days,” said

Yankin Tenurhan, vice president and general manager of

Virage’s ARC business unit. “That’s not much different

from the good old days of silicon, though, when you needed

a complete solution and a full blown prototype. Philips,

NXP, Texas Instruments and ST all have demonstrator

chips for whatever you want on a cell phone. The same is

happening in the IP world.”

PUTTING TOGETHER THE PIECES

It’s not just the IP vendors that are putting together suites

of IP. Two startups are focused on making IP easier to

understand and integrate. Parallel Engines, which emerged

from stealth mode this week, is focused on organizing IP by

data mining pertinent information about everything from

power requirements to the interfaces and interconnects.

“There are 12,000 pieces of IP out there, including 8,000

pieces of hard IP that are made by about 50 companies and

about 4,000 pieces of soft IP,” said George Janac, CEO of

Parallel Engines. “The hard IP is already in FPGAs from

companies like Actel, Xilinx and Altera. You just need the

soft IP to make it work.”

Somewhat conveniently, Janac’s brother, Charlie, is the

CEO of Arteris, which makes network on chip technology

that can be used to glue together these IP blocks.

“A company may have one or two pieces of IP that are the

secret sauce and some software,” Charlie Janac said. “Why

not drop those into an FPGA and connect up the other

pieces of IP? Those two worlds are merging. We’re going to

see much more custom logic on an FPGA.”

Another company involved in bringing IP together is Silicon

IP, run by Kurt Wolf (formerly of TSMC), who said there’s

a disconnect between chipmakers and IP vendors that still

needs to be closed. “The chip guys distrust the IP industry,”
Wolf said. “There’s more integration of IP, but there’s still

a lack of confidence about how to choose, buy and license

IP.”

Wolf’s company is focused more on bringing the two sides

together with better information and connecting the pieces

in an organized way.

THE FUTURE

All of these efforts—by both large IP vendors and

startups—are signs of just how important commercial

IP has become in chip development. What began with

embedded processors and standard memory designs has

evolved into a huge market that actually gained momentum

in the recent downturn.


Outsourcing is gaining ground at every level of business,

even outside of the semiconductor world, but in the past

most of the gains have been in areas where there was little

value add. Outsourcing traditionally has been relegated to

commodity services. What’s changing is that IP now includes

areas that companies cannot do themselves in addition to

those they don’t want to do, as well as the extremely tedious

and time-consuming integration work that is necessary to

create a final product.

When most analysts predicted a massive growth in IP at

the beginning of the decade they were largely talking about

small, relatively unsophisticated IP blocks that can

be put together by highly sophisticated companies. In the

future, the differentiation may be less around the technology

and more on getting very complex chips assembled and to

market faster for specific market segments.

Ed Sperling is Contributing Editor for Embedded Intel® Solutions
and the Editor-in-Chief of the “System Level Design” portal.
Ed has received numerous awards for technical journalism.


Slow Adoption for ESL
By Brian Fuller
It’s been more than a decade since electronic system

level (ESL) abstraction started to gain traction in

EDA. It’s been more than a few years since the industry
began to plan for the day when the benefits of embracing
C-language approaches to design description and validation
would find designers churning out massively complex and
profitable designs while sitting in lawn chairs sipping
drinks with little pink umbrellas in them.

What happened?

Well, it’s still with us, but no one’s broken out the lawn

chairs just yet.

To read more, please visit the System-Level Design community
at sldcommunity.com



Prototyping Options for Hardware/Software Development
By Frank Schirrmeister, Synopsys
How to choose the right prototype for pre-silicon software development

With software development playing an increasing role in

determining overall project effort and time-to-market for

consumer, wireless and automotive projects, pre-silicon software

prototyping has become a necessity to ensure that hardware and

software interact correctly with each other. However, choosing the

appropriate prototype for software development and hardware/

software co-design is not a trivial undertaking.

To understand the tradeoffs involved in this process, let’s examine the

concrete project development example shown in Figure 1. The upper

portion of the Gantt chart displays the timeline for the different project

phases, while the bottom portion shows the percentage of overall

project effort expended on all of these phases. Qualification of the IP

actually takes significant effort, as does the actual design management.

Most relevant to the modeling discussion is that verifying the RTL dominates

both the timeline and the actual effort. For this project, it took a total

of four quarters to get to verified RTL, and silicon prototypes were

not available until eight quarters into the project. The overall software

development took five quarters, so if it indeed had to wait for silicon

to be developed, the project would be significantly delayed.

Prototyping for software development can be done at different

stages, with various pros and cons. Although not reflected in Figure

1, previous-generation chips are often used for actual application

software development while the project is under way. Depending

on the number of changes from one chip generation to the next,

software can be developed on the previous device and migrated
as soon as new drivers are available.

change significantly between product generations, this option

becomes less attractive, as the important new capabilities need to be

developed after updated drivers and silicon are available.

DIFFERENT PROTOTYPING OPTIONS AND THEIR AVAILABILITY

Available earliest in a project are virtual prototypes. They represent

fully functional software models of systems on chip (SoCs), boards,

I/Os and user interfaces, they execute unmodified production code,

and they run close to real-time with virtualized external interfaces.

They also offer the highest system visibility and control, including

multi-core debug. While virtual prototypes offer very high speed

(multiple tens of MIPS) when using so-called loosely timed models,

their limited timing accuracy often causes design teams used to

hardware prototypes to be skeptical of their value. The speed of

virtual platforms will degrade to the single-digit MIPS range or even

lower if users choose to mix in more timing-accurate software models.

However, because virtual prototypes are available earliest in the flow

if models are available, they are great for debugging and control, as

simulation provides almost unlimited insight, and they are also easy

to replicate. They offer true hardware/software co-development

capabilities because changes in the hardware can still be made if the

virtual prototype is available early enough in the design flow.
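To make the loosely timed approach concrete, here is a minimal sketch of a virtual-prototype-style memory access, assuming the standard SystemC/TLM-2.0 (IEEE 1666) library; the module names, the 10 ns latency and the address used are illustrative only, not taken from any product mentioned in this article. The target models function plus one approximate delay annotation, with no pin-level activity:

    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    using namespace sc_core;

    // Loosely timed memory: correct function, approximate timing.
    struct LtMemory : sc_module {
      tlm_utils::simple_target_socket<LtMemory> socket;
      unsigned char storage[256];

      SC_CTOR(LtMemory) : socket("socket") {
        std::memset(storage, 0, sizeof storage);
        socket.register_b_transport(this, &LtMemory::b_transport);
      }

      void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned char* data = trans.get_data_ptr();
        unsigned len = trans.get_data_length();
        if (trans.is_read()) std::memcpy(data, &storage[addr], len);
        else                 std::memcpy(&storage[addr], data, len);
        delay += sc_time(10, SC_NS);  // one lump-sum latency annotation
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
      }
    };

    // A stand-in for the processor model, issuing one write.
    struct LtCpu : sc_module {
      tlm_utils::simple_initiator_socket<LtCpu> socket;

      SC_CTOR(LtCpu) : socket("socket") { SC_THREAD(run); }

      void run() {
        unsigned char value[4] = {0xEF, 0xBE, 0xAD, 0xDE};
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x40);
        trans.set_data_ptr(value);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);  // whole transfer in one call
        wait(delay);                        // settle the annotated time
      }
    };

    int sc_main(int, char*[]) {
      LtCpu cpu("cpu");
      LtMemory mem("mem");
      cpu.socket.bind(mem.socket);
      sc_start();
      return 0;
    }

Because each access completes in a single function call rather than cycle-by-cycle signal activity, millions of such transactions per second are feasible on an ordinary host, which is where the tens-of-MIPS figure for loosely timed platforms comes from.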

Variations of virtual platforms are so-called software development

kits (SDKs) – for example the iPhone SDK, which was downloaded

more than 100,000 times in the first couple of days of its availability.

While they offer most of the advantages of the standard virtual

prototypes, their accuracy is often more limited because they may

not represent the actual registers as accurately as virtual prototypes

but instead allow programming toward higher-level application

programming interfaces (APIs) and often require re-compilation

of the code to the actual target processor after users have verified

functionality on the host machine on which the SDK executes.

Available later in the design flow, but still well before silicon, FPGA

prototypes can serve as a vehicle for software development, as well.

They are fully functional hardware representations of SoCs, boards

and I/Os. They implement unmodified ASIC RTL code and run

at almost real-time speed, with all external interfaces and stimulus

connected. They offer higher system visibility and control than the

Figure 1: Concrete hardware/software project development example.

(Source: Joint analysis by Synopsys and International Business Strategies)



Figure 2: Eight model characteristics for choosing prototyping solutions

actual silicon will provide later, but do not quite match the debug and

control capabilities of virtual platforms. Their key advantage is their

ability to run at high speed – multiple MIPS or even 10s of MIPS

– while maintaining RTL accuracy, but depending on the complexity

of the project, they will typically be available much later in the design

flow than virtual prototypes. Due to the complexity and effort of

mapping the RTL to FPGA prototypes, it is not really feasible to use

them before RTL verification has stabilized. Finally, once stable and

available, the cost of replication and delivery for FPGA prototypes is

higher than for software-based virtual platforms.

Emulation provides another hardware-assisted alternative to

enable software development. It differs from FPGA prototypes

in that it enables better automated mapping of RTL into the

hardware together with faster compile times, but the execution

speed will be lower and typically drop to the single-MIPS range

or below. The cost of emulation is also often seen as a deterrent to

replicating it easily for software development. Both emulation and

FPGA prototypes are limited when it comes to true hardware/

software co-development because at this point in the design flow,

the hardware is pretty much fixed, as RTL is almost verified.

Design teams will be very hesitant to change the hardware

architecture unless a major architecture bug has been found.

Finally, after the actual silicon is available, early prototype boards using

first silicon samples can enable software development on the actual

silicon. Once the chip is in production, very low-cost development

boards can be made available. At this point, the prototype will run at

real-time speed and full accuracy. Software debug is typically achieved

with specific hardware connectors using the JTAG interface and

connections to standard software debuggers. While prototype boards

using the actual silicon are probably the lowest-cost option, they are

available very late in the design flow and allow almost no head start

on software development. In addition, the control and debug insight

into hardware prototypes is very limited unless specific on-chip

instrumentation (OCI) capabilities are made available. In comparison

to virtual prototypes, they are also more difficult to replicate – it is

much easier to provide a virtual platform for download via the

Internet than to ship a board and deal with customs, bring-up and

potential damages to the physical hardware.

WHICH PROTOTYPE SHOULD I CHOOSE?

So, how do users choose the appropriate prototype for early software

development? Several characteristics determine the applicability of

the chosen prototyping approach and the models it is built from.

Summarized in Figure 2, they fall into the following eight categories:

• Time of Availability: The later models become available in the

design flow compared to real silicon, the less their perceived

value to hardware/software developers will be.

• Execution Speed: Developers normally ask for the fastest

models available. Execution speed almost always is achieved by

omitting model detail, so it often has to be traded off against

accuracy.

• Accuracy: Developers normally ask for the most accurate models

available. The type of software being developed determines how

accurate the development method must be to represent the

actual target hardware, ensuring that issues are identified at

the hardware/software boundary. However, increased accuracy

requires simulating more detail, which typically means lower

execution speed.

• Production Cost: The production cost determines how easily a

model can be replicated for furnishing to software developers. In

general, software models are very cost-effective to produce and

can be distributed as soon as they are developed. Hardware-

based representations, like FPGA prototypes, require hardware

availability for each developer, often preventing proliferation to a

large number of software developers.

• Bring-up Cost: Any activity required to enable a model
beyond what is absolutely necessary to get to silicon can be

considered overhead. The bring-up cost for virtual prototypes

and FPGA prototypes is often seen as a barrier to their use.

• Debug Insight: The ability to analyze the inside of a design,

i.e., being able to access signals, registers and the state of the

hardware/software design, is considered crucial. Software

simulations expose all available internals and provide the best

debug insight.

• Execution Control: During debug, it is important to stop the

representation of the target hardware using assertions in the

hardware or breakpoints in the software. In the actual target

hardware, this is very difficult – sometimes impossible – to

achieve. Software simulations allow the most flexible execution

control.



Figure 3: Combination of different prototyping options

• System Interfaces: It is often important to be able to connect

the design under development to real-world interfaces. While

FPGA prototypes often execute fast enough to connect directly,

development using virtualized interfaces of new standards, e.g.,

USB 3.0, can be done even before hardware is available.

The choice of prototype is often simplified to the trade-off between

speed and accuracy. In this context, the type of software to be

developed directly determines the requirements regarding how

accurately hardware needs to be executed.

• Application software can often be developed without taking the

actual target hardware accuracy into account. This is the main

premise of SDKs, which allow programming against high-level

APIs representing the hardware.

• For middleware and drivers, some representation of timing may

be required. For basic cases of performance analysis, timing

annotation to caches and memory management units may be

sufficient, as they are often more important than static timing of

instructions when it comes to performance.

• For real-time software, high-level cycle timing of instructions can

be important in combination with micro-architectural effects.

• For time-critical software – for example, the exact response

behavior of interrupt service routines (ISRs) – fully cycle-

accurate representations are preferred.

Given the above considerations, it comes as no surprise that none of

the prototyping techniques fits all applications. For users who need

to balance time of availability, speed and accuracy of prototypes,

combining different prototyping techniques offers a viable solution.

Figure 3 compares six different prototyping techniques and some of

their combinations using the example of an ARM-based platform

executing Linux connected to a USB 2.0 interface. Depending on the

use case, different combinations of TLM and signal-level execution

may be preferable. For example:

• For verification, the combination of transaction-level models

(TLM) with signal-level RTL offers quite an attractive speed-

up, and users have started to adopt this combination of mixed-

level simulation for increased verification efficiency. This use

model is effective even when RTL is not fully verified yet and

FPGA prototypes are not yet feasible.

• For software development, system prototypes, i.e. the

combination of virtual prototype using TLMs and FPGA

prototypes at the signal-level using standard interfaces like SCE-

MI, have become an attractive alternative for providing balanced

time of availability, speed, accuracy and debug insight for both

the hardware and software. This use case is most feasible once

RTL is mostly verified and the investment of mapping it into

FPGA prototypes is worth the return in higher speed.

Today, many companies already view prototyping as mandatory

to ensuring functional correctness of their designs and enabling

early software development. As this article has illustrated,

however, there is no “one size fits all” prototyping solution –

developers must select the approach that best meets their specific

project requirements. One thing is certain: with the trend toward

software continuing to escalate, implementing prototyping and

combinations of different prototyping techniques will gain even

greater importance for future design projects.

As director of product marketing at Synopsys, Inc.,

Frank Schirrmeister is responsible for the System-

Level Solutions products Innovator, DesignWare®

System-Level Library and System Studio, with

a focus on virtual platforms for early software

development. Prior to joining Synopsys, Frank

held senior management positions at Imperas, ChipVision, Cadence,

AXYS Design Automation and SICAN Microelectronics.



Making Abstraction Practical
A Vision for a TLM-to-RTL Flow
By Lauro Rizzatti, EVE-USA

The ballooning size and complexity of system-on-chip

(SoC) designs has become an urgent driver of higher

levels of design abstraction. Just as it’s been a long time

since you could design electronic circuits one transistor at a

time, it’s now impossible to create an SoC one gate at a time.

Not only would your design time push you way beyond the

useful market window, but coordinating the roles of the

many members of your design team, from architect to tester,

would be impossible.

Transaction-level modeling (TLM) has provided a means

for starting designs at a more abstract level. But the path

from TLM down to physical implementation is far from

smooth. There are too many holes and inconsistencies in

the flow; each company has to invent something itself

to get a useful result. The widespread use of third-party

intellectual property (IP) and the need to incorporate

software add entirely new dimensions to the problem. What

you need is a more unified approach to turning high-level

abstract design concepts into real chips.

MOVING UP A LEVEL

It’s hard to use the word “unified” when describing SoC

flows. Depending on the process node and performance

requirements, you have innumerable options involving

speed, power, and manufacturing yield optimization.

However, for a design described in RTL, it’s still relatively

straightforward to push the design through a flow and have

polygons come out the other end. That flow may make use of

involved scripts put together by clever CAD managers, but

variations within the digital domain are generally related to

optimization rather than actual behavior.

So even with these variations, the RTL-to-silicon flow is far

more predictable than what’s required to transform a more

abstract description into RTL. You simply can’t assemble an

SoC using a single behavioral description in one language.

• Architects need to be able to experiment with broad

ranges of functionality without having to specify gate-

level behavior. They should be able to make first-level

performance, power, and area tradeoffs.

• Verification must be achievable at an early stage without

gate- and cycle-level simulation.

• Architects and designers don’t specify in detail the

functionality of the entire SoC. Some blocks will be

designated for detailed custom design, but many will be

imported as IP, with their internal workings opaque.

• Software, executing in one or more processors on the

SoC, provides ever greater amounts of functionality.

Writing that software can take as long, or longer, than

designing the silicon platform on which it will run.

• Early validation of architecture and software often

requires emulation hardware, much of which uses

technology like FPGAs, which is far different from

what the end silicon will look like. You must therefore

be able to express the design at a level where you can

target it at the emulator and at silicon without requiring

significant rework.

The result, even before taking into account any analog

circuitry you need to have on-chip, is a heterogeneous

amalgam of bits and pieces that you have to bring together

into a design. In the early stages of the planning, you may

designate some of the blocks as IP, with their functionality

either partially or completely known; you’ll mark others

for custom creation, and you won’t know their specific

behavior until someone actually does the design. You will

have some functionality written in RTL (even if bare-bones),
some in SystemC, and some in C/C++ or some

other software language.

This means that architects need to be able to pull the pieces

together at a “rough” level, making sure that everything plays

nicely together, or specifying the rules so that everything

will play nicely, and then dispatching the pieces for

implementation and integration. TLM provides a way for

architects to manage such high-level planning, but if the

designers that will implement custom blocks essentially

end up throwing away what the architects did and starting

their designs based on a paper spec, not only is work being

redone, but errors can be newly introduced. A flow that

connects the TLM work to the RTL work will reduce both

design time and the number of validation iterations.



DIFFERENT LEVELS OF TLM

It’s inaccurate simply to talk about TLM as if it were a single

level of abstraction above RTL. Abstraction comes at the

cost of accuracy, and, depending on the task, you may need

to select different levels of abstraction in order to achieve

sufficient accuracy depending on what you’re trying to do.

The key to achieving this is the fact that TLM really deals

with interfaces: different blocks are plugged together, with

their behaviors abstracted and their interfaces interacting.

Accuracy boils down to the fidelity with which the interface

will model the finished block’s behavior. Greater accuracy

comes at the cost of longer verification times, and different

development phases will require different tradeoffs between

accuracy and verification time, as illustrated in Figure 1.

• Software engineers need the least fidelity; all they need

is for the block to function correctly. Timing is, more

or less, not an issue. This allows for highly abstracted

functional models, also known as virtual prototypes,

that can execute in the range of 10 – 100 million

instructions per second.

• Architects need a higher level of accuracy so that they

can confirm, for example, that bus-level transactions

occur properly. Here the level of “handshake” may be

sufficient; the actual number of clock cycles occurring isn’t

important. Such cycle-approximate models can execute

in the range of a million instructions per second.

• For more detailed verification of RTL blocks, designers

need to verify the cycle-accurate behavior of the

interfaces. This further slows the models down to around

100,000 cycles per second (which is more than an order

of magnitude slower than the “handshake” level since

we’ve gone from instructions-per-second to cycles-per-

second, and an instruction takes more than one cycle).

This means that simply having a single TLM model for a

block or piece of IP isn’t sufficient. Different blocks may

have different accuracy levels; you can’t just plug them

together and expect them to work. In fact, designers of

IP and custom blocks may need to develop multiple

interfaces to address the needs of different steps in the

design process. Exactly what those expectations should be

has not been standardized.
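As a purely illustrative sketch of what shipping multiple fidelities can mean in practice, the plain C++ fragment below (hypothetical names throughout, not from any standard or vendor) exposes one operation at two accuracy levels. The cycle-accurate path must re-evaluate interface state on every cycle, which is exactly where its order-of-magnitude speed penalty comes from:

    #include <cstdio>

    // Hypothetical: one IP block offering two models behind one call.
    enum class Fidelity { LooselyTimed, CycleAccurate };

    struct DmaModel {
      Fidelity level;
      unsigned long host_work = 0;  // proxy for simulation effort

      explicit DmaModel(Fidelity f) : level(f) {}

      void transfer(unsigned beats) {
        if (level == Fidelity::LooselyTimed) {
          host_work += 1;            // result plus one lump latency estimate
        } else {
          for (unsigned c = 0; c < beats; ++c) {
            step_interface_fsm();    // arbitration, handshakes, stalls...
            ++host_work;             // ...re-evaluated on every cycle
          }
        }
      }

      void step_interface_fsm() { /* cycle-by-cycle protocol detail */ }
    };

    int main() {
      DmaModel fast(Fidelity::LooselyTimed), exact(Fidelity::CycleAccurate);
      for (int i = 0; i < 1000; ++i) { fast.transfer(64); exact.transfer(64); }
      std::printf("LT work: %lu, CA work: %lu\n", fast.host_work, exact.host_work);
      return 0;
    }

Running this toy prints 1,000 units of work for the loosely timed model against 64,000 for the cycle-accurate one: same transfers, vastly different host effort.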

The cost of developing these different levels of TLM model

varies widely. Virtual prototypes are easiest because they’re

written in software, and therefore can take advantage of all

the tools available in the software world for verification and

debug. They also require that only the salient functionality

of the model be implemented. This means that, typically,

you can get virtual prototypes of common IP functions

from companies that don’t sell the actual implementation

IP. These companies focus their value on structuring the

models so that they’ll execute quickly and efficiently.

Cycle-approximate and cycle-accurate models require

much more work to build, and are much more closely tied

to the specific IP they model. Therefore, when purchasing

IP from a given vendor, you will typically get these models

from them, since only they know, and can model, the inner

workings of their secret circuits. Developing these models

can represent as much as 30% of the effort required to

create the RTL code itself.

DEFINING A FLOW

Having identified not one, but at least three different

kinds of TLM model that, in one form or another, define

functionality that will end up in silicon (or software on

silicon), the next obvious question is how to craft a flow

that, in the ideal, allows you to synthesize from the abstract.

It’s likely that such synthesis would be inefficient in its

early days, but that was also the case with RTL when logic

synthesis was new; eventually the tools improved to

the point where only in rare cases would you countenance

doing a digital design at a level below RTL.

Before we can have a discussion of TLM synthesis, however,

we have to decide which of the tasks currently done at the

RTL level can be pushed up to a more abstract level. If we’re

eventually going to bury RTL in a flow in the way that EDIF

is buried today, then key manual steps currently done with

RTL have to be available at the TLM level along with the

new capabilities that TLM enables.
Figure 1. Different model accuracy requirements



The most important RTL task to push up is that of

optimization – especially for performance and power. It’s

well known that you get the biggest speed and power gains

if you optimize at the architecture level, where you can have

an impact of anywhere from fifty to hundreds of percent.

Circuit-level optimizations are good, but typically give you

a few tens of percent at best, and more and more of the

circuit-level tricks can be automated.

This means you need optimization tools that work at the

TLM level, along with models that provide estimates of

power and performance accurately enough to make the right

architectural tradeoffs.

IP evaluation is another task that you will have to manage

at the TLM level. While power and performance are a part

of that evaluation, you must also be able to confirm that

the IP you’re considering will play nicely with the rest of

the system, with as little wrapping as possible. You also

need to know that you can implement all the features you

need, and that you have to implement very few, if any, of

the features you don’t need.

Software evaluation is a newer task for the architect. Part

of the job may be actually checking out a specific piece of

software, but, to a large extent, the big problem is ensuring

that typical software can execute efficiently – you’re not

testing the software, you’re testing the system as it runs

software.

Having accomplished these tasks at the abstract level,

implementation can begin. There are really three different

elements to implementation:

• Creation of new blocks

• Assembling the blocks (newly created and IP)

• Creation of software

Functional verification then means

• Testing the architecture using cycle-approximate

models

• Checking out the new blocks using cycle-accurate

models and simulation

• Testing the assembly of the entire system. This

would actually happen in stages, starting with cycle-

approximate models and transitioning to cycle-

accurate where needed. Since full simulation of the

entire system is likely to take too long, hardware

emulation becomes a final way to test the integrated

logic of the entire design. Actual circuit simulation

might be used for very specific corner cases. Emulation

and simulation can also be run together to balance

speed and accuracy, either for hard-to-reach corner

cases or for early testing, where some models aren’t

available in abstract versions and so must be emulated

to keep performance up.

• Testing software using virtual prototypes, which tests

the software algorithms.

• Testing software on a hardware emulator, which tests

the system’s ability to execute the software.

This only makes sense if the work done early at the abstract

level can be used later to confirm the implementation

work. Today that would mean using the abstract models

to confirm the behavior of hand-generated blocks. If

TLM-level synthesis were available, then tools could be

used to confirm the correctness of that synthesis, much

the way equivalence checking was used to validate early

logic synthesis tools.

In order for this to work, however, you must have robust debug

capabilities that span the range of abstraction. If a signal in

an RTL block is misbehaving, that problem must ultimately

be correlatable to some higher-level model behavior. This

isn’t so hard when going from the specific to the abstract,

but going the other direction is harder: trying to correlate

a high-level model failure with a specific implementation

issue is tough because the high-level model has such specifics

abstracted out – by intent.

Debugging must also span different verification

methodologies. Where you are doing hardware emulation

and simulation together, for example, a debug methodology

has to recognize events and design elements on both the

simulated and emulated sides.

All of these requirements are hard enough to achieve

today in the purely digital domain. But SoCs increasingly

include significant analog functionality as well, and the

days of analog and digital ignoring each other are fast

disappearing. All of the flow elements involving modeling,

architecture, implementation, validation, and debug apply

equally – and raise even greater challenges – for analog.

Figure 2 shows what a complete TLM-to-RTL flow might

look like.



The phases and steps can be described as follows:

• Architecture phase

• Create a system model, including a stimulus

environment, drawing from an IP library as much as

possible.

• Create new TLM models where needed.

• Simulate to validate the architecture and functionality

(including software) at the TLM level.

• Create a virtual prototype to give to the software

development team.

• Hardware design phase

• Automatically map IP blocks to RTL and generate the RTL

interconnect.

• Create RTL blocks either by synthesizing from TLM

models or by hand, with equivalence checking to ensure

that the generated RTL matches the TLM models.

• Perform full-chip RTL simulation for select corner

cases.

• Software design phase

• Create software, validating with the virtual prototype.

• Integration phase

• Perform complete hardware/software integration

validation using simulation and emulation.

• Connect to the physical design flow for implementation.

REQUIREMENTS FOR UNIFYING FLOWS

If such a flow is going to be possible without each company

defining its own proprietary version, some common elements

need to come together in the form of standards and ecosystem

offerings.

• IP modeling needs to be made consistent, with new

characteristics that will make evaluation easier. These

include interface standards, performance estimates of

key transactions, power estimation, and area estimation.

• Formalization of different TLM levels will help ensure

that users of IP can obtain appropriate models, and that

they will know what to expect when getting them.

• Tools need to be able to manage verification across

different levels of abstraction, allowing the assembly of

abstracted or partially-complete blocks with RTL-level

blocks, providing optimal speed and accuracy points.

• It’s currently possible to mix simulation and emulation, but

continued work is required to improve that integration as

well as to enable virtual prototype execution as part of a

coordinated environment.

• A unified debug environment is needed to ensure that

problems can be easily identified regardless of the level

of abstraction or the mode of verification, whether static

or dynamic.

These elements cross the property lines of a few different standards

and standards bodies. OSCI has owned the TLM specifications;

the SPIRIT Consortium, now a part of Accellera, has focused

on IP metadata; emulation interaction has been the domain

of the SCE-MI standard, also owned by Accellera. There is no

organization specifically focusing on the needs of debug.

Coordination and cooperation between the different standards

groups and sub-groups as well as between the companies

participating in the standards will be needed to provide all the

links to make this work. While some companies resist standards

out of fear of losing a competitive advantage, there are plenty

of opportunities to compete even in the face of a unified flow.

Each step of the flow will be challenged to provide the highest

performance, the greatest productivity, and the appropriate cost.

The industry as a whole will benefit by focusing innovation on

those areas, and, as the industry moves forward, participating

companies will have greater opportunities to reap the rewards.

At the same time, users must embrace the technology and

validate flows. It’s insufficient for tools providers simply to

support designs input in higher-level languages and synthesize

RTL. All of the pieces described above must be woven together

into methodologies that gain real traction with real users. Only

then can the TLM-to-RTL flow be considered reality.

Lauro Rizzatti is general manager of

EVE-USA. He has more than 30 years

of experience in EDA and ATE, where he

held responsibilities in top management,

product marketing, technical marketing

and engineering.

Figure 2. TLM-to-RTL Flow



Avoid That Embarrassing Call to the Firmware Vendor
(and Other Tricks of Low-Power Verification)
By Marc Bryan and Barry Pangrle, Mentor Graphics
Simulation-based hardware/software co-verification lets you address power problems early and well.

Just how important is hardware/software co-verification

in low-power ASIC and SoC design and engineering?

You might ask the large semiconductor company* that a few

years back designed a device per the specs of a significant

customer, which assembled and sold smartphones. The

specs – that a varied combination of functions could

execute concurrently without exceeding a certain power

budget, measured in milliwatts – were fairly typical for the

low-power realm.

When the customer received the silicon, however, its

engineers struggled to get the device to behave as advertised.

Despite their best efforts to string together functions that

should have been well within the device’s limits, their

applications kept exceeding the power budget.

It should be said that the semiconductor company deserves

heaps of kudos for sticking around to help. After realizing

it didn’t have tools or expertise to do extensive hardware/

software co-verification, the company wound up hiring an

entire firmware team. After much effort, expense, and, most

significantly, delay to the customer’s product, these contract

coders put together software infrastructure that bridged

the ASIC’s underlying power-management features to the

customer’s skills and objectives for the new product.

The story is not unusual. Given the increasing complexity of

ASICs and SoCs, it’s no longer enough for semiconductor

companies to focus on silicon and deliver meager amounts

of diagnostic software as an afterthought. This is especially

true where power-management looms large, as it does in

just about any device with batteries, a market segment that

seems poised for a major rebound. The worldwide mobile

phone market grew 11.3 percent in the fourth quarter of

2009, according to IDC. And the research firm estimates

that the market for voice/data mobile devices (that generally

are power hogs compared to their voice-only counterparts)

grew by nearly 30 percent year over year.

For designers of such devices, a whole series of new questions

abound. Does the system correctly power up and change

power states? Does it meet performance requirements while

powering up/down its components? Does it meet power

budgets and battery life requirements?

Answering those questions with any degree of certainty

invariably hinges on verifying those areas of the design

where software and hardware interact the most.

Often this is a confounding task that confronts designers

with a long list of seemingly contradictory requirements.

And though no one technique is right for every design

situation, we think that a good starting point is to first

model power-management functionality at RTL and then

verify the hardware and software together in an optimized

environment.

Here’s why, and how.

ANNOTATE AN EXISTING DESIGN WITH UPF

Until recently, an engineer wanting to really drill down and

look for power-related bugs in an ASIC or SoC design faced

a series of unattractive hardware simulation choices. Gate-

level verification was highly detailed and impossibly slow.

Though marginally faster, the various ways of simulating

at RTL were complicated by the need to insert additional

power management information, which required intrusive

RTL code changes.

Figure 1: Software is relevant to most power-management functions in

low-power ASIC/SoC designs (such as switching states), which is why the

premium on hardware/software co-verification is on the rise.


What about just focusing on software simulation to verify

applications most tightly wedded to the silicon? Though

fast, this approach too often lacked most or all of the detail

needed for debugging hardware/software interactions,

where some of the thorniest low-power issues arise.

The arrival of the Unified Power Format (UPF) changed

things for the better. A TCL-based format for specifying low-

power intent throughout the design and verification flow,

UPF was designed to allow for reuse and interoperability

between different tools. For those who care about such

things, UPF 2.0 also became an industry standard with the

adoption of IEEE Std 1801™-2009 in March 2009. (Full

disclosure: Mentor Graphics chaired the IEEE standards

activity.)

Reuse of certain functional blocks or even entire designs is

among the holy grails of ASIC/SoC design, which is ever

more costly. UPF enables such reuse, providing a relatively

straightforward means to annotate old designs with new

power management features. Engineers can supplement

existing designs with power-aware features by specifying

these in a separate UPF file. Or they can experiment with

different power control schemes by simply changing this

separate file while leaving the essential design description

alone. The alternative, continuing to tweak the RTL of

every design that requires power-management features, is

tedious and error-prone.

UPF allows for more than just defining power domains,

switches and other elements of the power architecture. It can

be used to create power strategy via power state tables; to set

up and map low power design elements such as retention,

isolation and level shifters; and to match simulation and

implementation semantics.

UPF-FRIENDLY SIMULATOR SPEEDS VERIFICATION

Unlocking the value of the UPF file requires a verification

platform built to work with the standard. Mentor Graphics

Questa is one, though there are others available from

several of the larger EDA vendors. The workflow, in short,

is to first go through and verify that the RTL actually

performs correctly, and then to toggle some settings on the

simulator and run it a second time to check the power-aware

functionality described in the UPF file. No recompilation

of the RTL is required, a boon given ever increasing gate

counts.

For all the benefits of UPF, in the end the standard is mostly

aimed at hardware verification. Advanced verification of

an ASIC or SoC loaded with power-aware features means

taking a hard look at software, or more specifically, verifying

the hardware and software together.

One way to do this is to execute the software on top of an

HDL-simulated CPU. Despite all the theoretical advantages

of tying code directly to the hardware description instead of

a higher level model, this approach can be both painfully

slow and relatively opaque. As deadlines loom and managers

turn up the pressure, too often all the engineer can say with

any authority is that something is not quite right between

the software and the underlying HDL.

Tools that speed up the simulated CPU can help matters.

Mentor’s solution, for example, replaces the HDL-based

CPU simulation with a model that’s tied in with the rest

of the logic simulation – and that operates at dramatically

higher speeds, a benefit that flows from a host of features in

the tool, including optimized memory access.

A quick primer on optimized memory access: During

verification, it’s important to first confirm that fetching

instructions from memory does in fact work. But once this

is verified, huge efficiencies are gained by abstracting it

away. In general, the more a tool avoids spending time at a

pin-wiggle-level of detail continuously checking something

that you already know works, the better.
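A minimal sketch of the idea, in plain C++ with hypothetical names (no vendor API implied): once instruction fetch has been proven at the pin level, the same model can be switched to a direct "backdoor" path that skips the simulated bus entirely:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    class MemoryModel {
    public:
      explicit MemoryModel(std::size_t bytes) : mem_(bytes, 0) {}

      // Flip once fetch logic is verified; mirrors dynamically selecting
      // which memory accesses run in the logic simulator.
      void enable_backdoor(bool on) { backdoor_ = on; }

      std::uint8_t read(std::uint64_t addr) {
        if (backdoor_)
          return mem_[addr];        // host-memory access: no simulated cycles
        return read_via_bus(addr);  // full pin-level transaction
      }

    private:
      std::uint8_t read_via_bus(std::uint64_t addr) {
        // A real co-simulation would drive address/data/strobe pins here
        // and consume simulated clock cycles; stubbed for the sketch.
        ++bus_cycles_;
        return mem_[addr];
      }

      std::vector<std::uint8_t> mem_;
      bool backdoor_ = false;
      unsigned long bus_cycles_ = 0;
    };

    int main() {
      MemoryModel mem(1024);
      mem.read(0x10);                 // detailed: proves the fetch path once
      mem.enable_backdoor(true);
      for (int i = 0; i < 1000000; ++i) mem.read(0x10);  // fast thereafter
      std::printf("done\n");
      return 0;
    }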

A high-speed processor model allows design teams to run,

for example, tightly embedded RTOSs, whose importance is

rising in lock-step with the increasing need for fine-grained

management of underlying hardware. Combined with a

solid software debugger, running an RTOS can be useful in

a host of verification and debugging scenarios.

For example, imagine an engineer working on software that

controls power states. He wants to boot it up, get to a simple

prompt, and then use the software debugger to observe the

state change from turbo mode to sleep mode. The engineer

enters a command at the prompt, which runs the software,

and then sits back to watch all the changes going on while



that software is running. One hallmark of a good tool/

verification environment is providing an engineer with pin

level visibility to both the hardware and software, or more

precisely, with an ability to closely observe when the power

control module writes out to one of the power islands and

changes its power state. Another is allowing the user to be

able to dynamically select which memory accesses run in the

logic simulator.
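For flavor, the code under the debugger in this scenario might look like the fragment below. The register name, address encoding and mode values are all made up for illustration; a real design's power intent defines them. On target hardware PWR_CTRL would be a volatile pointer to a memory-mapped register, but a plain variable keeps the sketch host-runnable:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical stand-in for a memory-mapped power-control register.
    static std::uint32_t PWR_CTRL = 0x3;        // assume 0x3 encodes "turbo"

    constexpr std::uint32_t MODE_MASK  = 0x3;
    constexpr std::uint32_t MODE_SLEEP = 0x0;

    void enter_sleep() {
      // In co-verification, this store is the moment the power control
      // module writes out to a power island: the logic simulator shows the
      // resulting pin activity while the software debugger shows this line.
      PWR_CTRL = (PWR_CTRL & ~MODE_MASK) | MODE_SLEEP;
    }

    int main() {
      std::printf("before: 0x%x\n", (unsigned)PWR_CTRL);
      enter_sleep();
      std::printf("after:  0x%x\n", (unsigned)PWR_CTRL);
      return 0;
    }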

The speedup can be dramatic. Last fall at ARM techcon3

we presented a case where a high-speed simulator increased

the speed of embedded software execution by a factor of

10,000.

To be sure, there are alternatives to simulation-based

hardware/software co-verification. Emulation is one, a

method which can provide closer-to-final-product speeds

but often fails to provide sufficient visibility. Other emulation

headaches include increased setup time (emulation is post-

synthesis) and complexity surrounding place and route.

The real selling point of simulation is that design teams can

start doing power-related hardware/software co-verification

before their designs are done. Of course, it’s always possible

to wait and throw more people at a design or verification

problem. But in IC design, as is true throughout engineering

and most other fields as well, the earlier you can address

problems, the better.

In other words, it’s best to avoid making the call to those

crack firmware coders if you can.

* Apologies for the anonymity. But everyone knows that despite its sprawling

size (0.5% of worldwide GDP, according to Wikipedia) the semiconductor

industry is more like a small village that prizes discretion than a mega-city

that celebrates the broadcasting of every foible and failing.

Marc Bryan has been both a leading and

contributing member of tool development teams

for more than 24 years. Currently serving as

the Product Marketing Manager for Mentor

Graphics’ Codelink products, Bryan came to

Mentor after five and a half years with ARM's

tool division. A prior hands-on role at Korg R&D

provided extensive embedded processor-based, system-level design

and implementation experience.

Barry Pangrle is a Solutions Architect for Low

Power in the Engineered Solutions Group at

Mentor Graphics Corporation. He has been a

faculty member at UC Santa Barbara and Penn

State University where he taught and performed

research in high-level design automation. He has

published over 25 reviewed works in high level

design automation and low power design.

Why Software Matters
By Ed Sperling
Software and hardware may not mix easily, and engineers on

each side of the wall may not talk the same language, but these

days no one has the luxury of ignoring one side or the other.

That message came through loud and clear at a panel discussion
sponsored by the EDA Consortium yesterday evening,

which included top engineers at Wind River, Green Hills and

MontaVista. Among the key facts in the discussion:

1. The majority of engineers working on an SoC are software engineers, who represent the biggest portion of the non-recurring engineering expenses.

2. A couple decades ago a typical chip had thousands of lines of embedded code. Now there are millions of lines of code, and no one person understands all of it. The

result is more complexity and a higher risk of failure—particularly when it’s not well tested with the hardware.

3. All of the major embedded software companies

except one have been bought by large semiconductor companies, which increasingly are required to include software stacks with their chips to create complete platforms for applications.

Driving these changes are some fundamental shifts in the

hardware. Jack Greenbaum, director of engineering at Green

Hills, said the shift from 8-bit bare-metal software to 32-bit

microcontrollers has opened up a huge opportunity for more

complex software. In addition, the shift from 32- to 64-bit has

allowed small devices such as microcontrollers to now start

using full-featured operating systems such as Linux because

memory is so cheap.

To read more, please visit the System-Level Design community
at: sldcommunity.com



Low-Power Tradeoffs Start at the System Level
By Steve Svoboda, Cadence Design Systems
To succeed at the end, your design methodology must weigh
all design constraints from the very beginning.

Until the early 1990s, increases in the performance of

electronics came largely without major increases in

power consumption. Starting in the late 1990s, however,

this began to change. Today, advanced central processing

units (CPUs) consume over 100 W, almost 50X the

consumption of early processors, such as the Intel i186.

Yet during this lengthy span of time, battery capacity hasn’t

been able to keep pace. Compared to the early 1980s (on a

weight-adjusted basis), for example, today’s batteries hold

only 3X to 4X as much charge. This issue poses a clear

challenge for the development of energy-efficient designs

going forward.

The primary reason for making power tradeoffs at the

system level is that the decisions made at that stage have

the greatest impact. At the system-architecture level, on

the other hand, architects select which algorithms to use,

decide which functions will be performed in software

running on processors versus custom hardware, and what

power modes the system will have. Decisions made at this

stage can easily affect system power consumption by a

factor of 10X or more.

At the next stage—the micro-architecture level—

hardware designers decide how to minimize hardware

power consumption under certain throughput and

latency constraints (i.e., what hardware operations will be

performed serially versus in parallel, how many pipeline

stages, etc.). Decisions at this stage can affect power

consumption by a factor of 2X to 10X. Beyond these two

stages, design decisions have progressively less impact on

power consumption. They also become increasingly difficult

and costly to reverse (see the Table).

LOW POWER AT THE SYSTEM LEVEL

To be effective at optimizing systems for low power, a low-

power design flow demands four capabilities:

1. Requirements capture at the specification/system

level: The designer first needs a way to capture chip-

level area, timing, power requirements, and power

budgets at the specification level. He or she must then

allocate them to different blocks within the design.

2. TLM-to-gates synthesis engine: The designer can enter

the IP portion of the design into the chip planning

tool/process. But how does he or she optimize the

blocks that were created from scratch? One way is

to create them manually at the register transfer level

(RTL) and then synthesize and estimate the power,

iterate/refine, etc.

However, a better way is to use a high-level-synthesis (HLS)

engine that enables one to do the following: create an IP

block at a much higher level of abstraction, automatically

generate and analyze different micro-architectures, and

select the one that best fits the application. The HLS

engine must be tightly integrated with the implementation

flow so that the area, timing, and power estimates that it

uses are sufficiently accurate.
Table: Here, system-on-a-chip design levels are broken down according

to the impact of various decisions on overall power consumption.



Once the designer has the optimized RTL micro-

architecture, he or she can make additional improvements

in power consumption using gate/implementation-level

power-saving techniques. Examples include clock gating,

multiple voltage domains with level shifters, dynamic

voltage-frequency scaling, etc.

3. Power analysis engine: A critical third element, an

analysis engine, is required to simulate the design under

“real-world” conditions. This requires a methodology

for emulating the system-on-a-chip (SoC) hardware

performance at high speed together with a mechanism

to track the toggling of the gates in the design. With

a carefully calibrated way to use this toggle data, it must also be possible to estimate accurately how much power the gates will actually consume.

4. Power-aware verification methodology: Power-saving

techniques like those mentioned above (clock gating,

multiple voltage domains, dynamic voltage-frequency

scaling, etc.) significantly affect the functional behavior

of the SoC. They also can add major complexity to the

verification process, unless one has the proper tools

and methodology to deal with these issues.

Tying these four elements together lets designers deal

with the challenges of lower-power SoC design in a timely,

cost-effective way with higher productivity.
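The toggle-based estimation described in the third capability rests on a standard first-order model: each output transition dissipates roughly E = 1/2 * C * V^2, where C is the effective capacitance switched by the gate. As a hedged illustration only (the names and calibration data below are hypothetical, not any vendor's API), a C++ sketch of the conversion from emulation toggle counts to average power might look like this:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical per-gate record: effective switched capacitance (from a
    // calibrated cell library) and the toggle count reported by emulation.
    struct GateActivity {
        double   c_eff_farads; // effective capacitance switched per toggle
        uint64_t toggles;      // output transitions seen during the run
    };

    // First-order dynamic power: sum over gates of 0.5 * C * V^2 per toggle,
    // divided by the length of the emulation window. Real engines add
    // leakage and short-circuit terms, but the skeleton is the same.
    double dynamic_power_watts(const std::vector<GateActivity>& gates,
                               double vdd_volts, double window_seconds) {
        double energy = 0.0;
        for (const auto& g : gates)
            energy += 0.5 * g.c_eff_farads * vdd_volts * vdd_volts
                      * static_cast<double>(g.toggles);
        return energy / window_seconds;
    }

    int main() {
        // Toy data: three gates observed over a 1-ms window at 1.0 V.
        std::vector<GateActivity> gates = {
            {2e-15, 500000}, {5e-15, 120000}, {1e-15, 900000}};
        std::cout << dynamic_power_watts(gates, 1.0, 1e-3) << " W\n";
    }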

A POWER-AWARE WORKFLOW

Companies have begun working to address the requirements

of low-power SoC design at the system level. For

requirements capture, for example, a product called InCyte

Chip Planning promises to become the “cockpit” where the

high-level decisions and constraints on individual blocks

are made. As the different design teams implement blocks

within the system and determine whether they’ve met

unit-level specifications, this product allows them to feed

that information back to chip planning. This collaborative

process enables system-level refinements to be made. If one

block team beats its specifications, “slack” may be freed up for another block team that is struggling to meet its own.
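The article does not detail InCyte's interfaces, so the following C++ sketch is purely illustrative of the bookkeeping behind that feedback loop (all names hypothetical): allocations flow down to blocks, reported actuals flow back, and the surplus from one team becomes slack that can be granted to another.

    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical chip-planning ledger: blocks get power allocations (mW);
    // implementation teams report measured results back as they close.
    class PowerBudget {
        std::map<std::string, double> allocated_, reported_;
    public:
        void allocate(const std::string& block, double mw) { allocated_[block] = mw; }
        void report(const std::string& block, double mw)   { reported_[block]  = mw; }

        // Slack freed by every team that beat its specification.
        double slack() const {
            double s = 0.0;
            for (const auto& kv : allocated_) {
                auto it = reported_.find(kv.first);
                if (it != reported_.end() && it->second < kv.second)
                    s += kv.second - it->second;
            }
            return s;
        }

        // Grant freed slack to a team struggling to meet its own budget.
        void grant(const std::string& block, double mw) { allocated_[block] += mw; }
    };

    int main() {
        PowerBudget chip;
        chip.allocate("cpu", 300.0);
        chip.allocate("dsp", 200.0);
        chip.report("cpu", 240.0);        // CPU team beats spec by 60 mW
        chip.grant("dsp", chip.slack());  // DSP team absorbs the 60-mW slack
        std::cout << "dsp budget raised by " << chip.slack() << " mW\n";
    }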

For hardware synthesis, a product called C-to-Silicon

Compiler combines high-level synthesis with conventional

logic synthesis. It promises to let designers automatically convert

TLM system-level models into RTL and then to gates while

optimizing for area, timing, and power. By having the high-

level and logic synthesis integrated together in this fashion,

the compiler ensures that whatever RTL micro-architectures

are generated will be physically implementable.

Although the static timing and power analysis done by the

synthesis tools is indispensable to ensure correct hardware

implementations, it is not sufficient. Because actual

peak/average power consumption can vary tremendously

with operating conditions, dynamic analysis capability is

required as well. For this purpose, the Palladium product

family promises to estimate SoC power consumption

under “real-world” conditions while actual system software

is being executed.

Finally, verification consumes 60% to 70% of the R&D

effort on today’s SoC-development projects. With power-

aware designs, verification challenges grow more intense

as special register-transfer/gate-level features to reduce

power (e.g., clock gating) add further circuit complexity.

Power-management features under both hardware and software control are added. In addition, various design

modes must be verified at different operating voltages. The

net result is a huge expansion of the system state space,

which must be verified. Here, power modes become yet

another dimension of design parameters. All (or as many

as possible) combinations of power and operating modes

must be verified, or else problems such as data loss and deadlock conditions will arise.
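To see how quickly this space grows, consider just the cross product of power modes and operating modes. The toy C++ enumeration below (mode names invented for illustration) generates the coverage bins a power-aware, metric-driven flow would need to fill; real designs add legal-transition sequences on top of this.

    #include <cstdio>

    // Toy illustration of the state-space blow-up: every power mode must be
    // exercised in combination with every operating mode (and, in practice,
    // with the legal transitions between them as well).
    int main() {
        const char* power_modes[] = {"run", "idle", "sleep", "shutoff"};
        const char* op_modes[]    = {"boot", "playback", "record", "standby"};
        int bins = 0;
        for (const char* p : power_modes)
            for (const char* o : op_modes) {
                std::printf("cover: power=%s x op=%s\n", p, o);
                ++bins;
            }
        std::printf("%d combinations before transitions are counted\n", bins);
        return 0;
    }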

Power-aware, metric-driven verification with technologies

like Conformal and Incisive goes a long way toward solving

these challenges. With these technologies, the design's power-intent information is captured in a format like the Common Power Format (CPF).

It is then used during verification to ensure that the design

behaves as it would with all of the power-control logic in

the RTL. These tools are used to infer all of the power

modes that need to be covered in the design, automatically

creating the appropriate coverage metrics and assertions.

Verification then continues as normal. Now, however, it is

fully power-aware.



THE OLD VERSUS THE NEW

In the “classical” flow, teams are naturally reluctant to refine

their RTL for a 20% to 30% quality-of-results (QoR) gain (see the Figure).

This is because design-iteration loops take too long to be

useful for a project that typically lasts six to nine months. In

the new flow, however, greater automation and integration

allow iterative steps to be completed much faster and—in

many cases—in parallel. With a modern HLS engine,

logic synthesis comes built in. This capability enables

the designer to go directly from TLM SystemC to gates.

There’s no longer any need to fine-tune RTL or create

synthesis scripts, as must be done in the manual approach.

Similarly, when the logic-synthesis engine is integrated

with a hardware-emulation system (as in the case of RTL

Compiler with Palladium), calibration between toggle-

counts obtained from the emulation system and the actual

power consumption represented by those toggles at the gate

level is already done. It is guaranteed to be accurate as well.

With the major portion of today’s electronic products

targeting mobile applications, power consumption has

evolved to become a primary design constraint. Any effective

design flow and methodology must simultaneously consider

all design constraints (including power) in a seamless closed-

loop, multi-objective, planning-to-signoff solution.

The major recent EDA industry achievement has been to

extend this flow upwards in abstraction from RTL to TLM to

include power exploration, estimation, and analysis at every

step including SystemC/TLM design exploration, software

optimization, hardware (TLM and RTL) synthesis, and

physical design/signoff. In parallel with the implementation

flow, power verification has been extended. By leveraging

static, dynamic, and formal power-verification techniques

in a closed-loop verification methodology, design teams can

avoid last-minute power-related surprises. The net result is

to enable first-pass silicon success with 5X to 10X higher

engineering productivity.

Steve Svoboda is Technical Marketing Director for System Design and Verification

solutions at Cadence Design Systems. He

is a 15-year EDA veteran with Master's

degrees in both Electrical Engineering and

Engineering Management from Stanford

University, as well as undergraduate degrees

in EE and Economics from Johns Hopkins.

Figure: Here, the classical RTL and new TLM-driven design flows are

compared.

A Positive Indicator of Semiconductor Direction

By Jim Lipman

I think the San Jose version of the annual TSMC Technology

Symposium this past week is a good indicator of where the

semiconductor industry is going over the next few years. The

positive growth predictions of keynote speaker Morris Chang,

TSMC founder, chairman, and CEO (22% this year and 7% in 2011) are only one gauge of industry direction. Another is

what happens on the exhibit floor.

As in past years, there were plenty of companies willing to put

out the money and resources for an exhibit floor booth in San

Jose (which also includes smaller, tabletop exhibits this month

in Austin and Boston). From a potential customer perspec-

tive, foundry-driven shows such as the TSMC symposiums

provide an excellent opportunity to meet prospects who

are, largely, in the chip design business. Since my company,

Sidense, is an IP provider, this is a good audience for us.

However, the number of exhibitors is not a great measure of industry direction; the attendee base serves this purpose.

To read more, please visit the "Turning to Jim" blog at: http://www.chipdesignmag.com/lipman/



Take a New Approach to the Power Optimization of Algorithms and Functions

The power consumption of digital integrated circuits (ICs)

has moved to the forefront of design and verification

concerns. In the case of handheld, battery-powered devices

like cell phones, personal digital assistants (PDAs), e-books,

and similar products, users require each new generation to

be physically smaller and lighter than its predecessors. At the

same time, they expect increased functionality and demand

longer battery life. It’s therefore obvious why low-power design

is important in the context of this class of products. In reality,

however, low-power considerations impact almost every modern

electronic system—including those powered from an external

supply.

CHALLENGING THE CONVENTIONAL WISDOM

The architecture of a system is a first-order determinant of that

system’s power consumption. When it comes to the functional

blocks themselves, the hardware design engineer must determine

the optimal micro-architecture for each block. Different micro-

architectures have very different area, timing, latency, and power

characteristics. The register transfer level (RTL) is the earliest

stage of design abstraction at which it’s possible to gain sufficiently

accurate estimations of characteristics like area and power. If

created by hand, however, RTL is very fragile, complex, and

time-consuming to capture. As a result, there’s typically sufficient

time to create only one micro-architecture (or a very limited

number of micro-architectures). A wide range of alternative

implementation scenarios therefore remains unexplored. In

addition, RTL does not support sophisticated parameterization,

so an IP block cannot be retargeted into multiple systems-on-a-

chip (SoCs) with different area/speed/power targets.

The ideal scenario is to have an environment in which design

engineers can create and functionally verify behavioral

representations at a high level of abstraction. They should then

be able to quickly and easily convert these representations into

equivalent RTL for detailed power analysis. Furthermore, this

ideal scenario includes the ability to create a single

behavioral representation and to use it to generate and evaluate

a full range of alternative RTL implementations. To date, the predominant high-level alternative to RTL-based design has been to use sequential programming-based C/C++/SystemC

representations in conjunction with some form of behavioral

synthesis. While these approaches can raise the design’s level of

abstraction, they have significant limitations. For example, they

provide poor quality-of-synthesis results except for the narrow

range of application spaces that they can efficiently address.

Additional issues include the following:

• The model of computation in C/C++/SystemC (sequential, threaded, flat memory) has been fine-tuned to execute on von Neumann computing platforms. This approach

is inappropriate for hardware designs that feature fine-grain

parallelism and heterogeneous storage. As a result, common C/

C++ idioms and style (loops, pointers, byte-oriented data types,

etc.) must be laboriously “rewritten” to prepare an application for

C/C++ synthesis.

• C/C++ synthesis tools work by customizing a few generic

template architectures. As a result, there’s not much room

for architectural variation. Achieving good quality often

requires a “long tail” of effort, massaging both source code and constraints in tool-specific ways, because their effect on the architecture is not transparent. The resulting code/

constraints aren’t portable or maintainable. In addition,

the ad-hoc, proprietary constraint languages may not be

parameterizable, requiring multiple sets of source code and/

or constraints to cover the required architectural space.

• Automatic parallelization of sequential code is feasible

mainly for digital-signal-processor (DSP) -like (“loop and

array”) applications. This restricts its use to only a few blocks

of the design. Even within this “sweet spot,” it’s difficult to

address essential “system issues,” such as memory sharing,

caching, pre-fetching, non-uniform access, concurrency, and

integration into the full chip design.

• Loss of control over the process of timing closure is also a problem. When compiling a “behavioral”

C/C++ description into hardware, the semantic model

of the source (the sequential code) is so different from the

semantic model of the ensuing hardware that the designer

loses predictability. It’s difficult for the designer to imagine

what should be changed in the source to effect a particular

desired improvement in the hardware. Furthermore, small

and apparently similar changes to the source can result in

radically different hardware realizations.

By Rishiyur S. Nikhil, Bluespec, Inc.



The ability to evaluate a wide range of micro-architectures can yield better results than painstakingly hand-coded RTL. For many of the reasons listed above, however, not all

high-level (behavioral) languages and associated HLS engines

make it possible to automatically generate the full range of

micro-architectures for evaluation.

In the same way that it would be unimaginable for software

developers to neglect to evaluate alternative algorithms, it should

be unimaginable for hardware designers to proceed without

considering alternative micro-architectures. In reality, however,

the lack of rigorous micro-architecture evaluation is the norm

rather than the exception. But what if design engineers had the

ability to quickly and easily evaluate the entire gamut of micro-

architecture alternatives ranging from highly parallel to highly

serial implementations?

A NOVEL APPROACH

A new approach has emerged in solutions like PAClib, a plug-and-play library of common pipeline building blocks designed for constructing algorithms and datapath functions, as illustrated in Figure 1. (This is, of course, a very

simple representation; PAClib modules can be instantiated by

other modules and wrappers and so forth.)

For example, suppose a PAClib pipeline module called PF

computes some function f(x) on each input x. In addition, a

pipeline module called PG computes some function g(y) on

each input y. By cascading PF followed by PG, it’s possible to

construct a pipeline that computes the function g(f(x)) on each

input x. Alternatively, by cascading PG followed by PF, designers

can construct a pipeline that computes the function f(g(y)) on

each input y.
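PAClib itself is written in BSV, and its real modules carry buffering and flow-control parameters omitted here. Viewed purely functionally, though, the cascading rule just described is ordinary composition over per-token stages, as this language-neutral C++ sketch (names invented) shows:

    #include <functional>
    #include <iostream>

    // A stand-in for a PAClib-style pipeline module: maps each input token
    // to an output token. (Real modules also expose pipeline-buffer and
    // structure parameters; this sketch keeps only the functional view.)
    template <typename In, typename Out>
    using Stage = std::function<Out(In)>;

    // Cascading PF then PG yields a pipeline computing g(f(x)) per token.
    template <typename A, typename B, typename C>
    Stage<A, C> cascade(Stage<A, B> pf, Stage<B, C> pg) {
        return [pf, pg](A x) { return pg(pf(x)); };
    }

    int main() {
        Stage<int, int> pf = [](int x) { return x * x; }; // f(x) = x^2
        Stage<int, int> pg = [](int y) { return y + 1; }; // g(y) = y + 1
        std::cout << cascade(pf, pg)(3) << "\n"; // g(f(3)) = 10
        std::cout << cascade(pg, pf)(3) << "\n"; // f(g(3)) = 16
    }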

These PAClib building blocks are highly parameterized.

They allow designers to separately specify attributes like

computational functions, pipeline buffer insertion, and

pipeline structure choices—all without having to worry about

actual implementation details. By simply “dialing in” different

parameters, alternative micro-architectural versions are

automatically generated with correct pipeline control logic and

implementation details.

This PAClib library is written in Bluespec SystemVerilog

(BSV). It augments standard SystemVerilog with rules and

rules-based interfaces that support complex concurrency and

control across multiple shared resources and across modules.

BSV features the following: high-level abstract types; powerful

parameterization, static checking, and static elaboration; and

advanced clock specification and management facilities. One

of the key advantages is that the semantic model of the source

(guarded atomic state transitions) maps very naturally into

the semantic model of clocked synchronous hardware. BSV’s

computation model is universal (equally suitable for datapath

and control). So it can directly address system considerations,

such as memory sharing, caching, pre-fetching, non-uniform

access, concurrency, and integration into the full chip design.

With full architectural transparency, the designer also can

make controlled changes to the source with predictable effects

on timing. Due to the extensive static checking in BSV, these

changes can be more dramatic than the localized “tweaking”

techniques favored when working with standard RTL. As

a result, designers can achieve timing goals sooner without

compromising correctness. The end result of using PAClib is

that hardware design engineers continue to think like design

engineers. But they now have access to rapid algorithmic design

and architectural exploration capabilities.

24 MICRO-ARCHITECTURES FROM A SINGLE SOURCE

As an example, look at the Inverse Fast Fourier Transform

(IFFT) block used in the IEEE 802.11a transmitter system (see

Figure 2). The term 802.11a refers to a common IEEE standard

for wireless communication. The protocol translates raw bits

from the media access controller (MAC) into orthogonal-

frequency-division-multiplexing (OFDM) symbols in the form

of sets of 64 32-bit, fixed-width, complex numbers. The protocol

is designed to operate at different data rates. At higher rates, it

consumes more input to produce each symbol. Regardless of

the rate, all implementations must be capable of generating an OFDM symbol every 4 µs.

For the purposes of these discussions, it’s not necessary to

understand the 802.11a transmitter in any great detail. It’s

sufficient only to appreciate that the IFFT block can account

for approximately 90% of the silicon real estate, depending on

its implementation. Additionally, the critical timing path of the

IFFT is many times longer than the critical path of any other

block in the system. Thus, the focus here is on this block.

Figure 1: This graphic shows how PAClib module pipeline interfaces plug

together.



The IFFT is constructed from two basic computational functions,

f_radix4 and f_permute, which are treated here as black boxes.

Conceptually, the IFFT is a cascade of three identical stages as

illustrated in Figure 3. The input and output of each stage—and

of the IFFT as a whole—are vectors of 64 complex numbers

with 16-bit real and imaginary parts. Each stage also receives

a set of coefficients, which may be different for each f_radix4

instantiation.

The 64-element input vector to each stage is partitioned into 16

slices—each comprising four complex numbers. Each group of

four complex numbers is fed into an f_radix4 function, which

also has four complex-number outputs. Thus, the outputs from a column of 16 f_radix4 functions acting in parallel form a 64-element vector of complex numbers. This vector is fed into an

f_permute function, which permutes the vector. It then outputs

another 64-element vector of complex numbers, which forms

the output from this stage.

The mathematical details of the IFFT aren’t important for the

purposes of this article. Instead, the focus should be on the

underlying structure of the computation (see Figure 3). The

goal is to investigate how alternative micro-architectures—all of

which compute the same mathematical function—may differ in

area, performance (throughput, clock speed, latency, etc.), and

power.
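The structure just described translates almost directly into code. The following C++ sketch is shape-only: it uses double-precision complex numbers in place of the 16-bit fixed-point types, and stand-in bodies for f_radix4 and f_permute, which the article treats as black boxes anyway.

    #include <array>
    #include <complex>

    using Cx     = std::complex<double>;  // stand-in for 16+16-bit fixed point
    using Vec64  = std::array<Cx, 64>;
    using Slice4 = std::array<Cx, 4>;

    // Black boxes, as in the article. These stand-in bodies preserve only
    // the shape of the computation, not the real IFFT mathematics.
    Slice4 f_radix4(const Slice4& in, const Slice4& coeff) {
        Slice4 out;
        for (int i = 0; i < 4; ++i) out[i] = in[i] * coeff[i]; // placeholder
        return out;
    }
    Vec64 f_permute(const Vec64& in) { return in; }             // placeholder

    // One stage: 16 f_radix4 instances over 4-element slices, then f_permute
    // over the reassembled 64-element vector.
    Vec64 ifft_stage(const Vec64& in, const std::array<Slice4, 16>& coeff) {
        Vec64 mid;
        for (int s = 0; s < 16; ++s) {
            Slice4 slice;
            for (int i = 0; i < 4; ++i) slice[i] = in[4 * s + i];
            const Slice4 out = f_radix4(slice, coeff[s]);
            for (int i = 0; i < 4; ++i) mid[4 * s + i] = out[i];
        }
        return f_permute(mid);
    }

    // The whole IFFT: a cascade of three identical stages, each with its
    // own coefficient set.
    Vec64 ifft(Vec64 v, const std::array<std::array<Slice4, 16>, 3>& coeff) {
        for (int stage = 0; stage < 3; ++stage) v = ifft_stage(v, coeff[stage]);
        return v;
    }

    int main() {
        Vec64 v{};
        std::array<std::array<Slice4, 16>, 3> coeff{};
        v = ifft(v, coeff);
        return 0;
    }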

One possibility would be to implement the IFFT as a purely

combinatorial circuit (see Figure 4a). Another alternative would

be to add pipeline buffers to the outputs of the f_permute

functions as illustrated in Figure 4b and also to the inputs of the

f_permute functions as illustrated in Figure 4c. These buffers

increase the hardware cost. Yet they will likely decrease the

critical path length and allow synthesis at a higher frequency,

thereby increasing overall throughput. Yet another possibility is

to implement just one stage, but to loop the data through this

stage three times to replicate the actions of the three stages (see

Figure 4d).

For any of the preceding choices (except the purely combinatorial

implementation), it also is possible to vary the micro-architecture

implementations of each “stack” of f_radix4 functions. Instead of

16 f_radix4 functions, the designer might “funnel” or “serialize”

the 64-element input vector into two 32-element vectors. He

or she can run each of these vectors through the same group

of 8 f_radix4 instances and then “unfunnel” or “deserialize” the

emerging sequence of two 32-element vectors back into a 64-

element vector.

Alternatively, the designer could funnel the 64-element input

vector into four 16-element vectors and run each of these

vectors through the same group of four f_radix4 instances.

Or the 64-element input vector can be funneled into eight 8-

element vectors. Each of these vectors can be run through the

same group of two f_radix4 instances (see Figure 5). Yet another

alternative is to funnel the 64-element input vector into 16 4-

element vectors and run each of these vectors through a single

f_radix4 instance.
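In software terms, funneling is merely a choice of loop bounds; in hardware, it fixes how many f_radix4 instances exist and how many clocks each stage takes. A hedged, shape-only C++ sketch of the parameterized column follows (the f_radix4 signature and body are invented stand-ins):

    #include <array>
    #include <complex>
    #include <initializer_list>

    using Cx    = std::complex<double>;
    using Vec64 = std::array<Cx, 64>;

    std::array<Cx, 4> f_radix4(const std::array<Cx, 4>& in) { return in; } // stand-in

    // Funneled radix-4 column: `instances` may be 16, 8, 4, 2, or 1. The
    // 64-element vector streams through in 16/instances sequential passes;
    // within a pass, the instances operate in parallel. Fewer instances mean
    // less area but more clocks per vector (or a faster required clock).
    Vec64 radix4_column(const Vec64& in, int instances) {
        Vec64 out{};
        const int passes = 16 / instances;
        for (int p = 0; p < passes; ++p)            // sequential reuse: "funnel"
            for (int k = 0; k < instances; ++k) {   // parallel hardware copies
                const int s = p * instances + k;    // which 4-element slice
                std::array<Cx, 4> slice;
                for (int i = 0; i < 4; ++i) slice[i] = in[4 * s + i];
                const auto r = f_radix4(slice);
                for (int i = 0; i < 4; ++i) out[4 * s + i] = r[i]; // "unfunnel"
            }
        return out;
    }

    int main() {
        Vec64 v{};
        for (int inst : {16, 8, 4, 2, 1}) v = radix4_column(v, inst);
        return 0;
    }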

Figure 2: This high-level block diagram depicts an IEEE 802.11a

transmitter.

Figure 3: This high-level dataflow graph details the IFFT block.

Figure 4: Some alternative micro-architectures for the IFFT block are

shown here.



Because they re-use hardware, looping and funneling may seem

to be guaranteed to reduce silicon area. Yet these techniques

require additional buffers and control circuitry. Furthermore, for

a given target throughput, they will require higher clock speeds

with an associated cost in power consumption. The end result is

that there are a wide variety of potential micro-architectures. It

is difficult to predict which one will be “best” for a given set of

throughput, area, and power goals.
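A first-order calculation shows why the choice is not obvious. Assuming, purely for illustration, that a funneled column with n instances needs 16/n clocks per stage and that the looped variant passes data through its one physical stage three times, the minimum clock that still meets the 4-µs symbol budget scales with the serialization factor:

    #include <cstdio>
    #include <initializer_list>

    // Invented first-order model of the looped, funneled IFFT variants.
    // Minimum clock = cycles-per-symbol divided by the 4-us symbol budget.
    int main() {
        const double symbol_budget_s = 4e-6;  // one OFDM symbol every 4 us
        for (int inst : {16, 8, 4, 2, 1}) {
            const int cycles_per_symbol = 3 * (16 / inst);
            const double min_clock_hz   = cycles_per_symbol / symbol_budget_s;
            std::printf("%2d instances -> %2d cycles/symbol -> >= %5.2f MHz\n",
                        inst, cycles_per_symbol, min_clock_hz / 1e6);
        }
        return 0;
    }

Area falls as instances are removed, but the required clock rises with it, and so does dynamic power per gate; where the crossover lies depends on the buffers and control circuitry noted above.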

The PAClib library’s family of plug-and-play building blocks

allows all of the micro-architectures envisioned for the IFFT in

Figure 4 to be expressed in a single source using only 100 lines of

code. It is parameterized by the f_radix4 and f_permute modules

and the micro-architectural choices. Simply by controlling the

parameters associated with this design, this single high-level

representation can be quickly and easily used to generate, in this

example, 24 different micro-architectures. The design engineer

can then determine which structure provides the optimum

combination of throughput, area, and power characteristics

to address the requirements of his or her particular target

application.

In closing, the predominant high-level alternative to RTL-

based design has been to use sequential programming-based,

C/C++/SystemC representations in conjunction with some

form of behavioral synthesis. These approaches can raise the

design’s level of abstraction. Yet they have significant limitations

including poor quality-of-synthesis results except for the narrow

range of application spaces that they can efficiently address. In

addition, they lack the parameterization needed to be able to

express multiple micro-architectures uniformly. A new approach

can be found in solutions like PAClib. This plug-and-play library

of common pipeline building blocks is designed for constructing

algorithms and datapath functions. The PAClib library is

written in Bluespec SystemVerilog (BSV), which augments

standard SystemVerilog with rules and rules-based interfaces

that support complex concurrency and control across multiple

shared resources and modules. BSV’s computation model is

equally suitable for datapath and control.

By capturing an algorithmic design using PAClib and then

controlling the parameters associated with the PAClib modules,

a single high-level representation can be quickly and easily used

to generate multiple alternative micro-architectures. As a result,

design engineers can focus on determining which structure

provides the optimum combination of throughput, area, and

power characteristics to address the requirements of their

particular target applications. The end result is to dramatically

reduce the development cycle while significantly improving the

quality of results. Such approaches are predictable and seamlessly

integrate complex control. As a result, they don’t suffer from the

“long tail” of effort and difficulty addressing essential “system

issues,” such as memory sharing, caching, pre-fetching, non-

uniform access, concurrency, and integration into the full chip

design.

Rishiyur S. Nikhil is co-founder and

CTO of Bluespec Inc. Previously, he led

a team inside Sandburst Corp. that was

developing Bluespec technology. Nikhil

also served as acting director at Cambridge

Research Laboratory (DEC/Compaq)

and was a professor of computer science

and engineering at MIT. He holds patents

in functional programming, dataflow and multithreaded

architectures, parallel processing, compiling, and EDA. Nikhil

received his PhD and MSEE in computer and information

sciences from the University of Pennsylvania. He received

his Bachelor of Technology in electrical engineering from IIT

Kanpur.

Figure 5: Here, the micro-architecture of the f_radix4 “stack” is varied.


DOT.ORG

Creatively Supporting the EDA Community

By John Darringer, President of the IEEE Council on EDA

I’ve served as president of the IEEE Council on EDA

(also known as CEDA), a focal point since 2005 for EDA

activities spread across six IEEE societies — Antennas and

Propagation; Circuits and Systems; Computer; Electron

Devices; Microwave Theory and Techniques; and Solid

State Circuits.

Since its formation, CEDA has worked to expand its

support of emerging areas within EDA and brought more

recognition to members of the EDA profession. And as

my two-year term ends, I look back with pride at the many creative ways in which CEDA’s Executive Committee has expanded its support of that community.

Just announced is the formation of the Design Technology

Committee, a group of executives from EDA user

companies. The DTC’s goal is to work with groups inside and outside the IEEE to promote best-practice sharing and strategic solutions that address gaps between EDA capabilities and future needs.

CEDA now sponsors 14 EDA conferences and workshops,

including DAC, ICCAD, DATE and ASPDAC. It created

the Embedded Systems Letters, a new publication for

rapid communication of short notes in an increasingly

important EDA area. This adds to the quarterly Currents

newsletter and the mainstay TCAD Journal.

Two new awards presented in 2009 help to recognize the

accomplishments of members of our community. The

yearly Richard Newton Technical Impact Award is jointly

sponsored with the ACM Special Interest Group on

Design Automation (ACM SIGDA). It is awarded to an

individual or individuals for their outstanding technical

contributions to EDA, recognized over a significant period

of time. The Early Career Award, also to be presented

yearly, recognizes an individual who has made innovative

and substantial technical contributions to EDA in the

early stages of their career.

These awards complement the existing awards:

• The D. O. Pederson Best Paper Award presented in

the TCAD Journal

• The William McCalla Best Paper Award presented at

ICCAD

• The prestigious Phil Kaufman Award jointly sponsored

with the EDA Consortium

The ongoing Distinguished Speaker Series offers a number

of complementary events at EDA conferences and will

continue in 2010. CEDA has also begun experimenting

with new ways of reaching the EDA community, including

a live webcast of the ICCAD keynote. A digital edition of

Design and Test Magazine is available for a reduced fee, the first issue of Embedded Systems Letters is available online, and an online calendar of key EDA conference dates can be found at www.c-eda.org.

Cadence’s Andreas Kuehlmann will become CEDA’s

president in January and is committed to continued support

of the EDA community with more valuable activities in

2010. He and the rest of CEDA’s Executive Committee

would benefit from your help. I encourage you to

visit the CEDA website (www.c-eda.org) to learn more

about us and to get more involved. Organizations such as

CEDA are driven by the efforts of volunteers.

John Darringer is President of the IEEE

Council on EDA. He can be reached at

[email protected]. For more information

on CEDA, visit http://www.c-eda.org.



Register today at the Early Bird rates & save up to $200 on your Conference Pass! Or, register now for a FREE Expo Hall Pass!

Co-located with the Embedded Systems Conference Chicago!

Find the Solutions to Your Sensors & Sensing Technology Challenges!

Subscribers: Visit: www.sensorsexpo.com to register or call 877-232-0132 or 972-620-3036 (outside U.S.). Use discount code

F327M for an EXTRA $50 OFF a Gold or Main Conference Pass!

Gain the knowledge you need from leading experts and peers in the sensors industry. This year's Conference Program includes more than 40 Technical Sessions in 8 Tracks covering:

• Energy Harvesting
• Low-Power Sensing
• Wireless Networking
• Bio-Sensing
• MEMS & MCUs
• Monitoring Tools & Applications
• Novel Approaches to Measurement
• Power/Smart Grid Monitoring & Control

Identify specific solutions to your most difficult detection and control-related challenges on the expo floor.

Sensors Expo brings together the largest and best-in-class showcase of sensing technologies and systems for attendees to evaluate and make informed decisions.

PRODUCED BY: OFFICIAL PUBLICATION: SILVER SPONSOR: MEDIA SPONSOR:

Conference: June 7-9, 2010 • Exhibits: June 8-9, 2010 • Donald E. Stephens Convention Center • Rosemont, IL • www.sensorsexpo.com


TOP VIEW

NoC Technology Offers Smaller, Faster, and More Efficient Solutions

By K. Charles Janac, Arteris Holdings

Today’s SoC designers are at an inflection point that requires

a re-thinking of how complex designs are implemented.

They are challenged by the increasing number of IP blocks that must be managed on a single chip, a trend that introduces complexity, performance, and power issues which strain traditional approaches to implementing an SoC “infrastructure.”

As a result, Network-On-Chip (“NoC”) technology is rapidly

displacing traditional bus and crossbar approaches for SoC on-

chip interconnect. The reason is simple: NoC architectures result in

smaller, faster, and lower-power on-chip interconnects than previous approaches. With viable commercial offerings, proven in production, now available on the market, NoC’s time has arrived

as a mature and efficient way to help SoCs scale.

High performance requirements, quality of service (QoS) needs,

and physical design constraints make the integration of an increasing

number of heterogeneous IP cores in an SoC a formidable challenge.

It’s made even more challenging because traditional on-chip

interconnect is taking up an increasingly large part of the SoC design,

not only in chip area, but in design complexity and design resources

as well. Previous generation approaches such as a bus, crossbar or

hybrid combinations of buses and crossbars cannot meet area and

power budgets, or frequency targets. Even in the smaller designs,

power requirements alone make traditional interconnect approaches

sub-optimal.

Designers are moving toward NoC architectures because they offer a

lightweight, packet-based communication system that meets stringent

area and power requirements, without sacrifices in performance.

In a NoC, data packets travel between processing elements in the interconnect through physical links, and the physical properties of each link can be independently configured according to bandwidth, latency, clocking, power, or physical-design requirements. Interconnect

processing elements can be switches, data-width converters, clock-

domain converters, power-isolator blocks, security modules and

others. Packet-based formatting allows the processing elements to be

very simple and link configurability provides the optimal link design

for each communication path.
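Arteris's actual packet format and configuration interfaces are not described in this article. Purely as an illustration of what independently configured links mean in practice, a per-link record might carry fields like these (all names and values hypothetical):

    #include <string>
    #include <vector>

    // Hypothetical per-link configuration record for a NoC. Each physical
    // link can be tuned independently, which is what lets one interconnect
    // satisfy very different constraints on different paths.
    struct LinkConfig {
        std::string name;
        unsigned    data_width_bits;  // serialization width (wiring cost)
        unsigned    clock_mhz;        // link's own clock domain
        unsigned    pipeline_stages;  // added latency on this link
        bool        power_isolated;   // link can be shut off with its domain
    };

    int main() {
        // The latency-critical CPU-to-DRAM path gets a wide link with no
        // added pipeline stages; a peripheral path trades width and latency
        // for fewer wires and the ability to power down.
        std::vector<LinkConfig> links = {
            {"cpu_to_dram", 128, 800, 0, false},
            {"periph",       32, 200, 2, true},
        };
        (void)links;
        return 0;
    }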

A common misconception is that NoC architectures introduce

additional latency. But robust NoC solutions allow for packet

configurability. A flexible packet format allows the designer to make

the optimal wiring versus latency trade-off on a connection-by-

connection basis, including the configuration of packet formats that

do not insert any additional latency, as may be required by the CPU-to-DRAM path.

The flexible topology enabled by the true NoC approach allows

designers to independently address the many design constraints

imposed upon the interconnect by the SoC. The NoC architecture

removes the need to divide up the SoC into independent islands,

each governed by a standalone arbitration scheme, and requiring all

interconnected IP to conform to the same interface standard, clock

and data width. The ability of the NoC architecture to address

unrelated constraints in an orthogonal way leads to superior results

and improved SoC economics.

Because of the complexity and design requirements of modern SoCs,

it is inevitable that a NoC approach will become the de facto way to develop these devices. The question remains whether semiconductor

companies will develop such technology in-house or will opt to license

it. Over the last fifteen years, the industry has been relatively slow in

moving towards licensing interconnect technology. The reasons have

been twofold: the existence of prior infrastructures and not enough

“bang-for-the-buck” in commercial interconnect products to warrant

the switching cost. Now, however, commercial NoC products are

outperforming traditional internal solutions by significant margins,

and the costs faced by semiconductor vendors for the internal

development of a new interconnect infrastructure around an NoC

architecture cannot be justified. Indeed, the semiconductor vendors

that are using NoC technology for on-chip interconnect today are

already reaping the benefits of bringing faster, lower power and

cheaper products to the market.

K. Charles Janac is the Chairman, President

and Chief Executive Officer of Arteris Holdings.

Charlie has over 20 years’ experience building

technology companies. He was employee number

two of Cadence Design Systems, and later served

as CEO of HLD Systems, Smart Machines and

Nanomix. Born in Prague, Czech Republic, he

holds B.S. and M.S. degrees in Organic Chemistry from Tufts

University and an MBA from Stanford Graduate School of Business.

He holds a patent in polymer film technology.



NO RESPINS

MEMS Is Poised to Cross the Chasm

By Dr. Joost van Kuijk, Coventor

Microelectromechanical systems (MEMS) are micro-

scale or nano-scale devices. Typically, they’re fabricated

in a manner similar to integrated circuits (ICs) to exploit the

miniaturization, integration, and batch processing benefits of

semiconductor manufacturing. Yet unlike ICs, which consist

solely of electrical components, MEMS devices combine

technologies from multiple physical domains. They may contain

electrical, mechanical, optical, or fluidic components.

Spurred by growth in consumer electronics, the total market for

MEMS is projected to grow more than 40 percent from 2008 to

2012. It will go from just over $7 billion worldwide to over $13

billion, according to market research firm Yole Développement. The

market for MEMS in mobile phones is expected to grow by more

than 4X during this period to $2 billion.

As promising as these forecasts sound, only a few large integrated device manufacturers (IDMs) are

well positioned to benefit from this rapidly growing market. This is

due to the specialized expertise, long development time, and high

cost of bringing MEMS devices to market. Almost all MEMS

devices are tightly integrated with electronics—either on a common

silicon substrate or in the same package. Yet MEMS design has

traditionally been separated from IC design and verification.

MEMS devices are typically designed by PhD-level experts in

such fields as mechanical, optical, and fluidic engineering. They

use their own two-dimensional (2D) and three-dimensional (3D),

mechanical computer-aided-design (CAD) tools for design entry

and finite-element-analysis (FEA) tools for simulation. Eventually,

the MEMS design must be handed off to an IC design team in

order to go to fabrication. But the handoff typically follows an

ad-hoc approach that requires a lot of design re-entry and expert

handcrafting of behavioral models for functional verification.

Moreover, MEMS historically requires specialized process

development for each design, resulting in a situation often

described as “one process, one product.” While there are a number

of specialized MEMS foundries, support from pure-play foundries

has been very limited. According to one analyst report, it takes an

average of four years of development and $45 million in investment

to bring a MEMS product to market.

Several trends are converging to make this level of effort and

expertise unacceptable—not only for new entrants in the MEMS

market, but for the best-positioned IDMs as well. First, the fast-

paced consumer-electronics market demands design cycles that

are measured in months, not years. In addition, design costs must be low enough to maximize return on investment and profitability quickly.

Secondly, the market is demanding more functionality from MEMS

devices. For example, enhanced sensitivity requires that more analog and digital circuitry be placed around MEMS devices.

The third trend is the rise of advanced packaging technologies, such

as system-in-package (SiP) and chip stacking with through-silicon

vias (3D IC). These technologies will allow manufacturers to

package all of this functionality more densely, combining multiple

MEMS sensors with analog and digital dice in a single package.


These demands make MEMS more susceptible to unwanted

coupling between sensing modes as well as between the MEMS

sensors and electronics. The present approach to MEMS design—

with separate design tools and ad-hoc methods for transferring

MEMS designs to IC design and verification tools—is simply not

up to these new challenges. The time has come to “democratize”

MEMS design and bring it into the IC design mainstream. The

result would be reduced design costs and shortened time to market.

In addition, the MEMS design would no longer be confined to

teams of specialists inside IDMs.

A critical key to accomplishing this “democratization” is to build

an integrated design flow for MEMS devices and the electronic

circuits with which they interact. A structured design approach

should be used that avoids manual handoffs. Companies like

Coventor and Cadence are now working together to develop such

integrated methodologies. Their goal is to shield IC designers from

the complexity of MEMS design while reducing the time, cost, and

risk of developing MEMS-enabled products.

Dr. Joost van Kuijk is vice president of marketing

and business development at Coventor. Dr. van

Kuijk has more than 16 years of experience in

the MEMS field, specializing in modeling and

simulation. He received a PhD in micro system

technology from Twente University, where he also

received a diploma in technology information.

In addition, Dr. van Kuijk holds an MSc in

mechanical and precision engineering from Delft University.



The Global Semiconductor Alliance (GSA) mission is to accelerate the growth

and increase the return on invested capital of the global semiconductor industry

by fostering a more effective fabless ecosystem through collaboration,

integration and innovation. It addresses the challenges within the supply chain

including IP, EDA/design, wafer manufacturing, test and packaging to enable

industry-wide solutions. Providing a platform for meaningful global collaboration,

the Alliance identifies and articulates market opportunities, encourages and

supports entrepreneurship, and provides members with comprehensive and

unique market intelligence. Members include companies throughout the supply

chain representing 25 countries across the globe.

GSA Member Benefits Include:

Access to Profile Directories

Ecosystem Portals

IP ROI Calculator

IP Ecosystem Tool Suite

Global Semiconductor Funding, IPO and M&A Update

Global Semiconductor Financial Tracker

Semiconductor End Markets

AMS/RF PDK Checklist

AMS/RF Process Checklist

AMS/RF SPICE Model Checklist

Consumer Electronics Study

Discounts on various reports and publications including:

Wafer Fabrication & Back-End Pricing Report

Understanding Fabless IC Technology Book

IC Foundry Almanac, 2009 Edition

SiP Market & Patent Analysis

Global Exposure Opportunities:

Advertising

Sponsorships

Member Press Section on GSA Web site

Company Index Listing on Ecosystem Portals

Member Spotlight on GSA Web site

12400 Coit Road, Suite 650 | Dallas, TX 75251 | 888-322-5195 | T 972-866-7579 | F 972-239-2292 | www.gsaglobal.org


One system, infinite verification possibilities.

First, EVE's ZeBu emulators broke the billion-cycle barrier. Now, ZeBu-Server, EVE's next-generation emulation system, has broken the billion-gate barrier. Scalable to handle up to 1 billion ASIC gates, and with execution speeds up to 30 MHz, ZeBu-Server is a multi-mode, multi-user emulator suitable for all system-on-chip (SoC) hardware-assisted verification needs across the entire development cycle, from hardware verification and hardware/software integration to embedded software validation.

Used in virtually every ASIC/SoC industry, from graphics and computer peripheral applications, to processor and wireless mobile applications, ZeBu emulators are truly a universal solution.

ZeBu-Server: Billions of Cycles for Billions of Gates

© 2010 EVE. All rights reserved. Contact us at 408-457-3200 or 888-738-3872 (toll-free) // www.eve-team.com // [email protected]

Contact EVE-Team today for a FREE consultation at [email protected], or visit us online at www.eve-team.com.

Speed, Capacity, & Lowest Cost of Ownership.