Designcon 2011 – Day 1

I am at Designcon 2011 this week to present a paper that I co-wrote with Paul Shad, titled “Case Study for PID Control in an FPGA.”  Here are my notes on the sessions and presentations that I attended today.

TT-MA5 – Intelligent Hardware / Software Interface Design

This was presented by Gary Stringham based on his book, Hardware / Firmware Interface Design.
Here is his web site:  www.garystringham.com

Principles vs. practices – principles are broad guiding concepts, while practices are individual approaches to tasks and activities that help support the broad principles.

He has identified 7 principles that lead to successful HW / SW integration:

  1. Collaboration
    Early collaboration is very important
    Documentation is a key means of collaboration
  2. Set and adhere to standards
    Some are internal, but you should adhere to industry standards whenever possible
  3. Balance the load
    Allocate functionality appropriately to HW and SW based on resources
  4. Design for compatibility
    The goal is that any version of the HW works with any version of the SW.  Try not to break existing firmware with new HW revisions
  5. Anticipate the impacts
    Consider how the HW design affects the SW – a feature (or the lack of one) might add a lot of complexity later on
  6. Design for contingencies
    Test and debug access
  7. Plan ahead
    “There’s never enough time to do it right, but there’s always enough time to do it again.”

Collaboration Practices

  • Ambassadors between HW and SW teams
  • SW team is part of the HW architecture phase
  • Direct contact – initial on site kickoff meetings

Planning Practices

  • Use existing industry standards where possible
    • don’t tweak them, implement them exactly as specified
    • e.g., CAN with a custom application protocol on top – don’t change CAN itself; make your customizations at the higher layers
  • Use the latest version of every block – another project may have used the block, discovered defects, and fixed them, and you want those fixes included in your design as well
  • Document all defects and errata in all blocks, even if they seem insignificant
  • Keep HW/SW interactions in the design as simple as possible
  • Post-mortems
    • Hold one on every project
    • More importantly, review previous projects’ post-mortems when starting a new project based on previously used blocks

Documentation Practices

  • Document templates
  • Write documentation at the beginning of the design
  • Register design tools – define your registers in a tool, and it generates the HDL, source code, and documentation automatically (see the sketch after this list)
  • Document the interactions between any registers
    • Including the order in which the HW expects them to be set
  • Use horizontal tables for bit fields
  • Document all conditions that could cause every error message
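
As a rough illustration of the register design tool idea above, here is a sketch (in C, with entirely hypothetical names and addresses) of the kind of firmware header such a tool might generate alongside the matching HDL and documentation:

    /* Hypothetical auto-generated header for a "UART" block -- a sketch of
     * what a register design tool might emit from one register description. */
    #include <stdint.h>

    #define UART_BASE         0x40001000u        /* assumed base address */

    /* UART_CTRL register, offset 0x00 */
    #define UART_CTRL_OFFSET  0x00u
    #define UART_CTRL_EN_POS      0              /* bit 0: block enable */
    #define UART_CTRL_EN_MASK     (1u << UART_CTRL_EN_POS)
    #define UART_CTRL_BAUD_POS    4              /* bits 7:4: baud select */
    #define UART_CTRL_BAUD_MASK   (0xFu << UART_CTRL_BAUD_POS)

    /* Accessor generated from the same description that produced the HDL,
     * so the firmware, hardware, and documentation cannot drift apart. */
    static inline void uart_set_baud(uint32_t baud_code)
    {
        volatile uint32_t *ctrl =
            (volatile uint32_t *)(UART_BASE + UART_CTRL_OFFSET);
        *ctrl = (*ctrl & ~UART_CTRL_BAUD_MASK) |
                ((baud_code << UART_CTRL_BAUD_POS) & UART_CTRL_BAUD_MASK);
    }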

Super Block

  • Design blocks to include all possible functionality, even if only using a subset
    • Don’t add unnecessary functionality; rather, don’t remove existing functionality that you aren’t using from blocks that are being reused
  • Common firmware interface regardless of feature set of derivative products
  • Design blocks with enable inputs that can be hard-tied off for unused functionality; the synthesis tool will then remove that logic without requiring a different design file (a loose C analogue is sketched below)
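
The enable-tie practice itself is an HDL technique, but a loose C analogue (my own sketch, not from the talk) makes the intent clear: a feature constant hard-tied at build time lets the compiler strip the dead code while one source file serves every derivative product.

    #include <stdio.h>

    /* Hypothetical feature enable, tied off at build time for a derivative
     * product -- loosely analogous to tying an HDL enable input low so the
     * synthesis tool removes the unused logic from a single design file. */
    #define FAX_FEATURE_ENABLED 0

    static void handle_fax_job(void)
    {
        if (FAX_FEATURE_ENABLED) {
            /* Dead code when the feature is tied off; the compiler removes
             * it, yet the common firmware interface stays identical. */
            printf("processing fax job\n");
        } else {
            printf("fax feature not present on this product\n");
        }
    }

    int main(void)
    {
        handle_fax_job();
        return 0;
    }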

Design

  • Make internal status information available in registers for diagnostics when errors occur (state machine bits, internal counters, etc.)
  • Multiplex shared chip inputs to the different blocks that use them.  Don’t leave a shared input connected to a block when that block is not supposed to be watching the signal.
  • Always provide event indicator signals from the HW to the SW for all events.  Avoid blind delays in the software to wait for HW actions to complete (see the sketch below).
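
To illustrate the last point, here is a minimal C sketch (hypothetical register names and addresses) of waiting on a HW "done" indicator with a bounded poll, rather than a blind delay:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical status register and bit definition, for illustration. */
    #define BLOCK_STATUS_REG  (*(volatile uint32_t *)0x40002004u)
    #define STATUS_DONE_MASK  (1u << 0)

    /* Poll the HW-provided event indicator instead of sleeping for a fixed
     * time and hoping the HW has finished.  Returns false on timeout. */
    static bool wait_for_done(uint32_t max_polls)
    {
        while (max_polls--) {
            if (BLOCK_STATUS_REG & STATUS_DONE_MASK) {
                return true;      /* HW signaled completion */
            }
        }
        return false;             /* report the error; don't just guess */
    }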

Registers

  • Avoid write-only bits
  • Do not mix different writable bit types in the same register
  • Block level ID and version registers for all major functional blocks in the design
  • Provide atomic access to registers that more than one device driver or thread will access
    • This is accomplished with write-only set/clear registers – the one exception to the rule against write-only bits (sketched below)
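
The write-only set/clear pattern the presenter allowed as the exception looks roughly like this in C (hypothetical addresses; many SoCs provide exactly this kind of register pair):

    #include <stdint.h>

    /* Hypothetical interrupt-enable register with write-only SET and CLR
     * companions: writing 1 to a bit in SET enables that interrupt, writing
     * 1 to the same bit in CLR disables it, and 0 bits are ignored. */
    #define IRQ_ENABLE_SET  (*(volatile uint32_t *)0x40003000u)
    #define IRQ_ENABLE_CLR  (*(volatile uint32_t *)0x40003004u)

    #define IRQ_UART_MASK   (1u << 3)
    #define IRQ_TIMER_MASK  (1u << 5)

    void enable_uart_irq(void)
    {
        /* A single write with no read-modify-write sequence, so another
         * driver or thread touching a different bit at the same time
         * cannot be clobbered. */
        IRQ_ENABLE_SET = IRQ_UART_MASK;
    }

    void disable_timer_irq(void)
    {
        IRQ_ENABLE_CLR = IRQ_TIMER_MASK;
    }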

Errors and Aborts

  • Provide an abort for every task that the HW performs (a sketch follows this list)
  • Think about error handling and abort functionality up front
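
A minimal sketch of the abort idea (hypothetical registers again): the firmware requests the abort and then confirms that the HW actually reached idle, rather than assuming it did.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical control/status registers for a DMA-style block. */
    #define BLK_CTRL          (*(volatile uint32_t *)0x40004000u)
    #define BLK_STATUS        (*(volatile uint32_t *)0x40004004u)
    #define CTRL_ABORT_MASK   (1u << 1)
    #define STATUS_IDLE_MASK  (1u << 0)

    /* Request an abort, then confirm the HW actually reached idle. */
    static bool abort_block(uint32_t max_polls)
    {
        BLK_CTRL = CTRL_ABORT_MASK;
        while (max_polls--) {
            if (BLK_STATUS & STATUS_IDLE_MASK) {
                return true;
            }
        }
        return false;   /* HW stuck: escalate to a reset or error handler */
    }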

Overall, the presenter had a lot of good ideas for hardware-to-software interfacing that are applicable to ASIC designs, but also to FPGA designs like the ones I have been involved with.

Keynote Address: Harold Hughes, CEO of Rambus

Not much memorable was said here.  The presenter read a speech, made little eye contact, and spoke for only about 15 minutes of the allotted 30.  At past Designcon keynote addresses I remember dynamic presenters laying out a vision of the past, present, and future of the electronics industry.  This one was a bit of a disappointment.

TT-MP1 – Rethinking How Signals Interact with Interconnects

This was presented by Eric Bogatin, Jeff Loyer (Intel), Olufemi Oluwafemi (Intel), and Stephen Hall (Intel).

This presentation covered the differences between two approaches to describing signal integrity:  the current/voltage/circuits view vs. the electromagnetic fields and waveguides view.

The lumped-element RLGC circuit model of a transmission line is very powerful for understanding and characterizing return currents, impedance, ground bounce, reflections, terminations, and attenuation, and for engineering to minimize loss.  However, there are several situations where circuit models are not useful and may even give the wrong intuition.  The presenters chose four such areas to focus on, as follows.

Copper Roughness

In summary, surface roughness increases the effective surface area and concentrates the E and H fields, because they are no longer simply orthogonal to the dielectric material.  This means that more energy is required to drive the same response as a smooth conductor – or, equivalently, less signal gets through, hence more loss.  The increased loss is NOT due to the current flowing up and down the ridges and thus taking a longer path, as you might predict using a circuit-model approach.

Modal Decomposition

One of the presenters claimed that far-end crosstalk exists only when there are differences between the even and odd modes, as in microstrip, and does not exist in striplines.  I didn’t really understand this explanation.
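
For context (my addition from standard coupled-line theory, not from the session): the forward (far-end) coupling coefficient is proportional to the difference of the capacitive and inductive coupling ratios,

    k_{\mathrm{FEXT}} \propto \frac{1}{2}\left(\frac{C_m}{C} - \frac{L_m}{L}\right)

In a homogeneous dielectric (stripline) the two ratios are equal, so the term cancels; in microstrip the inhomogeneous air/dielectric boundary makes them differ, which is presumably the even/odd mode difference the presenter was referring to.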

Waveguide Via

The presenter showed an example of a waveguide via (a signal via surrounded by ground vias) and a simple signal via with a ground via very close by.  In the example simulations, the second case appeared better because it had less impedance discontinuity.  In reality, however, the waveguide via kept the fields better contained, giving less radiation (or crosstalk in the near field), so overall it performed much better.  This was simply meant as an example to show that the fields are important to consider, not just the impedance.

Causality of Transmission Line Models

Traditional transmission line simulators assume that only R and G vary with frequency and that L and C are constant.  This leads to models that are not causal, which starts to matter at higher frequencies.  The loss tangent increases with frequency; but because more energy is lost as frequency increases, correspondingly less energy is stored in the C of the model.  Therefore the dielectric constant must be decreasing while the loss tangent is increasing.  The same is true for L with respect to R: R increases with frequency due to the skin effect, but as the current crowds toward the outside of the trace, the inductance must be decreasing because fewer magnetic field lines surround the current.  The presenter contended that these effects matter for simulations above 2 Gbps.  All of my experience simulating at these data rates has been with W-element models (or equivalent) that did not account for this.  Some tools are starting to account for it, such as Simbeor, Mentor HyperLynx, and Ansoft (maybe others).
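
The underlying requirement is causality: for any causal material response, the real and imaginary parts of the complex permittivity are tied together by the Kramers-Kronig relations, so if the loss term varies with frequency, the stored-energy term must vary too.  In standard notation (my addition for context, not an equation shown in the session), with \varepsilon(\omega) = \varepsilon'(\omega) - j\varepsilon''(\omega):

    \varepsilon'(\omega) - \varepsilon_\infty
        = \frac{2}{\pi}\,\mathrm{P}\!\int_0^{\infty}
          \frac{x\,\varepsilon''(x)}{x^2 - \omega^2}\,dx,
    \qquad
    \tan\delta(\omega) = \frac{\varepsilon''(\omega)}{\varepsilon'(\omega)}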

One of the presenters has a new book:
Advanced Signal Integrity for High-Speed Digital Design by Stephen H. Hall and Howard Heck.

TP-M2 – Closed Eye: Determining Proper Measurement Approaches in a 3rd Gen Serial World

This was a panel discussion on test and measurement of multi-gigabit serial signals (I have attended this same panel in previous years, when the discussion focused on jitter measurement).  The panelists were:

  • Ransom Stephens – Ransom’s Notes
  • Mark Marlett – Xilinx
  • Mike Peng Li – Altera
  • Eric Kvamme – LSI
  • Greg Le Cheminant – Agilent
  • Tom Waschura – Tektronix (formerly Synthesys Research)
  • Marty Miller – LeCroy

Apparently Tektronix acquired Synthesys Research last year, which I was not aware of (see this article).  The panel had some arguments about random jitter, and concluded that de-embedding techniques are fairly immature – don’t expect agreement between different vendors’ tools.  Overall, I came away from this panel still thinking that the industry does not know the correct way to measure multi-gigabit signals and assess them as pass/fail.  The only true measure of quality is performance (BER), which is time-consuming to measure.  All other tools are diagnostic at best.
