Although I am not in the electronics industry, I do understand a little bit. As part of continuing education, I am enjoying listening to the lectures and getting ideas... it feels like I am in a university classroom learning about the subject...
I do appreciate the lecture, and it is very interesting to learn more about the subject, especially as I am now preparing for an instrumentation and control certification examination...
I am now on slide 13 of this topic... thanks to Sir Brian...
Good presentation, answered a lot of questions. Also, the comment/blog dialogue was useful in bringing up ideas and answering questions - good job, all...
Verification methodologies at the FUNCTIONAL level should include both functional coverage and path coverage. The hybrid approach, along with segment- and unit-level testing, helps the verification process. Constrained random is a sort of hybrid approach at the stimulus-response (functional) level, but it may or may not take path testing into account, which should not be overlooked.
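To make the functional-coverage side of this concrete, here is a minimal SystemVerilog sketch of a covergroup for a hypothetical packet interface; the signal names, bins, and ranges are illustrative assumptions, not something from the class:

```systemverilog
// Hypothetical packet-monitor coverage: names and bins are illustrative only.
module packet_coverage (
  input logic       clk,
  input logic       valid,
  input logic [1:0] pkt_type,   // e.g. 0 = data, 1 = control, 2 = error
  input logic [7:0] pkt_len
);
  covergroup cg @(posedge clk iff valid);
    cp_type : coverpoint pkt_type {
      bins data    = {0};
      bins control = {1};
      bins error   = {2};
    }
    cp_len : coverpoint pkt_len {
      bins short_pkt  = {[1:16]};
      bins medium_pkt = {[17:128]};
      bins long_pkt   = {[129:255]};
    }
    // Cross coverage: have we seen every packet type at every length range?
    type_x_len : cross cp_type, cp_len;
  endgroup

  cg cov = new();
endmodule
```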
Finally caught up to the class and took a look ahead. This is a good overview of functional verification at a very high level. Implementations in mission-critical domains such as the military, medical, or automotive industries have processes and mechanisms that can provide insight into, and examples of, the process.
Jack Ganssle's class in the Digi-Key CEC (and his writings online) on the approach, methods, concepts, and philosophy for design, test, and debug in general can also help in developing a mindset of good practices.
@digital angel - That is normally the role of the verification plan, and it is mostly an informal process today. Many designers say going from that to the coverage model is one of the most difficult aspects of verification.
Brian, say a project engineer with three designers under him/her wants to ensure technical/functional and contract coverage of a design. Is there an intermediate approach in going from spec to verification that can be handed off to a designer?
@pbrodeur - Thanks. I forgot I had put it there. That was salvaged from a site that no longer exists. The book does go into the subject in a lot more depth. When I first came up with the concept, it even took a while before verification experts accepted it, but eventually they did. It is not widely talked about because the EDA industry has no tools for positive verification.
@asicsoc - Most people today would use constrained random, as it allows you to spend your time defining how tests should be created rather than actually creating the tests. The actual tests come for free. Constrained random is also very useful for finding the things you never thought to test. However, constrained random will fail to fill some coverage holes, and for those you revert to directed testing using the same predictors/checkers.
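To make the idea concrete, here is a minimal sketch of a constrained-random transaction in SystemVerilog; the class, fields, and constraints are hypothetical, not from the course material:

```systemverilog
// Hypothetical constrained-random packet: fields and constraints are illustrative.
class packet;
  rand bit [1:0] pkt_type;
  rand bit [7:0] pkt_len;

  // Constrain the generator toward legal stimulus; the solver picks the rest.
  constraint legal_type { pkt_type inside {0, 1, 2}; }
  constraint legal_len  { pkt_len  inside {[1:255]}; }
  // Bias toward short packets, which often stress control logic harder.
  constraint len_dist   { pkt_len dist { [1:16] := 4, [17:255] := 1 }; }
endclass

module tb;
  initial begin
    packet p = new();
    repeat (10) begin
      if (!p.randomize()) $error("randomization failed");
      $display("type=%0d len=%0d", p.pkt_type, p.pkt_len);
    end
  end
endmodule
```

A directed test targeting a specific coverage hole would simply assign pkt_type and pkt_len explicitly instead of calling randomize(), while reusing the same predictors/checkers.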
@alka - Yes, but we have to make sure that when IP blocks are connected together, they actually perform the necessary function. Think of this in board terms: even if the components all work, does that imply that your design works? You have to do high-level tests as well to prove the device does something useful.
Going from a spec to a set of assertions is difficult and takes some experience. There is a library (the Open Verification Library) that has some typical components and assertions predefined. It may be good to look at those to see the kinds of things and ways that assertions can be defined. High-level assertions can be very difficult to write.
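As an illustration of the style (a made-up request/grant handshake, not an actual OVL component), a simple SystemVerilog assertion might look like this:

```systemverilog
// Hypothetical handshake check: signal names are illustrative.
module req_gnt_check (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Every request must be granted within 1 to 4 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt : assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");
endmodule
```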
I think it's worth mentioning that a formal scoreboard is different from a simulation scoreboard. A formal scoreboard is more about packet integrity. For example, the JasperGold tool from Jasper Design Automation uses a formal scoreboard to prove data integrity across clock domain crossings (CDC).
On the other hand, a simulation scoreboard keeps track of the tests - which ones have been done and which remain. At the end, the simulation scoreboard should be empty. It's more like a test tracker.
Pos/Neg is a difficult concept to explain in a couple of minutes, and in part the verification methodologies in use today almost discourage the use of positive verification. Directed test is much better in this regard.
While SystemC can be used at RTL, I highly discourage its usage there. It is slow compared to Verilog/VHDL simulators. But at the system level it is good, and the fact that it can "contain" C code, which may already exist, makes it very useful. SystemVerilog can also contain C, but not quite as readily.
I find the difference between positive and negative verification rather vague. I don't remember those concepts being addressed in Janick's original book (circa 2000). Does he cover this subject in the SystemVerilog book?
I have successfully used SystemC for behavioral modeling of hardware, and I was also able to pull in and execute (cross-compiled) portions of the application processor code within the model, again at a high level of abstraction.
As far as I know, SystemC is only rarely used. It seems to me that a lot of people want to know about it, but it is not used. I assume that most engineers do not get the power of this library and the concept beneath it. It enables you to take the step from the algorithmic level to the RTL level very easily. That's because it supports both TLM and RTL. But I guess there are still a lot of misunderstandings, because most engineers come from either the software side OR the hardware side. There are only a few out there who bring both together.
The response checker would know how to check that two packets are consistent, whereas the scoreboard contains a repository of the packets that should be seen on the output. A reference model may also be required that explains how a packet gets transformed.
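As a minimal sketch of that division of labor (reusing the hypothetical packet class from the earlier constrained-random example; all names here are illustrative), the scoreboard might hold the expected packets while the checker compares them against observed outputs:

```systemverilog
// Hypothetical scoreboard/checker split: names are illustrative only.
class scoreboard;
  // Repository of packets we expect to see on the output, in order.
  packet expected_q[$];

  // The reference model pushes a (possibly transformed) packet here.
  function void add_expected(packet p);
    expected_q.push_back(p);
  endfunction

  // The response checker calls this with each observed output packet.
  function void check_actual(packet p);
    packet exp;
    if (expected_q.size() == 0) begin
      $error("unexpected packet on output");
      return;
    end
    exp = expected_q.pop_front();
    // Consistency check between expected and observed packets.
    if (exp.pkt_type != p.pkt_type || exp.pkt_len != p.pkt_len)
      $error("packet mismatch: expected type=%0d len=%0d, got type=%0d len=%0d",
             exp.pkt_type, exp.pkt_len, p.pkt_type, p.pkt_len);
  endfunction

  // At the end of the test, the scoreboard should be empty.
  function void report();
    if (expected_q.size() != 0)
      $error("%0d expected packets were never seen", expected_q.size());
  endfunction
endclass
```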
SystemC is the language of choice for high-level models and virtual prototypes (which I haven't really talked about). However, SystemC is no good for verification, and likewise SystemVerilog is no good for high-level modeling (an extreme view - there is lots of overlap).
I had not mentioned white-box/black-box differences. With white box, it is assumed you can see everything in the design. With black box, the inside is invisible - you can only see what is on the pins. Assertions can be written that sit inside a black box and identify errors internally, thus making the innards more visible and highlighting problems when they happen, rather than waiting for them to propagate onto the output pins.
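One way to place such assertions inside a block without modifying its source (a sketch, assuming the hypothetical req_gnt_check module from the earlier assertion example and a made-up arbiter module) is SystemVerilog's bind construct:

```systemverilog
// Hypothetical: attach the req_gnt_check assertions to every arbiter instance
// without touching the arbiter's source code.
bind arbiter req_gnt_check u_req_gnt_check (
  .clk   (clk),
  .rst_n (rst_n),
  .req   (req),
  .gnt   (gnt)
);
```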
At the system level, it becomes too difficult for constrained random to find useful tests. Maybe I want to ensure that I can take a picture using my device's CCD sensor, pass it through some DSP functions, and display it on the screen. I would write the code to run on the processor, or use production code and the stimulus, to perform the intended function. I can then check that the image produced is correct.
Assertions are used for white-box verification. What other formal verification techniques can be used to improve observability in the design? Can you suggest how one can use assertions in Verilog?
It seems like a lot of people are still using Verilog. For those, Janick does have a version of the book that deals with Verilog, but a lot of constrained random is not supported properly by those languages. We will talk about the new concepts added to Verilog tomorrow when I introduce SystemVerilog.
Packets get dropped when the design doesn't work, or when it is deemed OK to drop them. Maybe it is a quality-of-service issue that says all low-priority packets do not have to get delivered. They can retry when the network is less busy.
As an example, @tariq - Mentor has both a constrained random solution that they market the heck out of and a graph-based stimulus generator. Their own analysis shows that the graph-based generator produces vectors that achieve the same coverage using 1/10 the number of vectors. Needless to say, they don't market their other solution very hard, and I even believe that their graph-based solution is nowhere near the best solution possible.
The first Tacoma Narrows Bridge was a Washington State suspension bridge spanning the Tacoma Narrows strait of Puget Sound between Tacoma and the Kitsap Peninsula. It opened to traffic on July 1, 1940, and dramatically collapsed into Puget Sound on November 7, just four months later.