@gongji: As with CPP directives, there are good and bad ways to use /* */ and //. Using /* */ to comment out a chunk of code is bad; #if 0 and #endif should be used for that. The book "Embedded C Coding Standard" by Michael Barr allows the use of both /* */ and //. It also says comments should not be used to comment out code, nor should comments be nested.
I am surprised you use /* */ for comments. We have a rule to use only // for comments. Since you cannot nest /* */, we reserve it only for quick code changes during debugging when you may want to comment out a large portion of code temporarily. I guess you can also use #ifdef for that purpose...
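A minimal sketch of why #if 0 is the safer tool for disabling code: /* */ cannot nest (the first */ ends the comment), while #if 0 blocks nest cleanly, even when the disabled region already contains another #if 0 pair. The function and values here are invented for illustration.

```c
/* #if 0 regions nest; block comments do not. This function still
   compiles and returns x * 2 even though the disabled region itself
   contains a nested #if 0 ... #endif pair. */
static int scaled(int x)
{
#if 0
    /* temporarily disabled during debugging */
    x *= 10;
#if 0
    x += 1;          /* nested disable: still fine */
#endif
#endif
    return x * 2;    /* only this line is compiled */
}
```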
@andrewcrlucas: Interesting about X-Macros. I have never heard of it. But I "invented" that very technique and published it in the April, 2000, issue of Research Disclosure (researchdisclosure.com) under the title, "Structuring Source Code Data in One File to Be Used in Multiple Files." I used that technique in HP's LaserJet code. Now I see from an online article that apparently that technique was used back in 1968. I hate it when I invent something that's already been invented.
@sjaffe: Several have commented today on the benefits of run-time switching. I will cover that tomorrow. Yes, compile-time does mean multiple executables but it also reduces the size of code that has to be downloaded into the device. A low-end product wouldn't want to pay for all the memory required to store (and not use) the code necessary for the high-end version.
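A hedged sketch of the compile-time side of this trade-off; HIGH_END_PRODUCT and product_features() are hypothetical names, not from the talk. With a compile-time switch, the low-end image simply never contains the high-end code, which is what keeps its memory footprint down.

```c
/* HIGH_END_PRODUCT would be set only by the high-end build
   (e.g., -DHIGH_END_PRODUCT on the compiler command line). */
static const char *product_features(void)
{
#ifdef HIGH_END_PRODUCT
    return "base+advanced";   /* compiled into the high-end image only */
#else
    return "base";            /* low-end image stays small */
#endif
}
```

The cost, as noted above, is one executable per product variant to build, test, and track.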
Hi Gary, You didn't mention that using compile time switches implies that you'll have multiple executable files that you'll need to track. With run time switching, you can have just a single executable for multiple products.
@mRlu2012: I don't have much space here to write, but the "if (regStatus.active)" on slide 16 is the same as "if (*regStatus & BIT_ACTIVE)" on slide 17, just implemented differently. The method on slide 16 can be dangerous, though; the reason takes too long to explain here. Send me an email if you want a more detailed explanation.
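A hedged sketch of the two styles being compared (the slide contents are assumed from the quoted expressions): a bit-field member versus an explicit mask test. One commonly cited danger of the bit-field form is that bit-field layout and ordering are implementation-defined, which matters for hardware registers; the mask form is unambiguous.

```c
#include <stdint.h>

#define BIT_ACTIVE 0x01u

typedef struct {
    unsigned active : 1;   /* slide-16 style: compiler picks the layout */
    unsigned ready  : 1;
} reg_status_t;

/* slide-16 style: let the compiler extract the bit */
static int is_active_field(const reg_status_t *s)
{
    return s->active;
}

/* slide-17 style: explicit mask against a raw register byte */
static int is_active_mask(const uint8_t *regStatus)
{
    return (*regStatus & BIT_ACTIVE) != 0;
}
```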
The reason I #undef on slide 10 is to make sure they are clear for the subsequent #define. If it was already defined I would get a compiler error. Also by #undef'ing it first, then if I inadvertently forget to #define it to something, it will make sure the check on slide 12 will work properly and not "pass" the test because someone else had #defined it before.
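The #undef-then-#define pattern described above can be sketched as follows (the macro name and values are invented; the slide numbers refer to the talk). Redefining a macro to a different value without the #undef is a constraint violation, so clearing it first guarantees a clean redefinition, and a later #ifndef check catches the case where the #define was forgotten.

```c
#define FEATURE_LEVEL 1      /* stale definition from somewhere else */

#undef  FEATURE_LEVEL        /* clear unconditionally: no redefinition error */
#define FEATURE_LEVEL 2      /* the value this file really wants */

#ifndef FEATURE_LEVEL        /* slide-12-style sanity check: fails the build
                                if the #define above were forgotten */
#error "FEATURE_LEVEL must be defined"
#endif

static int feature_level(void)
{
    return FEATURE_LEVEL;
}
```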
Yeah, most people I know that were using CodeWright have moved to SlickEdit. I enjoy SlickEdit myself -- it is quite powerful. I do think you can still buy CodeWright, but I do not believe it is actively maintained.
I bought a copy of the Pro version of Hex Editor Neo to do the heavy hexadecimal lifting. Even editors like Codewright don't handle inserting values into a file and searching or "value filling" the right way. A dedicated editor is much better if you have to do surgery on large binary files.
@carmacks @rlallier Yes, we use the Hex Editor plug-in a lot, too, and like it a lot.
And, yes, very feature rich. There are just some visual anomalies I've noticed with some of the plug-ins, like a text compare one we use, that I'd like cleaned up. Otherwise, it is a great editor for the price!
@rlallier Thanks. Being just as much a "Windows" programmer as an embedded programmer, I've come to rely heavily on the niceties of the IDE editor -- especially as I get lazier and lazier!
Haven't used Codewright in a while; maybe it's time to check it out again. I probably need to look into SlickEdit, UltraEdit, and the like again, too. I'm sure all of those tools have advanced considerably.
@Tocard One thing I love about Notepad+ is its Hex Editor plugin. It has the most readable output of any editor I have ever used, including stand alone hex programs. I don't know if it will do really in depth pattern matching or table files, but I don't use the latter function anyway. But yes, I definitely love Intellisense and Code Complete. Nothing quite like it.
@Tocard Oh, that's a nice trick. I use SlickEdit now. Just recently migrated after being a Codewright holdout. Still getting familiar with the features. Some of the people in some of the other groups use Codewarrior. I tried it, but some of its features seemed a bit less powerful than Codewright's. That may just be prejudice talking. I've used Codewright for years and only did a brief eval of CodeWarrior.
External editors are your friend -- I always find them to be vastly superior to any IDE bundled with a chip or compiler. I have played around with CodeWright, SlickEdit, Netbeans, Eclipse, Notepad+ ... Can't recommend going that route enough.
@rlallier I had also used Codewright in the past and remember the code folding. From my limited playing with the Eclipse-based CodeWarrior editor, it handled nesting really well. I set it up to make the non-compiled code very light gray -- almost invisible on the white background. It made it super easy to ignore the non-compiled code, but still allowed me to see it if I wanted. And I had some blocks nested two levels deep (I tried never to go beyond that) and it worked just fine. Don't know about four, though!
@Tocard I used to use Codewright, which had a feature that would collapse blocks of excluded code automatically based upon parsing the files' defines. The problem with that is that sometimes, it is actually helpful to see what code is NOT getting compiled as well... Color coding conditional blocks sounds like an improvement on that. The problem we had was that the conditional blocks became nested, sometimes up to four levels deep! I wonder how a color-coding editor would handle nested blocks?
@rlallier About having to untangle #defines: there are several editors -- I believe Eclipse and NetBeans are two of them -- that have a built-in feature to color-code lines that do or do not get compiled in, based on the #defines.
We use IAR here, so I don't get that benefit. But if I had really tangled code, I'd probably recreate the project in one just to help out, since I don't have your nice text utility.
I wish more of our legacy developers had had the benefits of this lecture. We got really burned using product defines and feature defines intermingled. It leads to horrific nesting of conditional compiles that are hard to disentangle and hard to maintain. We developed a small text utility that processes a source file and uses the EBCDIC extended graphics characters to "draw" lines in the listing file showing the blocks of conditional compiles and their nesting.
I try to avoid using #undef as much as possible... Too great a chance of unintended side effects. If I *have* to do it, I do it only at the beginning of a file, making absolutely sure I won't be stepping on anything else.
tniles: I like all your comments and questions. You bring up excellent points. Regarding the "flexible electronics," you mentioned that one buys the electronics once then later buys apps. In that example, the hw is not flexible; you don't buy hw "apps." It's sw that is flexible. But then looking from the side of the app producer, they have to write their app to work on the many different versions of hw out there. In this case, then yes, run-time switching is needed.
@Tocard The problem with "fixing it correctly" long after the mistake is made is that you end up having to re-build and re-test everything that uses the code you fixed -- not to mention re-releasing new versions of the software. If you don't re-build and re-release, you might end up confusing later code changers who, unaware of the previous changes, have to spend time consulting version-control diffs to find out "what else is new" in the code besides the things they themselves have worked on. It's a trade-off.
While 80-column line lengths may seem old-fashioned, they do make it easier to print code segments, to have two code windows side by side, to use diff tools without having to scroll back and forth horizontally, and to view the code in a debugger, among other benefits.
@MarkO For a quick fix as you described, I probably would have done the same thing. However, it sounds like a Band-Aid as opposed to a robust long-term solution. Not trying to argue; just trying to understand this particular bit of the C language. But it sounds as if, were time not an issue, rewriting the #defines to make them correct for both the high- and low-end products would be better.
But I think Gary had a good point. I am a big proponent of nested #includes myself, too. This could be a really good reason for #undef.
Just last night, I had to make a quick fix to code that needs to ship today. Two different compile configurations of the same source code were broken two years ago, when both versions were forced to have the same basic settings via #define. The less-used version was broken by replacing an input configuration with the now-common basic settings.
To fix the less-used version without rewriting the more-used version and retesting all the code, I used #undef to remove the changes, but only from the less-used version, and did not change any of the more-used version's code.
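A hedged reconstruction of the shape of that fix; the macro names (INPUT_MODE, LESS_USED_PRODUCT) and values are invented. The shared settings force one value on both builds; only the less-used build #undefs it afterward and restores its own value, so the more-used build's code is untouched.

```c
#define INPUT_MODE 1          /* "common basic settings" value, both builds */

#ifdef LESS_USED_PRODUCT      /* defined only by the less-used build */
#undef  INPUT_MODE
#define INPUT_MODE 3          /* restore the configuration the change broke */
#endif

static int input_mode(void)
{
    return INPUT_MODE;        /* 1 here, since LESS_USED_PRODUCT is not set */
}
```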
@tniles: The reason to use #undef is to make sure those values were not somehow defined earlier. This is useful since a particular .h file could be #included multiple times in one file. #include file1.h which #includes fileA.h, then #include file2.h which also #includes fileA.h.
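The nested-include scenario can be sketched as below (the macro name is invented; the body of the hypothetical fileA.h is pasted in twice to stand in for the two inclusion paths). By #undef-ing before each #define, the second arrival redefines cleanly instead of risking a redefinition diagnostic.

```c
/* --- contents of a hypothetical fileA.h, reached twice:
       file1.h -> fileA.h, then file2.h -> fileA.h --- */

#undef  TICKS_PER_MS
#define TICKS_PER_MS 1000    /* first inclusion */

#undef  TICKS_PER_MS
#define TICKS_PER_MS 1000    /* second inclusion: no error */

static long ticks_per_ms(void)
{
    return TICKS_PER_MS;
}
```

The more common alternative, of course, is a classic include guard (#ifndef FILEA_H / #define FILEA_H / #endif) around the whole header so the body is only seen once.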
@rlallier: True about the 132 columns, yet if printed using 10-point Courier New on an 8.5 x 11" sheet of paper in portrait mode with even modest margins, lines will get clipped or wrapped. If you migrate to landscape mode, you get fewer lines per page, yet most of those lines are shorter than 80 columns anyway. It really comes down to personal preference.
It will always be quicker to code functionality than to explain why the code is doing what it is doing. Relying on the code to document itself is pernicious: if you get into the habit of believing your code documents itself, you lose sight of how, months from now, not everyone will understand why your code is doing what it plainly says it is doing. Comments explain why, even when the functionality itself does not seem to require the explanation.
Are vendors still trying to sell "self-documenting code" with their source-control tools? Those features do work, but only if you put in so much effort that it would be easier to document the code in a different fashion.
@tniles: I knew I would be pushing some hot buttons with how I use CPP directives, including macros. But as you said, they do have their place. And as I have emphasized several times, there are good ways and bad ways to use CPP or any other tool. For example, don't try to use a hammer to put in a screw.
@pdxesto Precisely. I think the concept tends to induce laziness in engineers. Using descriptive names and good coding practice is important, but the idea of "commentless, self-documenting code" should have died with COBOL.
@kdavidson: You asked why not use SCAN_TIMEOUT_S when I had just talked about SCAN_SCALAR_US. Good question. SCAN_SCALAR_US will be needed by other people when reusing code for another product but they won't necessarily have to see or deal with SCAN_TIMEOUT. It wouldn't hurt to call it SCAN_TIMEOUT_S but it wouldn't have the exposure that SCAN_SCALAR has in a reusable context.
Great talk about the use of the pre-processor. It is an amazing tool and, though not specific to code reuse, invaluable for automatically generating code. If you haven't heard of X-macros, look over the following link to a Stack Overflow question I answered a while back: http://stackoverflow.com/questions/6635851/real-world-use-of-x-macros
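For readers who haven't seen the technique, here is a minimal X-macro sketch (not from the talk; the color list is invented): one list generates both the enum and the matching name table, so the two can never drift out of sync.

```c
/* The single source of truth: each entry is an X(...) invocation. */
#define COLOR_LIST \
    X(RED)         \
    X(GREEN)       \
    X(BLUE)

/* Expansion 1: build the enum. */
#define X(name) COLOR_##name,
enum color { COLOR_LIST COLOR_COUNT };
#undef X

/* Expansion 2: build the parallel string table. */
#define X(name) #name,
static const char *color_names[] = { COLOR_LIST };
#undef X
```

Adding a color means touching only COLOR_LIST; both generated artifacts update together.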
@Tocard: occasionally it is helpful to define a macro that aids in table construction or some other repetitive coding, then #undef that macro at the end of the table or code section to make it obvious to a maintainer that this macro is not intended for general use elsewhere.
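A hedged sketch of that table-construction pattern (names and values invented): the helper macro makes the repetitive entries compact, and the #undef right after the table signals to a maintainer that the macro is local to this one use.

```c
typedef struct {
    int id;
    int scale;
} entry_t;

/* Helper macro for this table only. */
#define ENTRY(n) { (n), (n) * 10 }

static const entry_t table[] = {
    ENTRY(1),
    ENTRY(2),
    ENTRY(3),
};

#undef ENTRY   /* deliberately retired: not for general use elsewhere */
```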
"Self-documenting code," uh huh... I'm working with some of that this week -- pretty frustrating -- for the most part I have no idea what it was intended to do. I know what it does. Those are two different issues.
Let's remember, too, that we're now in the age of flexible electronics. We ship the hardware and, at run time, allow users to purchase new features. Clearly, compile-time feature switches fall short here. Something to think about, imho.
@pdxesto: 80 columns turns out to be a good natural limit that transcends its roots in Hollerith punched cards. I have inherited code with lines that are hundreds of columns wide, and end up having to scroll or wrap to see what the line is doing.
@Mr.E I agree that being too explicit can make names very long, but that's sometimes a user preference, in order to reduce possible confusion. I have also made "short" #define names but added explicit comments on those lines instead.
I use #define, but I had serious problems with an IAR compiler and had to change almost everything from #define in main.h to const in main.c. The problem was that the #define data types were set by the compiler, whereas the const data types were set by me.
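A hedged illustration of the typing difference described above (the names and values are invented, and the specific IAR behavior is not reproduced here): a #define has no type of its own, so the compiler types the literal in each context, while a const object carries exactly the type the author declared.

```c
#include <stdint.h>

/* Object-like macro: no intrinsic type; 500 is typed where it's used. */
#define TIMEOUT_MS_DEF 500

/* const object: the author fixes the type as uint16_t, period. */
static const uint16_t timeout_ms = 500;
```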
Expert user here. I ran into the problem of using directives for large differences; I should have gone with product-specific include files. It really makes a mess to use too many switches. I had to write a tool to help parse the code sections and disentangle the nesting.
@bobymacs: I was just wondering how to "game" the system to enhance the odds. Is it signing up, logging in, chatting, etc., that serve as the gating criteria? I seldom win anything, but they offer the enticement for some reason.
Hey Paul S., I'll check into using a phone connection. Many participants who work at companies that block streaming can play the program when it's archived (just a few minutes after the program is over).
The streaming audio player will appear on this web page when the show starts at 2 pm Eastern today. Note, however, that some companies block live audio streams. If you don't hear any audio when the show starts, try refreshing your browser.