Programming has not only given us smartphones, tablets, and extremely useful software, it has changed the way electrical and computer engineers design circuits. For example, an LED effect such as a blinking light can be built fairly simply with a 555 timer circuit. With a cheap MCU, however, the circuitry is simpler still, and a few lines of code achieve the same effect. A few more lines can produce far more complex lighting effects.
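To make that comparison concrete, here is a minimal blink sketch, assuming an Arduino-style MCU board with an LED wired to pin 13 (the board, pin number, and timing are illustrative assumptions, not details from the article):

    // Blink an LED roughly once per second; a few lines replace a 555 timer circuit.
    const int LED_PIN = 13;            // assumed pin; many boards wire a built-in LED here

    void setup() {
      pinMode(LED_PIN, OUTPUT);        // configure the LED pin as an output
    }

    void loop() {
      digitalWrite(LED_PIN, HIGH);     // LED on
      delay(500);                      // wait 500 ms
      digitalWrite(LED_PIN, LOW);      // LED off
      delay(500);                      // wait 500 ms
    }

Changing the delay values, or using analogWrite() on a PWM-capable pin, turns this same handful of lines into fades and other effects, flexibility a fixed 555 circuit can't match.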
The Intel 8008 was the world's first general-purpose 8-bit microprocessor. After its introduction, MCUs quickly began replacing discrete logic; their small size, ease of use, and programmability won engineers over. In the 1980s, however, memory was scarce and, more importantly, expensive. In January of 1983, memory cost approximately $2,396 per megabyte, whereas today it costs approximately half a penny per megabyte. Consequently, the cost of memory limited the applications MCUs could be used for. Engineers had to program with caution, keeping their code as short as possible so that it did not surpass the memory limit.
Bjarne Stroustrup, creator of C++. (Source: Wikipedia)
Today, MCUs are cheap and memory is abundant. Furthermore, MCUs have been integrated into products we use every day: displays, printers, keyboards, phones, washing machines, microwaves, and, most importantly, cars. A new luxury car today can contain hundreds of MCUs.
With all the great development in software and MCUs up to this point, it is imperative that engineers build a strong foundation in programming. Programming not only gives people deep insight into how computers work, it opens up a large array of new uses for their computers. Overall, programming has woven technology into nearly every part of our lives. It is the powerful force driving modern technology, and we still have a long road ahead.
Other notable happenings in 1982:
Sun Microsystems was incorporated.
Adobe was founded; its PostScript page description language later went into the Apple LaserWriter.
Symantec was founded, mostly selling security and information management software.
Hercules, maker of high-resolution PC graphics cards, was founded.
Maxtor was founded; it has since been absorbed into Seagate.
The Commodore 64 debuted with 64 kilobytes of RAM at $595 (prices soon fell to around $200), going on to become the best-selling computer model of all time.
The Epson HX-20, the first notebook-sized portable computer, debuted.
Stroustrup suggests that the origin of the name C++ is related to George Orwell's Newspeak, described in the appendix to his novel 1984: "any word ... could be strengthened by the affix plus-, or, for greater strength, doubleplus-." By that scheme, C++ is a strengthened, "plus"-affixed C. See Orwell, 1984, pp. 315 and 322.
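The "plus" affix also puns on C's own increment operator, which Stroustrup cites as the more direct source of the name; a tiny sketch of the operator the name plays on:

    #include <iostream>

    int main() {
        int c = 0;
        c++;                     // C's increment operator: "C plus plus"
        std::cout << c << "\n";  // prints 1
        return 0;
    }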
tekochip - I think you hit the nail on the head with "limited resources." Based on my experience in interviews and seeing CS grads in action, it seems many CS schools don't touch on coding efficiency.
Now that I think about it, I haven't seen strong evidence that CS schools even touch on good code. Graduates seem to hit the field either "having the knack" or (to paraphrase Steve Martin) "not having the knack."
That is for certain. The process of getting from program writing to correctly functioning code can be quite daunting at best.
The closest I get to embedded code is writing detailed functional specifications that describe both the "screens" and the I/O actions in sequential detail. The challenge is that in addition to creating the description of what happens when everything goes exactly right, it also needs to describe what happens when things don't function correctly. That part requires an excellent understanding of the entire non-software portion of the system, and sometimes requires extensive discussions with the person writing the actual code. The problem there is that most programmers don't seem to be quite normal people. It is not clear to me if it is programming that makes them that way, or if they are programmers because they are that way.
William, learning a tool or software for professional work and learning a programming language are different things. In some tools, command-line access and script writing are very much required for executing batch files during simulation or synthesis. Such command-line access may be difficult for most non-IT professionals.
AutoCAD wasn't the first drafting program at all, but 1982 is when Autodesk started. Now they are an industry standard: the first "modern" CAD package, as we know it, that stuck. Keep in mind, Photoshop wasn't the first graphics program, and the iPod wasn't the first MP3 player either, but they are now considered the most important.
I agree, apresher. I hired a couple of CS majors for my embedded designs and quickly found that they really didn't have the proper skill set for developing embedded designs. I'm sure there are CS guys who can get the job done, but EEs perform much better with the limited resources (execution speed, memory, etc.) of an embedded design, and EEs also do a better job of troubleshooting system errors since they have a better understanding of glue logic and the other components in a system.
WilliamK, I agree with your perspective on the need for other skill sets beyond programming. The biggest challenge over the past several years, and continuing into the future, is the need for multidisciplinary engineering teams that combine a broad set of capabilities, from mechanical to electrical to electronic design and software. There's no doubt that engineers need to understand how software works, but not everyone needs to have specific programming skills themselves.
Electrical and electronic engineering are quite different from program writing; probably almost opposites. Yes, engineers do need to understand a lot about computers if they are going to design a system that uses or interfaces with one. But designing circuits that actually function within cost and size constraints is quite different from creating code so bloated that one person can't even read all of it, let alone understand exactly what it is doing.
Seriously doubt he ever really uttered it, but IBM's Thomas Watson supposedly said in the 1940s that there would only ever be a need for about five computers in the United States. Microcomputers changed that whole paradigm.
They came on the scene when computer science was making some great strides, and application software written by companies other than the computer manufacturer was really coming into its own.
BTW, AutoCAD was hardly the first computer-aided design software. There were numerous, completely incompatible and incomprehensible electronic drafting programs before it. Autodesk had the vision to see that the PC would grow up.
As for Java, don't slight the contributions of Sun co-founder Bill Joy in the development and emergence of the language. Interestingly, in light of recent security issues, the license has always included a caveat that it was not to be used to program safety systems: medical devices, missiles, or nuclear sites.