Hi, as a mechatronics engineer I am used to defining product specifications that are later implemented according to user needs. In my experience it is hard to design a sophisticated system without knowing what's inside the black box, or to apply control laws to a system that must be reliable, like an alarm or safety system; that kind of work is better done in code, such as assembly or C. I consider graphical programming faces a high challenge: it must become intuitive and clear to the user, and it should also have its own intelligence to detect possible errors in the design that could later turn into failures.
In addition to generating the code, graphical tools also give you an excellent way to document your software in a way that you know will be accurate. Many tools also work with a variety of languages and give you the ability to quickly create a cross-platform solution. A tool I frequently use is DevFlowCharter.
Certainly, I can code without it, but by using the graphical tool I can document work for agency approvals, patents, and lay people, and I have no problem maintaining my own work years after creation, when it's difficult to remember exactly what I was doing. I also use the tool when maintaining other engineers' code, so I have a way to understand what they developed and how I might fix the problem.
In a way it's like using a high level language. Yes, I can write in Assembly, sometimes I still do, but by writing in C the code is easier to maintain, easier to understand. Graphical Programming is just the next step up from a high level language.
Let's throw it into a real-world application... let's hypothesize by asking the programmer to run a five-axis water-jet head for cutting carbon fiber composites. The surface geometry is complex; normally it would take Spock to program the cutter... or would it? Maybe CNC (G-code) Cartesian programming is the answer? http://www.robotmaster.com/success/success4.php
These types of tools will let engineers concentrate their expertise on solving problems rather than on creating custom algorithms and controls. I've heard some companies on Wall Street have coded trading algorithms into FPGAs using high-level tools. Sure, coding in C or C++ is a valuable skill, but using high-level tools to quickly implement and test algorithms beats the heck out of trying to implement them in an FPGA one line of code at a time.
Judging by the broad array of tools out there to help with code development, there seems to be a larger trend toward aiding engineers at varying levels of expertise. Seems like a natural progression that's being recognized to some extent by the market, and I would think that university engineering programs would be smart to recognize this trend, as well. Future engineers are going to need these skills.
If you want to "move up" the level of code development for a project of moderate complexity, look at the VisualSTATE tool from IAR Systems (http://www.iar.com/en/Products/IAR-visualSTATE/) that links with the company's integrated-development environment packages. IAR has a lot of helpful information on its Web site.
Also, the book "Practical UML Statecharts in C/C++" by Miro Samek provides a way to use the Unified Modeling Language to create program flow that then translates to code. ISBN: 978-0-7506-8706-5. Amazon lists this book for US$39.
Yes, you expressed it very well. Anyone who can logically express the idea can program and get the result. This will also reduce time to realisation for many projects. Though to a certain extent LabVIEW has done this for the engineering community, it was not meant for pure enthusiasts. People are motivated by instant results, and I am sure these kits deliver that.
I can't help wondering, though, if we might be setting up for a fall by teaching the graphical programming model. I agree, it's fine for quick-turn prototypes, but we all know that prototypes often turn into long-lived production-critical monsters. I "inherited" a factory-automation project that was done in HP Vee. I quickly found out how difficult it was to modify, as there was little structure, and worse: having made changes, I could not generate a diff report! All our difference or patch or merge tools assume text source files! Having non-SW engineers (especially young people) do graphical programming is fine for an intro, but config mgt and metrics tracking, etc., are necessary disciplines that must eventually be taught, and as far as I can tell, these are built on the text-based model.
John E, I'm with you. After programming in Forth and ANSI C for a decade, I moved into academia and needed to select a programming environment that allowed undergraduates to design and build systems that involved a software component, without having time for a traditional CS degree. LabVIEW was the logical choice and is a natural fit for first-time coders. Not having to "translate" and transition from a text-based language, they can concentrate on the design of the program flow and visualize the system as the flow of information among connected components, much like the systems diagrams we use for energy and material distribution systems. As we advance through the curriculum, the freshmen start out with simple transducer acquisition, but by senior year they are familiar with artificial intelligence, robotics, and knowledge discovery in databases... all in LabVIEW. I'm glad to see graphical languages being used.
Robots that walk have come a long way from simple barebones walking machines or pairs of legs without an upper body and head. Much of the research these days focuses on making more humanoid robots. But they are not all created equal.
The IEEE Computer Society has named the top 10 trends for 2014. You can expect the convergence of cloud computing and mobile devices, advances in health care data and devices, as well as privacy issues in social media to make the headlines. And 3D printing came out of nowhere to make a big splash.
For industrial control applications, or even a simple assembly line, that machine can go almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine would come in. The smart machine is one that has some simple (or complex in some cases) processing capability to be able to adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.