Sneak preview of Eclipse tips and tricks webinar

I’m totally addicted to Photoshop. But, like every other Photoshop user, I’m resigned to the fact that I’ll never really master the program. It simply has too many tools, any of which can be used in a thousand different ways.

Eclipse CDT is a lot like that. This tooling environment, which forms the basis of products such as the QNX Momentics Tool Suite, is so feature rich that you can take years to become a true power user.

My colleague Andy Gryc, who has helped customers with Eclipse issues, has seen this problem first hand. And it gave him an idea: What if he canvassed a number of advanced Eclipse users and collected their favorite productivity tips?

He did just that, and the result is a webinar called “Hot Tips and Tricks for the Eclipse IDE.” Andy will cover automatic code formatting, code folding, advanced search, automatic refactoring, call hierarchy navigation, plug-ins, keyboard shortcuts, custom breakpoint actions, and many other techniques for boosting productivity.

Sample techniques
To give you a taste, here are a few techniques that Andy will cover. Keep in mind that I've chosen some of the simpler examples — the webinar will also explore more advanced topics.

Viewing definitions and prototypes
If you press <Ctrl> and hover your pointer over an identifier, the identifier becomes a hyperlink. Simply click it to view the definition or prototype:

Prompting for command-line arguments
To prompt for command-line arguments when launching an executable, go to the program’s Launch Configuration, click the C/C++ Program Arguments tab, and insert the ${string_prompt} literal:
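For instance, the variable can be mixed with literal arguments in the same field. A minimal sketch (the flag names here are purely hypothetical, not from the webinar):

```
--log-level debug ${string_prompt}
```

When you launch the program, Eclipse pops up a dialog asking for the string, then substitutes your input into the argument list before the executable runs.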

Detaching views
If you use multiple monitors, detaching a view from the main window can come in really handy. Simply right-click on the header and select Detach:

Again, this is just a sample — Andy will also cover template proposals, variable directory paths, automated header file include, function completion, automatic structure completion, expansion of #define’s, version compare, and other techniques.

The webinar occurs Thursday, November 3, 2011 at 2:00 pm EST. For more information or to register, click here.


QNX, Freescale talk future of in-car infotainment

QNX and Freescale enjoy one of the longest technology partnerships in the field of automotive infotainment. The roots of their relationship reach back to 1999, when QNX became a founding member of MobileGT, an industry alliance formed by Motorola, Freescale's parent company, to drive the development of infotainment systems.

If you've read any of my blog posts on the QNX concept car (see here, here, and here), you've seen how mixing QNX and Freescale technologies can yield some very cool results.

So it's no surprise that when Jennifer Hesse of Embedded Computing Design wanted to publish an article on the challenges of in-car infotainment, she approached both companies. The resulting interview, which features Andy Gryc of QNX and Paul Sykes of Freescale, runs the gamut — from mobile-device integration and multicore processors to graphical user interfaces and upgradeable architectures. You can read it here.


Is multicore a viable choice for medical devices?

Will even relatively simple devices eventually require multicore?
Multicore processors, and the software required to run on them, can increase the complexity of any embedded system. Some industries, notably networking, have long embraced this added complexity. The medical device market isn't one of them.

It's easy to see why, as this same complexity could potentially hinder or prolong the process of securing FDA approval for a medical device. Getting approval is already hard enough and long enough; any new technology that might further extend the ordeal is rightly looked upon with skepticism.

And yet, multicore is the way of the future for medical devices, save for relatively simple products. We've seen this trend in other markets, including automotive, and the medical device market will, in all likelihood, follow suit.

Should medical developers be concerned? Yes, but not too much. As my colleague Justin Moon argues, the techniques needed to validate multicore medical systems are, in fact, the same proven techniques that developers already apply to single-core systems. These techniques include testing, statistical analysis, fault tree analysis, and design verification. Meanwhile, the tools and OS technology needed to create, analyze, and optimize multicore-capable applications are, in many cases, quite mature.

And, of course, let's not forget a key benefit of multicore: significantly increased performance (through concurrency) without an attendant increase in power consumption and heat dissipation.

But enough from me. To get the argument straight from the horse's mouth, read Justin's article, Smart OS strategy makes multicore viable for medical devices, which EE Times published earlier this month.

Testing, statistical analysis, and design validation complement one another to validate a software system, whether it is running on one or multiple cores. (Click image to magnify.)


PlayBook videos from BlackBerry DevCon

Here's a passel of PlayBook videos from BlackBerry DevCon, held last week in San Francisco. Videos on new games and the über-cool Cascades platform bookend the set. In between you'll find interviews on three new apps: Box (cloud storage), Citrix Receiver (remote desktop access), and Evernote (note-taking).

Gaming on the PlayBook

Box (cloud storage app)

Citrix Receiver (remote desktop app)

Evernote for PlayBook (notetaking and archiving app)

And last but not least, a demo of the Cascades rich UI development platform:

For more videos from BlackBerry DevCon, including a replay of the general session, click here.



Quantum levitation: How cool is that?

I'm linking to this video for one simple reason: It is absolutely the coolest thing I've ever seen — pun fully intended. For an explanation of this phenomenon, known as the Meissner effect, check out the article on Scientific American.

Be sure to view the whole video; the more you see, the more your jaw will drop.


No SOUP for you? Using off-the-shelf software in medical devices

A three-part video that explores the role of SOUP in safety-critical products.

Would you put this in a medical device?
You can build a perfectly safe railway braking system if you never allow the train to move. And you can build a perfectly safe drug infusion system if you never allow it to infuse anything. But what's the use of that?

In the real world, designers of medical devices and other critical systems have to create products that are both safe and functional. They also have to satisfy time-to-market pressures: A safe system is no good to anyone if you take too long to build it.

To cut development time, manufacturers in many industries use commercial off-the-shelf (COTS) software in their products. But medical manufacturers have been reluctant to follow suit. They worry that COTS means SOUP — software of unknown provenance. And SOUP can make a mess of safety claims, not to mention approvals by the FDA and other agencies.

Or perhaps not. When it comes to SOUP, my colleague Chris Hobbs argues for a nuanced approach. He states that if manufacturers distinguish between opaque SOUP (which should be avoided) and clear SOUP (for which source code, fault histories, and long in-use histories are available), they will discover that COTS software is, in many cases, the optimal choice for safety-related medical devices.

Chris isn't a lone voice crying in the wilderness. He notes, for example, that IEC 62304, which is becoming the de facto standard for medical software life-cycle processes, assumes manufacturers will use SOUP.

Enough from me. Check out this three-part video in which Chris explores the ingredients that can make SOUP the right choice for a medical software design:

Part 1

Part 2

Part 3

Webinar alert
Yesterday, Chris and his colleague Justin Moon presented a webinar on this very topic. If you missed it, no worries: It should soon be available for download through the QNX webinar page.

New software release from QNX means less noise, less tuning for hands-free systems

This just in: QNX has released version 2.0 of its acoustic processing suite, a modular software library designed to maximize the quality and clarity of automotive hands-free systems.

The suite, used by 18 automakers on over 100 vehicle platforms, provides modules for both the receive side and the send side of hands-free calls. The modules include acoustic echo cancellation, noise reduction, wind blocking, dynamic parametric equalization, bandwidth extension, high frequency encoding, and many others. Together, they enable high-quality voice communication, even in a noisy automotive interior.

Highlights of version 2.0 include:

Enhanced noise reduction — Minimizes audio distortions and significantly improves call clarity. Can also reconstruct speech masked by low-frequency road and engine noise.

Automatic delay calculation and compensation — Eliminates almost all product tuning, enabling automakers to save significant deployment time and expense.

Off-axis noise rejection — Rejects sound not directly in front of a microphone or speaker, allowing dual-microphone solutions to home in on the person speaking for greater intelligibility.

To read the press release, click here. To learn more about the acoustic processing suite, visit the QNX website.

The QNX Aviage Acoustic Processing Suite can run on the general-purpose processor, saving the cost of a DSP.



Tridium greens up with QNX

The folks at Boeing's largest manufacturing facility (over 1 million square feet) faced a challenge. On the one hand, they wanted to reduce the high energy costs of lighting such a huge area. But at the same time, they needed a solution that would maintain a safe working environment and provide flexible, easy-to-configure lighting zones.

To address this challenge, Boeing turned to Tridium, a global supplier of energy management and device-to-enterprise integration systems. Tridium's solution not only slashed power consumption — up to 30% during peak periods and up to 50% on weekends — but also provided real-time alarming and allowed operators to program the system remotely, from any web browser.

Boeing is one of many customers to benefit from Tridium's solutions, and for more than a decade, many of those solutions have run on the QNX OS. Case in point: The Tridium Niagara Framework, a software platform used in factories, schools, universities, and office buildings to control a host of applications, including energy management, building automation, security, lighting control, and convergence retailing. More than 250,000 instances of the Niagara Framework operate in 50 countries.

So why am I mentioning all this? Because QNX and Tridium announced today that Tridium has optimized the latest version of its Niagara Framework, NiagaraAX 3.6, for the QNX Neutrino RTOS.

For details, read the press release. But in the meantime, check out this video, which describes what happens when you integrate various systems — HVAC, lighting, elevators, and so on — with the QNX-powered Niagara Framework:



Can the car, cloud, and smartphone be integrated more successfully?

Lots of people use the phrase "connected car," but what does it really mean? What, exactly, is connected, and what is it connected to?

In the past, my QNX colleagues referred to four types of automotive connectivity:

  • Connectivity to phones and other mobile devices — for handsfree calling and for accessing music and other media
  • Connectivity to the cloud — for accessing off-board navigation, voice recognition, and other services
  • Connectivity within the car — for sharing information and applications between systems, such as the instrument cluster and the head unit
  • Connectivity around the car — for providing the driver with feedback about the surrounding environment

Problem is, the distinction between the first two categories is becoming progressively softer. As my colleague Kerry Johnson argues, if your car connects to a smartphone that draws information from the cloud, can you really distinguish between mobile-device connectivity and cloud connectivity?

Mind you, making such distinctions is of secondary importance. The real issue is whether we can integrate the car, cloud, and smartphone much more successfully. Can we, using widely accessible technologies, harness their combined power to deliver a significantly better driving experience?

This is just one of the issues that Kerry addresses in his new blog post, which you can read here. If you're interested in the future of the connected car, check it out.

QNX OS aces performance benchmarks

If you've ever wondered why the QNX Neutrino OS is popular in applications that demand fast, predictable performance, have I got some benchmarks for you.

Recently, Dedicated Systems Experts, a professional services company that specializes in real-time systems, performed independent evaluations of the QNX Neutrino OS on three different platforms: ARM Cortex A8, Intel Atom, and Pentium.

After putting QNX Neutrino through its paces, they determined that it offers:

  • Excellent architecture
  • Very fast and predictable performance
  • User-friendly development environment
  • Above average documentation
  • Support for many embedded platforms

In fact, QNX scored 9 out of 10 on the dimensions of architecture, documentation, tools, and performance.

The performance benchmarks, which are especially rigorous, gauge whether an OS has what it takes for applications that have zero tolerance for missed deadlines. These tests include thread-switch latency and interrupt latency.

To download the reports, visit the Dedicated Systems website. Registration is required.