
Tuesday, June 30, 2015

DevCon5 recap: building apps for cars

Tina Jeffrey
Last week I had the pleasure of presenting at the DevCon5 HTML5 & Mobile App Developers Conference, held at New York University in the heart of NYC. The conference was abuzz with the latest and greatest web technologies for a variety of markets, including gaming, TV, enterprise, mobile, retail, and automotive.

The recurring theme throughout the event was that HTML5 is mainstream. Even though HTML5 still requires some ripening as a technology, it is definitely the burgeoning choice for app developers who wish to get their apps onto as many platforms as possible, quickly and cost effectively. And when a developer is confronted with a situation where HTML5 falls short (perhaps a feature that isn’t yet available), then hybrid is always an option. At the end of the day, user experience is king, and developers need to design and ship apps that offer a great experience and keep users engaged, regardless of the technology used.

Mainstream mobile device platforms all have web browsers that support HTML5, CSS3, and JavaScript. And there’s definitely no shortage of mobile web development frameworks for building consumer and enterprise apps that look and perform like native programs. Many of these frameworks were discussed at the conference, including jQuery Mobile, Dojo Mobile, Sencha Touch, and AngularJS. Terry Ryan of Adobe walked through building a PhoneGap app and discussed how the PhoneGap Build tool lets programmers upload their code to a cloud compiler and automatically generate apps for every supported platform — very cool.

My colleague Rich Balsewich, senior enterprise developer at BlackBerry, hit a home run with his presentation on the multiple paths to building apps. He walked us through developing an HTML5 app from end to end, and covered future features and platforms, including the automobile. A special shout-out to Rich for plugging my session “The Power of HTML5 in the Automobile” held later that afternoon.

My talk provided app developers with some insight into creating apps for the car, and discussed the success factors that will enable automakers to leverage mobile development — key to achieving a rich, personalized, connected user experience. Let me summarize with the salient points:

What’s needed — and what we’re doing about it

  • The automotive community wants apps, and HTML5 provides a common app platform for infotainment systems. Our response: we’ve implemented an HTML5 application framework in the QNX CAR Platform for Infotainment.
     
  • Automotive companies must leverage the broad mobile developer ecosystem to bring differentiated automotive apps and services to the car. Our response: we’re getting the word out and building a cloud-based app repository that will enable qualified app partners to get their apps in front of automotive companies. We plan to roll out this repository with the release of the QNX CAR Platform 2.1 in the fall.
     
  • The developer community needs standardized automotive APIs. Our response: we’re co-chairing the W3C Automotive and Web Platform Business Group, which has a mandate to create a draft specification of a vehicle data API. We’re also designing the QNX CAR Platform APIs to be Apache Cordova-compliant.
     
  • Automotive platform vendors must supply tools that enable app developers to build and test their apps. Our response: we plan to release the QNX CAR Platform 2.1 with open, accessible tooling that makes it easy for developers to test their apps in a software-only environment.

Thursday, June 25, 2015

QNX Acoustics for Voice — a new name and a new benchmark in acoustic processing


Tina Jeffrey
Earlier this month, QNX Software Systems officially released QNX Acoustics for Voice 3.0 — the company’s latest generation of acoustic processing software for automotive hands-free voice communications. The solution sets a new benchmark in hands-free quality and supports the rigorous requirements of smartphone connectivity specifications.

Designed as a complete software solution, the product includes both the QNX Acoustics for Voice signal-processing library and the QWALive tool for tuning and configuration.

The signal-processing library manages the flow of audio during a hands-free voice call. It defines two paths: the send path, which handles audio flowing from the microphones to the far end of the call, and the receive path, which handles audio flowing from the far end to the loudspeakers in the car:
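In code terms, the flow looks something like the following TypeScript sketch. It’s purely illustrative — the names are invented, and a simple gain stands in for the library’s actual echo cancellation and noise reduction stages:

```typescript
// Illustrative sketch of the send and receive paths (invented names).
type Frame = Float32Array;

// Send path: microphones -> (echo cancellation, noise reduction) -> far end.
function processSend(mic: Frame, sendGain = 1.0): Frame {
  return mic.map(s => s * sendGain);
}

// Receive path: far end -> (equalization, gain control) -> loudspeakers.
function processReceive(farEnd: Frame, receiveGain = 1.0): Frame {
  return farEnd.map(s => s * receiveGain);
}

// One processing tick: both paths run continuously on fixed-size frames
// (e.g., 160 samples = 10 ms at the 16 kHz wideband sample rate).
const toFarEnd = processSend(new Float32Array(160));
const toSpeakers = processReceive(new Float32Array(160));
```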





QWALive, used throughout development and pre-production phases, gives developers realtime control over all library parameters to accelerate tuning and diagnosis of audio issues:



A look under the hood
QNX Acoustics for Voice 3.0 builds on QNX Software Systems’ best-in-class acoustic echo cancellation and noise reduction algorithms, road-proven in tens of millions of cars, and offers breakthrough advancements over existing solutions.

Let me run through some of the innovative features that are already making waves (sorry, couldn’t resist) among automotive developers.

Perhaps the most significant innovation is our high efficiency technology. Why? Well, simply put, it saves up to 30% both in CPU load and in memory requirements for wideband (16 kHz sample rate for HD Voice) and Wideband Plus (24 kHz sample rate). This translates into the ability to do more processing on existing hardware, and with less memory. For instance, automakers can enable new smartphone connectivity capabilities on current hardware, without compromising performance:



Another feature that premieres with this release is intelligent voice optimization technology, designed to accelerate and increase the robustness of send-path tuning. This technology implements an automated frequency response correction model that dynamically adjusts the frequency response of the send path to compensate for variations in the acoustic path and vehicle cabin conditions.

Dynamic noise shaping, which is exclusive to QNX Acoustics for Voice, also debuts in this release. It enhances speech quality in the send path by reducing broadband noise from fans, defrost vents, and HVAC systems — a welcome feature, as broadband noise can be particularly difficult for hands-free systems to contend with.

Flexibility and portability — check and check
Like its predecessor (QNX Aviage Acoustic Processing 2.0), QNX Acoustics for Voice 3.0 continues to offer maximum flexibility to automakers. The modular software library comes with a comprehensive API, easing integration efforts into infotainment, telematics, and audio amplifier modules. Developers can choose from fixed- and floating-point versions that can be ported to a variety of operating systems and deployed on a wide range of processors or DSPs.

We’re excited about this release as it’s the most sophisticated acoustic voice processing solution available to date, and it allows automakers to build and hone systems for a variety of speech requirements, across all their vehicle platforms.

Check out the QNX Acoustics for Voice product page to learn more.

Tuesday, June 23, 2015

The next chapter in car infotainment: seamless mobile integration

Tina Jeffrey
According to a survey from Forrester Research, 50% of North Americans who plan to buy cars in the next 12 months say that technology options will play an important role in their purchasing decisions. The fact is, consumers want to remain connected all the time; they don’t want to park their digital lifestyle while driving. This raises the question: what’s an automaker to do?

Allow consumers to bring in content and apps on their mobile devices. We are becoming increasingly attached to our smartphones, and this is driving a trend towards mobile-centric car infotainment. The trend is of particular benefit to buyers of low-end vehicles, in which built-in features such as navigation and speech recognition can be cost prohibitive. A smartphone-driven head unit reduces costs by leveraging the existing connectivity and processing power of the mobile device; it also provides easy access to apps the consumer has already downloaded. In fact, integration between the mobile device and head unit offers numerous benefits: it helps the car keep pace with the consumer-device lifecycle, it endows the car with app store capabilities, and it lets the car connect to the cloud through the mobile device, eliminating the need for a built-in connection.

Using the phone's connectivity and processing power to deliver apps and software updates.
Design in-vehicle systems to be compatible with all leading smartphones. To satisfy this requirement, the vehicle must support both proprietary and standards-based connectivity protocols, using Bluetooth, USB, and Wi-Fi. Automakers will need to deliver platforms that include support for CarPlay, iPod Out (for older Apple devices), DLNA (for BlackBerry phones and other devices), MirrorLink, and Miracast, as well as the solution that the Open Automotive Alliance (OAA) promises to reveal later this year. By offering this widespread connectivity, automakers can avoid snubbing any significant portion of their prospective customer base.

Leverage and enable the mobile development community to build the apps consumers want. With companies like Apple and Google now in the fray, native brought-in apps will be a certainty, but automakers should continue to embrace HTML5 as an application platform, given its “write once, run anywhere” mantra. HTML5 remains the most widely used cross-platform application environment and it gives automakers access to the largest pool of developers worldwide. And, once the first W3C vehicle information API specification is ratified, HTML5 application developers will be able to access vehicle information and develop compelling, car-appropriate apps that become an integral part of our daily commute.

Top 10 challenges facing the ADAS industry

Tina Jeffrey
It didn’t take long. Just months after the release of the ISO 26262 automotive functional safety standard in 2011, the auto industry began to grasp its importance and adopt it in a big way. Safety certification is gaining traction in the industry as automakers introduce advanced driver assistance systems (ADAS), digital instrument clusters, heads-up displays, and other new technologies in their vehicles.

Governments around the world, in particular those of the United States and the European Union, are calling for the standardization of ADAS features. Meanwhile, consumers are demonstrating a readiness to adopt these systems to make their driving experience safer. In fact, vehicle safety rating systems are becoming a vital ‘go to’ information resource for new car buyers. Take, for example, the European New Car Assessment Programme Advanced (Euro NCAP Advanced). This organization publishes safety ratings on cars that employ technologies with scientifically proven safety benefits for drivers. The emergence of these ratings encourages automakers to exceed minimum statutory requirements for new cars.

Sizing the ADAS market
ABI Research claims that the global ADAS market, estimated at US$16.6 billion at the end of 2012, will grow to more than US$260 billion by the end of 2020, representing a CAGR of 41%. That growth means cars will ship with more of the following types of safety-certified systems:
[Image: examples of safety-certified ADAS systems, such as lane departure warning, road sign detection, and autonomous emergency braking]
The 10 challenges
So what are the challenges that ADAS suppliers face when bringing systems to market? Here, in my opinion, are the top 10:
  1. Safety must be embedded in the culture of every organization in the supply chain. ADAS suppliers can't treat safety as an afterthought that is tacked on at the end of development; rather, they must embed it into their development practices, processes, and corporate culture. To comply with ISO 26262, an ADAS supplier must establish procedures associated with safety standards, such as design guidelines, coding standards and reviews, and impact analysis procedures. It must also implement processes to assure accountability and traceability for decisions. These processes provide appropriate checks and balances and allow for safety and quality issues to be addressed as early as possible in the development cycle.
     
  2. ADAS systems are a collaborative effort. Most ADAS systems must integrate intellectual property from a number of technology partners; they are too complex to be developed in isolation by a single supplier. Also, in a safety-certified ADAS system, every component must be certified — from the underlying hardware (be it a multi-core processor, GPU, FPGA, or DSP) to the OS, middleware, algorithms, and application code. As for the application code, it must be certified to the appropriate automotive safety integrity level; the level for the ADAS applications listed above is typically ASIL D, the highest level of ISO 26262 certification.
     
  3. Systems may need to comply with multiple industry guidelines or specifications. Besides ISO 26262, ADAS systems may need to comply with additional criteria, as dictated by the tier one supplier or automaker. On the software side, these criteria may include AUTOSAR or MISRA. On the hardware side, they will include AEC-Q100 qualification, which involves reliability testing of auto-grade ICs at various temperature grades. ICs must function reliably over temperature ranges that span -40 degrees C to 150 degrees C, depending on the system.
     
  4. ADAS development costs are high. These systems are expensive to build. To achieve economies of scale, they must be targeted at mid- and low-end vehicle segments. Prices will then decline as volume grows and development costs are amortized, enabling more widespread adoption.
     
  5. The industry lacks interoperability specifications for radar, laser, and video data in the car network. For audio-video data alone, automakers use multiple data communication standards, including MOST (media-oriented systems transport), Ethernet AVB, and LVDS. As such, a system must support a multitude of interfaces to ensure adoption across this broad spectrum of in-vehicle networks, and it may need yet more interfaces to support radar or lidar data.
     
  6. The industry lacks standards for embedded vision-processing algorithms. Ask 5 different developers to develop a lane departure warning system and you’ll get 5 different solutions. Each solution will likely start with a MATLAB implementation that is ported to run on the selected hardware. If the developer is fortunate, the silicon will support image processing primitives (a library of functions designed for use with the hardware) to accelerate development. TI, for instance, has a set of image and video processing libraries (IMGLIB and VLIB) optimized for their silicon. These libraries serve as building blocks for embedded vision processing applications. For instance, IMGLIB has edge detection functions that could be used in a lane departure warning application — see the sketch after this list.
     
  7. Data acquisition and data processing for vision-based systems are high-bandwidth and computationally intensive. Vision-based ADAS systems present their own set of technical challenges. Different systems require different image sensors operating at different resolutions, frame rates, and lighting conditions. A system that performs high-speed forward-facing driver assistance functions such as road sign detection, lane departure warning, and autonomous emergency braking must support a higher frame rate and resolution than a rear-view camera that performs obstacle detection. (A rear-view camera typically operates at low speeds, and obstacles in the field of view are in close proximity to the vehicle.) Compared to the rear-view camera, an LDW, AEB, or RSD system must acquire and process more incoming data at a faster frame rate, before signaling the driver of an unintentional lane drift or warning the driver that the vehicle is exceeding the posted speed limit.
     
  8. ADAS cannot add to driver distraction. In-vehicle tasks and displays keep growing in complexity, and systems are becoming more integrated and presenting more data to the driver — a recipe for information overload. Information overload could result in high cognitive workload, reducing situational awareness and countering the efficacy of ADAS. Systems must therefore be easy to use, make use of the most appropriate modalities (visual, auditory, manual, haptic, etc.), and be designed to encourage driver adoption. Development teams must establish a clear specification of the driver-vehicle interface early in development to ensure user and system requirements are aligned.
     
  9. Environmental factors affect ADAS. ADAS systems must function under a variety of weather and lighting conditions. Ideally, vision-based systems should be smart enough to understand when they are operating in poor visibility scenarios such as heavy fog or snow, or when direct sunlight shines into the lens. If the system detects that the lens is occluded or that the lighting conditions are unfavorable, it can disable itself and warn the driver that it is non-operational. Another example is an ultrasonic parking sensor that becomes prone to false positives when encrusted with mud. Combining the results of different sensors or different sensor technologies (sensor fusion) can often provide a more effective solution than using a single technology in isolation.
     
  10. Testing and validating is an enormous undertaking. Arguably, testing and validation is the most challenging aspect of ADAS development, especially when it comes to vision systems. Prior to deploying a commercial vision system, an ADAS development team must amass hundreds if not thousands of hours of video clips in a regression test database, in an effort to test all scenarios. The ultimate goal is to achieve 100% accuracy and zero false positives under all possible conditions: traffic, weather, number of obstacles or pedestrians in the scene, etc. But how can the team be sure that the test database comprises all test cases? The reality is that they cannot — which is why suppliers spend years testing and validating systems, and performing extensive real-world field trials in various geographies, prior to commercial deployment.
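To give a flavor of the vision primitives mentioned in challenge 6, here’s a minimal Sobel edge-detection sketch in TypeScript. It’s illustrative only — a production lane departure warning system would run an optimized equivalent (such as the IMGLIB functions mentioned above) on the target silicon, followed by line fitting and tracking:

```typescript
// Minimal Sobel edge detector over a grayscale image — the kind of
// primitive a lane departure warning pipeline builds on (illustrative only).
function sobelMagnitude(
  pixels: Uint8ClampedArray, // grayscale image, row-major order
  width: number,
  height: number
): Float32Array {
  const out = new Float32Array(width * height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      // Neighbor lookup relative to the current pixel
      const p = (dx: number, dy: number) => pixels[(y + dy) * width + (x + dx)];
      // 3x3 Sobel kernels: horizontal and vertical intensity gradients
      const gx = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1)
               +  p( 1, -1) + 2 * p( 1, 0) + p( 1, 1);
      const gy = -p(-1, -1) - 2 * p(0, -1) - p(1, -1)
               +  p(-1,  1) + 2 * p(0,  1) + p(1,  1);
      out[y * width + x] = Math.hypot(gx, gy); // gradient magnitude
    }
  }
  return out;
}
// Strong, near-vertical edges in the lower half of the frame are candidate
// lane markings; a real system would follow this with thresholding, a line
// fit (e.g., a Hough transform), and temporal tracking before warning.
```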
     
There are many hurdles to bringing ADAS to mainstream vehicles, but clearly, they are surmountable. ADAS systems are commercially available today, consumer demand is high, and the path towards widespread adoption is paved. If consumer acceptance of ADAS provides any indication of societal acceptance of autonomous driving, we’re well on our way.

Sunday, June 21, 2015

Making your car a first-class citizen of the Web

Tina Jeffrey
Anyone who follows the latest goings-on of the World Wide Web Consortium (W3C) may have heard today’s news: the launch of the Automotive and Web Platform Business Group. We live in a connected world, and let’s face it, many of us expect access to our favorite applications and services while on the road. I see the formation of this W3C group as a huge step in the pursuit of marrying web technology and the automobile.

The business group will bring together developers, OEMs, and automotive technology vendors — many of whom, like QNX, were part of the Web and Automotive Workshop held last November. The group allows us to continue the discussion and to define a vehicle data API standard for enabling automotive services via the Web. And this is just the start of greater things to come: standards for OTA (over-the-air) software updates, driver safety, security, and seamless integration of smartphones and tablets.

As a member of the QNX automotive team, I second the enthusiasm my colleague Andy expressed in the announcement: we’re extremely excited to be part of this group and of the process of helping to define these standards for the industry.

Check out the W3C press release.



Tina is an automotive product marketing manager at QNX Software Systems

Friday, June 19, 2015

Automotive OTA software: not just building castles in the air

Tina Jeffrey
After attending Telematics Detroit earlier this month, I realized more than ever that M2M will become the key competitive differentiator for automakers. With M2M, automakers can stay connected with their vehicles and perhaps more importantly, vehicle owners, long after the cars have been driven off dealer lots. Over-the-air (OTA) technology provides true connectivity between automakers and their vehicles, making it possible to upgrade multiple systems, including electronic control unit (ECU) software, infotainment systems that provide navigation and smartphone connectivity, and an ever-increasing number of apps and services.

Taken together, the various systems in a vehicle contain up to 100 million lines of code — which makes the 6.5 million lines of code in the Boeing 787 Dreamliner seem like a drop in the proverbial bucket. Software in cars will only continue to grow in both amount and complexity, and the model automakers currently use to maintain and upgrade vehicle software isn’t scalable.

Vehicle owners want to keep current with apps, services, and vehicle system upgrades, without always having to visit the dealer. Already, vehicle owners update many infotainment applications by accepting software pushed over the air, just like they update applications on their smartphones. But this isn’t currently the case for ECUs, which require either a complete module replacement or module re-flashing at a dealership.

Pushing for updates
Automakers know that updates must be delivered to vehicle owners in a secure, seamless, and transparent fashion, similar to how OTA updates are delivered to mobile phones. Vehicle software updates must be even more reliable given they are much more critical.


BlackBerry’s OTA solution: Software Update Management for Automotive service

With OTA technology, automakers will use wireless networks to push software updates to vehicles automatically. The OTA service will need to notify end users of updates as they become available and allow the users to schedule the upgrade process at a convenient time. Large software updates that may take a while to download and install could be scheduled to run overnight while the car is parked in the garage, making use of the home Wi-Fi connection. Smaller updates could be delivered over a cellular connection through a tethered smartphone while on a road trip; in this latter scenario, an update could be interrupted — for instance, if the car travels into a tunnel or beyond network coverage.
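In code terms, the scheduling policy might look something like this — a purely illustrative TypeScript sketch in which the size threshold and every name are invented, not part of any actual OTA service:

```typescript
// Illustrative update-scheduling policy; all names and thresholds invented.
interface Update { id: string; sizeMB: number }
type Link = "home-wifi" | "cellular" | "none";

function scheduleUpdate(u: Update, link: Link, parkedOvernight: boolean): string {
  const LARGE_MB = 500; // invented threshold for "large" updates
  if (u.sizeMB >= LARGE_MB) {
    // Large payloads wait for the garage: overnight, over home Wi-Fi.
    return link === "home-wifi" && parkedOvernight
      ? `install ${u.id} overnight`
      : `defer ${u.id} until parked on home Wi-Fi`;
  }
  // Small updates can ride a tethered cellular connection, provided the
  // download can survive interruptions (tunnels, coverage gaps) and resume.
  return link !== "none"
    ? `download ${u.id} now (resumable)`
    : `queue ${u.id} until a link is available`;
}

console.log(scheduleUpdate({ id: "ecu-7.2.1", sizeMB: 850 }, "cellular", false));
// -> "defer ecu-7.2.1 until parked on home Wi-Fi"
```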

A win-win-win
Deployment of OTA software updates is a winning proposition for automakers, dealers, and vehicle owners. Automakers could manage the OTA software updates themselves, or extend the capability to their dealer networks. Either way, drivers will benefit from the convenience of up-to-date software loads, content, and apps with less frequent trips to the dealer. Dealership appointments would be limited to mechanical work, and could be scheduled automatically according to the vehicle’s diagnostic state, which could be transmitted over the air, routinely, to the dealer. With this sharing of diagnostic data, vehicle owners would know in advance of the appointment roughly how much they need to shell out for repairs, with less chance of a shocking repair-cost phone call.

OTA technology also provides vehicle owners and automakers with the ability to personalize the vehicle. Automaker-pushed content can be carefully controlled to target the driver’s needs, reflect the automaker's brand, and avoid distraction — unlike the unrestricted open content found on the internet, which could be unsafe for consumption while driving. Overall, OTA software updates will help automakers retain the customers they care about, engender brand loyalty, and provide the best possible customer experience.

Poised to lead
Thinking back to Telematics Detroit, if the number of demos my BlackBerry colleagues gave of their Software Update Management for Automotive service is any indication, OTA will transform the auto industry. According to a study from Gartner (“U.S. Consumer Vehicle ICT Study: Web-Based Features Continue to Rise” by Thilo Koslowski), 40 percent of all U.S. vehicle owners either “definitely want to get” or at least are “likely to get” the ability for wireless software updates in their next new vehicle — making it the third most demanded automotive-centric Web application and function.

BlackBerry is poised to lead in this space, given their expertise in infrastructure, security, and software management, and their close ties to automotive. They were leaders in building an OTA solution for the smartphone market, and are now again among the first entrants, enabling a solution that is network, hardware, firmware, OS, software, and application agnostic.

Thursday, June 11, 2015

Tackling fragmentation with a standard vehicle information API

Tina Jeffrey
Has it been a year already? In February 2015 QNX Software Systems became a contributing member of the W3C’s Automotive Web Platform Business Group, which is dedicated to accelerating the adoption of Web technologies in the auto industry. Though it took a while to rev up, the group is now in full gear and we’re making excellent progress towards our first goal of defining a vehicle information API for passenger vehicles.

The plan is to establish a standard API for accessing speed, RPM, tire pressure, and other vehicle data. The API will enable consistent app development across automakers and thereby reduce the fragmentation that affects in-vehicle infotainment systems. Developers will be able to use the API for apps running directly on the head unit as well as for apps running on mobile devices connected to the head unit.
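To make that concrete, here’s an illustrative TypeScript sketch of what app code against such an API could look like. All the names here are invented for illustration — the business group’s draft defines the real interface:

```typescript
// Invented names, in the spirit of a standard vehicle data API (the actual
// interface is defined by the business group's draft specification).
interface VehicleDatum<T> {
  get(): Promise<T>;                         // one-shot read
  subscribe(cb: (value: T) => void): number; // periodic updates
}

interface TirePressure { position: string; kPa: number }

interface Vehicle {
  vehicleSpeed: VehicleDatum<number>; // km/h
  engineSpeed: VehicleDatum<number>;  // RPM
  tirePressure: VehicleDatum<TirePressure>;
}

// The head unit (or a connected mobile device) would supply the real
// implementation; a stub stands in here so the sketch runs.
const stub = <T>(value: T): VehicleDatum<T> => ({
  get: async () => value,
  subscribe: cb => { cb(value); return 1; },
});

const vehicle: Vehicle = {
  vehicleSpeed: stub(62),
  engineSpeed: stub(2100),
  tirePressure: stub<TirePressure>({ position: "frontLeft", kPa: 172 }),
};

// An app could warn on low tire pressure the same way on any compliant system:
vehicle.tirePressure.subscribe(({ position, kPa }) => {
  if (kPa < 180) console.warn(`Low tire pressure (${position}): ${kPa} kPa`);
});
```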

Parallel processing
Let me walk you through our work to date. To get started, we examined API specifications from four member organizations: QNX, Webinos, Intel, and GENIVI. Next, we collected a superset of the attributes from each spec and categorized each attribute into one of several functional groups: vehicle information, running status, maintenance, personalization, driving safety, climate/environment, vision systems, parking, and electric vehicles. Then, we divvied up these functional groups among teams who worked in parallel: each team drafted an initial API for their allotted functional group before sharing it with the members at large.

Throughout this effort, we documented a set of API creation guidelines to capture the intent and reasoning behind our decisions. These guidelines cover details such as data representation, attribute value ranges and increments, attribute naming, and use of callback functions. The guidelines also capture the rules that govern how to grow or extend the APIs, if and when necessary.

Driving towards closure
In December the business group editors began to pull the initial contributions into a single draft proposal. This work is progressing and will culminate in a members’ face-to-face meeting in mid-March in Santa Clara, California, where we will review the draft proposal in its entirety and drive this first initiative towards closure.

I’m sure there will be lots more to talk about, including next potential areas of focus for the group. If you're interested in following our progress, here’s a link to the draft API.

Enjoy!

Wednesday, June 10, 2015

Building (sound) character into cars

Tina Jeffrey
Modern engines are overachievers when it comes to fuel efficiency — but they often score a C minus in the sound department. Introducing a solution that can make a subtle but effective difference.

Car engines don’t sound like they used to. Correction: They don’t sound as good as they used to. And for that, you can blame modern fuel-saving techniques, such as the practice of deactivating cylinders when engine load is light. Still, if you’re an automaker, delivering an optimal engine sound is critical to ensuring a satisfying user experience. To address this need, we’ve released QNX Acoustics for Engine Sound Enhancement (ESE), a complementary technology to our solution for active noise control.

The why
We first demonstrated our ESE technology at 2015 CES in the QNX technology concept car for acoustics.
Many people assume, erroneously, that ESE is about giving cars an outsized sonic personality — such as making a Smart ForTwo snarl like an SRT Hellcat. While that is certainly possible, most automakers will use ESE to augment engine sounds in subtle but effective ways that bolster the emotional connection between car and driver — just like engine sounds did in the past. It boils down to creating a compelling acoustic experience for drivers and passengers alike.

ESE isn’t new. Traditionally, automakers have used mechanical solutions that modify the design of the exhaust system or intake pipes to differentiate the sound of their vehicles. Today, automakers are shifting to software-based ESE, which costs less and does a better job at augmenting engine sounds that have been degraded by new, efficient engine designs. With QNX Acoustics for Engine Sound Enhancement, automakers can accurately preserve an existing engine sound for use in a new model, craft a unique sound to market a new brand, or offer distinct sounds associated with different transmission modes, such as sport or economy.

The how
QNX Acoustics for Engine Sound Enhancement is entirely software based. It comprises a runtime library that augments naturally transmitted engine sounds as well as a design tool that provides several advanced features for defining and tuning engine-sound profiles. The library runs on the infotainment system or on the audio system DSP and plays synthesized sound synchronized to the engine’s real-time data: RPM, speed, throttle position, transmission mode, etc.
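To see how the synthesized sound can track engine data, consider the simplest case: each engine “order” is a harmonic of crankshaft rotation, so its frequency follows RPM directly. Here’s an illustrative TypeScript sketch with invented names — the real library derives its output from a tuned engine-sound profile rather than fixed gains:

```typescript
// Illustrative additive synthesis of engine orders (invented names).
function synthesizeEngineFrame(
  rpm: number,
  orders: { order: number; gain: number }[],
  sampleRate = 48000,
  frameSize = 512,
  startSample = 0
): Float32Array {
  const out = new Float32Array(frameSize);
  const baseHz = rpm / 60; // crankshaft rotation frequency
  for (const { order, gain } of orders) {
    const f = baseHz * order; // e.g., order 2 dominates a 4-cylinder engine
    for (let i = 0; i < frameSize; i++) {
      const t = (startSample + i) / sampleRate;
      out[i] += gain * Math.sin(2 * Math.PI * f * t);
    }
  }
  return out;
}

// At 3000 RPM the base frequency is 50 Hz: the 2nd order lands at 100 Hz,
// the 4th at 200 Hz. As RPM changes frame to frame, the tones glide with it.
const frame = synthesizeEngineFrame(3000, [
  { order: 2, gain: 0.5 },
  { order: 4, gain: 0.25 },
]);
```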




The ESE designer tool enables sound designers to create, refashion, and audition sounds directly on their desktops by graphically defining the mapping between a synthesized engine-sound profile and real-time engine parameters. The tool supports both granular and additive synthesis, along with a variety of digital signal processing techniques to configure the audio path, including gain, filter, and static equalization control.



The value
QNX Acoustics for Engine Sound Enhancement offers automakers numerous benefits in the design of sound experiences that best reflect their brand:

  • Ability to design consistent powertrain sounds across the full engine operating range
     
  • Small footprint runtime library that can be ported to virtually any DSP or CPU running Linux or the QNX OS, making it easy to customize all vehicle models and to leverage work done in existing models
     
  • Tight integration with other QNX acoustics middleware libraries, including QNX Acoustics for Active Noise Control, enabling automakers to holistically shape their interior vehicle soundscape
     
  • Dedicated acoustic engineers who can support development and pre-production activities, including porting to customer-specific hardware, system audio path verification, and platform and vehicle acoustic tuning
     
If you’re with an automaker or Tier One and would like to discuss how QNX Acoustics for ESE can address your project requirements, I invite you to contact us at anc@qnx.com.

In the meantime, learn more about this solution on the QNX website.

Tuesday, June 9, 2015

Specs for Cars?

Tina Jeffrey
As Google Glass, the latest in experimental computer wearables, starts to make its way into the hands of select users, a multitude of use cases is popping up. For instance, a WIRED article recently explored the notion of your car being a ‘killer app’ for Google Glass. Now, you may not want to think of your car as a killer app, but let’s contemplate this use case for a moment.

Drivers wearing Glass could pair their new specs to their phone and instantly have a personal heads-up display (HUD) that overlays virtual road signs and GPS information over the scene in front of them. For instance:


Source: Google

Glass also understands voice commands and could take dictation for an email, display turn-by-turn directions, or set up and display point-of-interest destination data based on a simple voice command such as “Find the nearest Starbucks”.

This is all very cool — but does it bring anything new to the driving experience that isn’t already available? Not really. Car makers have already deployed voice-enabled systems to interface with navigation and location-based services; these services either run locally or are accessed through a brought-in mobile device and displayed on the head unit in a safe manner. ADAS algorithms, meanwhile, perform real-time traffic sign detection and recognition to display speed limits on the vehicle’s HUD. All this technology exists today and works quite well.

Catch me if you can
Another aspect to consider is the regulatory uncertainty created by drivers wearing these types of devices. Police can spot a driver with their head down texting on a cellphone or watching a movie on a DVD player. But detecting a driver performing these same activities while wearing a head-mounted display — not so easy. There’s no way of knowing whether the activities a driver is engaged in are driving related or an outright distraction. Unlike a HUD specified by the automaker, which is designed to coordinate and synchronize displayed data based on vehicle conditions and an assessment of cognitive load, a head-mounted display like Glass could give a driver free rein to engage in any activity at any time. This flies in the face of driver distraction guidelines being promulgated by government agencies.

Don’t get me wrong. Glass is cool technology, and I see viable applications for it. For instance, as an alternative to helmet cams when filming a first-person perspective of a ski run down a mountain, or in taking augmented reality gaming to the next level. (You can see many other applications on the Glass site.) But Glass is a personal display that operates as an extension of your cellphone, not as a replacement for a car’s HUD. Cars need well-integrated, usable systems that can safely enhance the driving experience. Because of this, I don’t believe that devices like Glass, as they are currently conceived, will garner a spot in our cars.

Monday, June 8, 2015

A sound approach to creating a quieter ride

Tina Jeffrey
Add sound to reduce noise levels inside the car. Yup, you read that right. And while it may seem counterintuitive, it’s precisely what automakers are doing to provide a better in-car experience. Let’s be clear: I’m not talking about playing a video of SpongeBob SquarePants on the rear-seat entertainment system to keep noisy kids quiet — although I can personally attest to the effectiveness of this method. Rather, I’m referring to deliberately synthesized sound played over a vehicle’s speakers to cancel unwanted low-frequency engine tones in the passenger compartment, yielding a quieter and more pleasant ride.

So why is this even needed? It comes down to fuel economy. Automakers are continually looking at ways to reduce fuel consumption through techniques such as variable cylinder management (reducing the number of cylinders in operation under light engine load) and operating the engine at lower RPM. Some automakers are even cutting back on passive damping materials to decrease vehicle weight. These approaches do indeed reduce consumption, but they also result in more engine noise permeating the vehicle cabin, creating a noisier ride for occupants. To address the problem, noise, vibration, and harshness (NVH) engineers — the OEM engineers responsible for characterizing and improving sound quality in vehicles — are using innovative sound technologies such as active noise control (ANC).

Automotive ANC technology is analogous to the technology used in noise-cancelling headphones but is more difficult to implement, as developers must optimize the system based on the unique acoustic characteristics of the cabin interior. An ANC system must be able to function alongside a variety of other audio processing tasks such as audio playback, voice recognition, and hands-free communication.


The QNX Acoustics for Active Noise Control solution uses realtime engine data and sampled microphone data from the cabin to construct the “anti-noise” signal played over the car speakers.

So how does ANC work?
According to the principle of superposition, sound waves will travel and reflect off glass, the dash, and other surfaces inside the car; interfere with each other; and yield a resultant wave of greater or lower amplitude than the original wave. The result varies according to where in the passenger compartment the signal is measured. At some locations, the waves will “add” (constructive interference); at other locations, the waves will “subtract” or cancel each other (destructive interference). Systems must be tuned and calibrated to ensure optimal performance at driver and passenger listening positions (aka “sweet spots”).

To reduce offending low-frequency engine tones (typically <150 Hz), an ANC system typically requires real-time engine data (including RPM) in addition to signals from the cabin microphones. The ANC system then synthesizes “anti-noise” signals — matched in amplitude but inverted in phase relative to the offending engine tones — and emits them via the car’s speakers. The net effect is a reduction of the offending tones.


According to the superposition principle of sound waves, a noise signal and an anti-noise signal will cancel each other if the signals are 180 degrees out of phase. Image adapted from Wikipedia.
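Reduced to a few lines of illustrative TypeScript, the idea looks like this — an idealized case where the offending tone is known exactly, so the anti-noise is a perfect inversion:

```typescript
// Idealized cancellation of a single engine tone (illustrative only).
const sampleRate = 48000;
const toneHz = 100; // a low-frequency engine order to cancel

const n = 480; // 10 ms of samples
const residual = new Float32Array(n);
for (let i = 0; i < n; i++) {
  const t = i / sampleRate;
  const noise = Math.sin(2 * Math.PI * toneHz * t);
  const antiNoise = -noise; // 180 degrees out of phase
  residual[i] = noise + antiNoise; // superposition -> 0
}
// In a real cabin, amplitude and phase vary with position and conditions,
// so the anti-noise must be adapted continuously from engine RPM and
// microphone feedback rather than computed from a known tone.
```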

Achieving optimal performance for these in-vehicle systems is complex, and here’s why. First off, there are multiple sources of sound inside a car — some desirable and some not. These include the infotainment system, conversation between vehicle occupants, the engine, road, wind, and structural vibrations from air intake valves or the exhaust. Also, every car interior has unique acoustic characteristics. The location and position of seats; the position, number, and type of speakers and microphones; and the materials used inside the cabin all play a role in how an ANC system performs.

To be truly effective, an ANC solution must adapt quickly to changes in vehicle cabin acoustics that result from changes in acceleration and deceleration, windows opening and closing, changes in passenger seat positions, and temperature changes. The solution must also be robust; it shouldn’t become unstable or degrade the audio quality inside the cabin should, for example, a microphone stop working.

The solution for every vehicle model must be calibrated and tuned to achieve optimal performance. Besides the vehicle model, engine noise characteristics, and number and arrangement of speakers and microphones, the embedded platform being used also plays a role when tuning the system. System tuning can, with conventional solutions, take months to reach optimal performance levels. Consequently, solutions that ease and accelerate the tuning process, and that integrate seamlessly into a customer’s application, are highly desirable.

Automotive ANC solutions — then and now
Most existing ANC systems for engine noise require a dedicated hardware control module. But automakers are beginning to realize that it’s more cost effective to integrate ANC into existing vehicle hardware systems, such as the infotainment head unit. This level of integration facilitates cooperation between different audio processing tasks, such as managing a hands-free call and reducing noise in the cabin.

Earlier today, QNX announced the availability of a brand new software product that targets ANC for engine tone reduction in passenger vehicles. It’s a flexible, software-based solution that can be ported to floating or fixed-point DSPs or application processors, including ARM, SHARC, and x86, and it supports systems with or without an OS. A host application that executes on the vehicle’s head unit or audio amplifier manages ANC through the library’s API calls. As a result, the host application can fully integrate ANC functionality with its other audio tasks and control the entire acoustic processing chain.
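As a sketch of what that host-side integration could look like — with invented names and stubbed-out processing, emphatically not the actual product API:

```typescript
// Hypothetical host-application integration (invented names, stub logic).
interface AncHandle { speakers: number }

function ancCreate(mics: number, speakers: number): AncHandle {
  return { speakers }; // a real library would allocate adaptive filter state
}

// Stub: a real library would adapt its filters from RPM and microphone
// feedback, returning one anti-noise frame per speaker channel.
function ancProcess(
  h: AncHandle,
  rpm: number,
  micFrames: Float32Array[],
  frameSize: number
): Float32Array[] {
  return Array.from({ length: h.speakers }, () => new Float32Array(frameSize));
}

const anc = ancCreate(2, 4); // e.g., 2 cabin mics, 4 speaker channels
const frameSize = 256;
const mics = [new Float32Array(frameSize), new Float32Array(frameSize)];
const antiNoise = ancProcess(anc, 2200, mics, frameSize);
// The host mixes antiNoise into its speaker outputs alongside media audio,
// keeping ANC fully integrated with the rest of the acoustic chain.
```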

Eliminating BOM costs
The upshot is that the QNX ANC solution can match or surpass the performance of a dedicated hardware module — and we have the benchmarks to show it. Let me leave you with some of the highlights of the QNX Acoustics for Active Noise Control solution:

  • Significantly better performance than dedicated hardware solutions — The QNX solution can provide up to 9dB of reduction at the driver’s head position, compared to 5dB for a comparable hardware solution in the same vehicle under the same conditions.
     
  • Significant BOM cost savings — Eliminates the cost of a dedicated hardware module.
     
  • Flexible and configurable — Can be integrated into the application processor or DSP of an existing infotainment system or audio amplifier, and can run on systems with or without an OS, giving automakers implementation choices. Also supports configurations of up to 6 microphones and 6 speaker channels.
     
  • Faster time to market — Speeds development by shortening tuning efforts from many months to weeks. Also, a specialized team of QNX acoustic engineers can provide software support, consulting, calibration, and system tuning.

For the full skinny on QNX Acoustics for Active Noise Control, visit the QNX website.

Sunday, June 7, 2015

Now with ADAS: The revamped QNX reference vehicle

Tina Jeffrey
Since 2013, our Jeep has showcased what QNX technology can do out of the box. We decided it was time to up the ante...

I walked into the QNX garage a few weeks ago and did a double take. The QNX reference vehicle, a modified Jeep Wrangler, had undergone a major overhaul both inside and out — and just in time for 2015 CES.

Before I get into the how and why of the Jeep’s metamorphosis, here’s a glimpse of its newly refreshed exterior. Orange is the new gray!



The Jeep debuted in June 2013 at Telematics Detroit. Its purpose: to show how customers can use off-the-shelf QNX products, like the QNX CAR Platform for Infotainment and QNX OS, to build a wide range of custom infotainment systems and instrument clusters, using a single code base.

From day one, the Jeep has been a real workhorse, making appearances at numerous events to showcase the latest HMI, navigation, speech recognition, multimedia, and handsfree acoustics technologies, not to mention embedded apps for parking, internet radio streaming, weather, and smartphone connectivity. The Jeep has performed dependably time and time again, and now, in an era where automotive safety is top of mind, we’ve decided to up the ante and add leading-edge ADAS technology built on the QNX OS.

After all, what sets the QNX OS apart is its proven track record in safety-certified systems across market segments — industrial, medical, and automotive. In fact, the QNX OS for Automotive Safety is certified to the highest level of automotive functional safety: ISO 26262, ASIL D. Using a pre-certified OS component is key to the overall integrity of an automotive system and makes system certification much easier.

The ultimate (virtual) driving experience
What better way to showcase ADAS in the Jeep than with a virtual drive? At CES, a 12-foot video screen in front of the Jeep plays a pre-recorded driving scene, while the onboard ADAS system analyzes the scene to detect lane markers, speed signs, and preceding vehicles, and to warn of unintentional lane departures, excessive speed, and imminent crashes with vehicles on the road ahead. Onboard computer vision algorithms from Itseez process the image frames in real time to perform these functions simultaneously.

Here’s a scene from the virtual drive, in which the ADAS system is tracking lane markings and has detected a speed-limit sign:



If the vehicle begins to drift outside a lane, the steering wheel provides haptic feedback and the cluster displays a warning:



The ADAS system includes Elektrobit EB Assist eHorizon, which uses map data with curve-speed information to provide warnings and recommendations, such as reducing your speed to navigate an upcoming curve:



The Jeep also has a LiDAR system from Phantom Intelligence (formerly Aerostar) to detect obstacles on the road ahead. The cluster displays warnings from this system, as well as warnings from the vision-based collision-detection feature. For example:



POSTSCRIPT:
Here’s a short video of the virtual drive, taken at CES by Brandon Lewis of Embedded Computing Design, in which you can see curve-speed warnings and lane-departure warnings:



Fast-boot camera
Rounding out the ADAS features is a rear-view camera demo that can cold boot in 0.8 seconds on a Texas Instruments Jacinto 6 processor. As you may recall, NHTSA has mandated that, by May 2018, most new vehicles must have rear-view technology that can display a 10-by-20 foot area directly behind the vehicle; moreover, the display must appear no more than 2 seconds after the driver shifts the vehicle into reverse. Backup camera and other fast-boot requirements, such as time-to-last-mode audio, time-to-HMI-visible, and time-to-fully-responsive HMI, are critically important to automakers. Be sure to check out the demo — but don’t blink or you’ll miss it!

Full-featured infotainment
The head unit includes a full-featured infotainment system based on the QNX CAR Platform for Infotainment and provides information such as weather, current song, and turn-by-turn directions to the instrument cluster, where they’re easier for the driver to see.



Infotainment features include:

Qt-based HMI — Can integrate other HMI technologies, including Elektrobit EB Guide and Crank Storyboard.

Natural language processing (NLP) — Uses Nuance’s Vocon Hybrid solution in concert with the QNX NLP technology for natural interaction with infotainment functions. For instance, if you ask “Will I need a jacket later today?”, the Weather Network app will launch and provide the forecast.

EB street director — Provides embedded navigation with a 3D map engine; the map is synched up with the virtual drive during the demo.

QNX CAR Platform multimedia engine — An automotive-hardened solution that can handle:
  • audio management for seamless transitions between all audio sources
  • media detection and browsing of connected devices
  • background synching of music for instant media playback — without the need for the synch to be completed

Support for all smartphone connectivity options — DLNA, MTP, MirrorLink, Bluetooth, USB, Wi-Fi, etc.

On-board application framework — Supports Qt, HTML5, APK (for Android apps), and native OpenGL ES apps. Apps include iHeart, Parkopedia, Pandora, Slacker, and Weather Network, as well as a Settings app for phone pairing, over-the-air software updates, and Wi-Fi hotspot setup.

So if you’re in the North Hall at CES this week, be sure to take a virtual ride in the QNX reference vehicle in Booth 2231. Beneath the fresh paint job, it’s the same workhorse it has always been, but now with new ADAS tech automakers are thirsting for.

Thursday, June 4, 2015

Crisper, clearer in-car communication — Roger that

Tina Jeffrey
Over the years, Telematics Detroit has become a premier venue for showing off advancements in automotive infotainment, telematics, apps, cloud connectivity, silicon, and more. If the breadth of QNX technology being demonstrated at the show this week is any indication, the event won’t disappoint. Among the highlights is our next-generation acoustics processing middleware — QNX Acoustics for Voice 3.0 — which has been architected to deliver the highest-quality audio for hands-free and speech recognition systems, enabling the ultimate acoustics experience in the car.

What is QNX Acoustics for Voice?
QNX Acoustics for Voice 3.0 is the successor to the QNX Aviage Acoustics Processing Suite 2.0. The new product includes a set of libraries — standard and premium — that offer automakers ultimate flexibility for voice processing in the harsh audio environment of the car.

The standard library provides a full-featured solution for implementing narrowband and wideband hands-free communications, operating at 8 kHz and 16 kHz sample rates, respectively. It also includes innovative new features for performing echo cancellation, noise reduction, adaptive equalization, and automatic gain control. Perhaps the most valuable feature, especially for systems constrained by limited CPU cycles, is the high efficiency mode, which can process wideband and higher-bandwidth speech with substantially less CPU load. The net result: more processing headroom for other tasks.

The premium library includes all the standard library functionality, plus support for Wideband Plus, which expands the frequency range of transmitted speech to 50 Hz – 11 kHz, at a 24 kHz sample rate. The introduction of Wideband Plus fulfills the higher voice quality and low noise requirements demanded by the latest smartphone connectivity protocols for telephony, VoIP services, and speech recognition. Let me recap with a table:

Supported capabilities | Standard library | Premium library
Narrowband audio: 300 – 3400 Hz (8 kHz sample rate) | Yes | Yes
Wideband audio: 50 – 7000 Hz (16 kHz sample rate) | Yes | Yes
Wideband Plus audio: 50 Hz – 11 kHz (24 kHz sample rate) | No | Yes
High efficiency mode | Yes (Wideband only) | Yes
VoIP requirements for new smartphone connectivity protocols | No | Yes
Cloud-based speech recognition requirements for new smartphone connectivity protocols | No | Yes



Why is high-quality speech important in the car?

Simply put, it improves the user experience and can benefit passenger safety. Also, new smartphone connectivity protocols require it. Let’s examine two use cases: hands-free voice calling, and speech recognition.

In a voice call, processing a larger bandwidth of speech and eliminating echo and noise from various sources, including wind, road, vents, fans, and tires, dramatically increases speech intelligibility — and the more intelligible the speech, the more natural the flow of conversation. Also, clearer speech has less impact on the driver’s cognitive load, enabling the driver to pay more attention to the task at hand: driving.

Speech recognition systems are becoming a primary way to manage apps and services in the car. Voice commands can initiate phone calls, select media for playback, search for points of interest (POI), and choose a destination.

Technological advancements in pre-processing voice input to remove noise and disturbances help speech recognizers detect commands more reliably, thereby achieving higher recognition accuracy. Early speech recognition systems, by comparison, were unintuitive and performed poorly. Drivers became so frustrated that they stopped using these systems and resorted to picking up their smartphones, completely eliminating the safety benefits of speech recognition.

QNX Acoustics for Voice 3.0 is a comprehensive automotive voice solution that includes industry-leading echo cancellation, noise reduction, adaptive equalization and automatic gain control.

If you happen to be at Telematics Update in Novi, Michigan this week, be sure to drop by our booth to sit in our latest concept car — a specially modified Mercedes-Benz CLA45 AMG — and experience our acoustics technologies first hand.