BorgWarner To Display Advanced eMobility Solutions At Bharat Mobility Global Expo 2025
- By MT Bureau
- January 13, 2025
BorgWarner is set to showcase its latest developments in electrified drivetrain solutions at the Bharat Mobility Global Expo 2025 in New Delhi.
At Booth M5 in Hall H2 at the Components Show in Yashobhoomi, Dwarka, New Delhi, BorgWarner will present its LFP battery systems, which are built on a robust, modular architecture using state-of-the-art blade cell technology from FinDreams Battery. The range of pack sizes suits a variety of off-road applications and all-electric commercial vehicles, including trucks and buses. Equipped with the company’s latest in-house, future-proof technology and software, these batteries help increase vehicle range, dependability and safety performance. Additionally, BorgWarner will showcase its advanced turbocharging solutions, eMotors, high-voltage coolant heaters, integrated drive modules and next-generation inverters, highlighting an extensive portfolio that serves advanced combustion, hybrid and electrification requirements.
Dr Stefan Demmerle, Vice President, BorgWarner Inc and President and General Manager, PowerDrive Systems, said, “Our participation in the Bharat Mobility Global Expo marks a significant opportunity to showcase our pioneering eMobility technologies. With our high-energy LFP batteries and advanced power electronics as well as thermal management solutions, we are committed to supporting India’s rapid move towards cleaner, more efficient mobility solutions. Our global expertise and local presence enable us to be a key partner in this transformation.”
NXP And Nvidia Collaborate On Integrated Robotics Solutions For Physical AI
- By MT Bureau
- March 17, 2026
NXP Semiconductors has announced a series of robotics solutions designed for real-time data processing, sensor fusion and motor control. Developed in collaboration with Nvidia, these ready-to-deploy systems implement the Nvidia Holoscan Sensor Bridge with NXP’s system-on-chip (SoC) technology to reduce component count, power consumption and costs in robotic development.
The solutions focus on Physical AI, which requires low-latency data transport to synchronise motion and sensor data. By integrating the Holoscan Sensor Bridge into NXP’s software, developers can establish a direct transport route between a robot’s body and its central processing unit.
The architecture incorporates several NXP technologies:
- i.MX 95 Applications Processor: A machine vision solution designed to deliver high-bandwidth data to the robot brain.
- i.MX RT1180 Crossover MCUs: A motor control solution based on a kinematic chain.
- S32J TSN Switch: Aggregates motor control data and provides direct connectivity to the brain using Time-Sensitive Networking (TSN) and EtherCAT protocols.
- Asymmetric Data Transport: Technology acquired through Aviva Links to manage high-throughput data across the robot body.
The unified architecture is designed to support humanoid form factors, which require complex motor synchronisation and real-time perception. NXP’s automotive-grade networking and functional safety expertise are used to ensure the reliability of these systems in physical environments.
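The article's emphasis on low-latency transport and synchronisation comes down to pairing sensor and motor data on a shared clock, which is what Time-Sensitive Networking enables. The following is a minimal illustrative sketch, not NXP or Nvidia code: it pairs timestamped camera frames with the nearest joint-state sample and rejects pairs whose clocks disagree by more than a skew budget. All names (`Sample`, `fuse`, `max_skew_us`) are assumptions for illustration.

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class Sample:
    t_us: int     # timestamp in microseconds on a shared (TSN-style) clock
    value: float  # sensor reading or joint state


def nearest(samples, t_us):
    """Return the sample (from a time-sorted list) closest to t_us."""
    ts = [s.t_us for s in samples]
    i = bisect_left(ts, t_us)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s.t_us - t_us))


def fuse(camera, joints, max_skew_us=500):
    """Pair each camera frame with the closest joint-state sample,
    discarding pairs whose timestamps differ by more than max_skew_us."""
    fused = []
    for frame in camera:
        j = nearest(joints, frame.t_us)
        if abs(j.t_us - frame.t_us) <= max_skew_us:
            fused.append((frame, j))
    return fused
```

The skew budget is the crux: without a common clock across the robot body, perception and motor state cannot be reliably paired, which is why the architecture routes both through a synchronised network.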
Charles Dachs, Executive Vice-President and General Manager, Secure Connected Edge at NXP Semiconductors, said, “Physical AI is redefining what machines can do in the real world, and humanoid robots represent the most complex expression of that revolution. By combining NXP’s deep expertise in edge processing, secure networking, functional safety and real-time control with Nvidia robotics platforms, we are greatly simplifying physical AI development, enabling seamless connectivity between the physical AI edge and the central brain. This is just the beginning of what NXP will deliver to accelerate the ecosystem for physical AI.”
Deepu Talla, Vice-President of Robotics and Edge AI, Nvidia, commented, “The development of autonomous machines requires a high-performance computing architecture that can synchronize complex motor controls with real-time perception. By integrating Nvidia Holoscan Sensor Bridge into its edge portfolio, NXP is providing developers with a scalable foundation to accelerate the deployment of physical AI.”
TIER IV Launches Data-Centric AI Software Stacks For Level 4 Autonomous Driving
- By MT Bureau
- March 17, 2026
Tokyo-headquartered deep-tech company TIER IV has announced that it has developed new software stacks for Level 4 autonomous driving powered by data-centric artificial intelligence. The software is available via Autoware, an open-source platform, and is designed to be hardware-agnostic, supporting various system-on-chip (SoC) and sensor configurations.
The software stacks are built on an end-to-end (E2E) architecture and offer two primary configurations to allow adaptability across diverse driving environments:
- Hybrid System: Utilises perception and planning AI. It employs diffusion models to capture temporal changes in surroundings and generates trajectories by combining machine learning models with environment perception.
- E2E System: Integrates perception, planning, and control into a single learning process. It uses world models to treat surroundings and driving status as vector representations, creating a pipeline from recognition to vehicle operation.
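The structural difference between the two configurations can be sketched in a few lines. This is an illustrative toy, not TIER IV's software: the hybrid path keeps perception and planning as separate stages with an explicit intermediate scene representation, while the E2E path is a single learned map from raw input to a trajectory step. All function names and the stand-in weight matrix `w` are assumptions.

```python
import numpy as np


def perceive(sensor_frame):
    """Stand-in perception: rows with confidence > 0.5 become obstacles."""
    return {"obstacles": sensor_frame[sensor_frame[:, 2] > 0.5][:, :2]}


def plan(scene, goal):
    """Stand-in planner: step toward the goal, veering away from obstacles."""
    step = (goal / np.linalg.norm(goal)) * 0.5
    for obs in scene["obstacles"]:
        if np.linalg.norm(obs - step) < 1.0:
            step = step + np.array([0.0, 0.5])  # simple evasive offset
    return step


def hybrid_stack(sensor_frame, goal):
    """Hybrid: explicit perception output feeds a separate planning stage."""
    return plan(perceive(sensor_frame), goal)


def e2e_stack(sensor_frame, goal, w):
    """E2E: one learned map from raw input (plus goal) straight to a
    trajectory step; w stands in for trained model weights."""
    features = np.concatenate([sensor_frame.ravel(), goal])
    return w @ features
```

The hybrid form exposes an inspectable scene between stages; the E2E form trades that interpretability for a single optimisable pipeline, which matches the trade-off the two configurations represent.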
Automakers can use TIER IV’s machine learning operations (MLOps) platform to iterate AI models. The platform manages data-quality validation, anonymisation and tagging, while generating synthetic and real-world datasets for system evaluation.
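The curation steps named above (quality validation, anonymisation, tagging) can be pictured as a simple record pipeline. This is a generic sketch of such a pipeline, not TIER IV's MLOps platform; the field names and the salted-hash anonymisation scheme are illustrative assumptions.

```python
import hashlib


def validate(record):
    """Data-quality gate: required fields present and physically plausible."""
    speed = record.get("speed_kmh")
    return speed is not None and 0 <= speed <= 300


def anonymise(record, salt="fleet-salt"):
    """Replace the vehicle identifier with a salted hash so datasets can be
    shared without exposing the source vehicle."""
    out = dict(record)
    digest = hashlib.sha256((salt + record["vehicle_id"]).encode()).hexdigest()
    out["vehicle_id"] = digest[:12]
    return out


def tag(record):
    """Attach coarse scenario tags used to slice evaluation datasets."""
    tags = []
    if record["speed_kmh"] < 5:
        tags.append("stationary")
    if record.get("rain"):
        tags.append("wet")
    out = dict(record)
    out["tags"] = tags
    return out


def curate(records):
    """Validated records flow through anonymisation and tagging."""
    return [tag(anonymise(r)) for r in records if validate(r)]
```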

TIER IV has commenced 60-minute test runs in three global hubs to validate the technology under distinct traffic conditions:
- Tokyo: Collaborating with the University of Tokyo using a Toyota JPN TAXI to evaluate urban hub-to-hub travel.
- Pittsburgh: Partnering with Carnegie Mellon University using a Hyundai IONIQ 5 for robotaxi tests between Pittsburgh International Airport and the university.
- Munich: Working with the Technical University of Munich using a Volkswagen T7 Multivan for safety evaluations in European urban scenarios.
While safety drivers remain on board to comply with local regulations, no manual intervention is expected during normal operation.
Shinpei Kato, Founder and CEO, TIER IV, said, “To achieve Level 4+ autonomy, we need technology that evolves autonomously alongside the environments it serves. Our new data-centric AI models and collaborative MLOps platform provide a common language and a shared foundation for the entire industry. By working with research institutions, industry leaders and the development community to advance autonomous driving technology through Autoware, we are creating an open, transparent environment that fosters continuous, collective innovation for the benefit of society.”
Yang Zhang, Chairman, Autoware Foundation’s Board of Directors, said, “Autoware serves as the global foundation where researchers, corporations and developers collaborate to advance autonomous driving software. Our collaboration with TIER IV strengthens the international framework for validating and refining E2E autonomous driving through real-world deployment. By testing across three continents, we are driving standards-based innovation and expanding an open ecosystem that lowers the barrier for a diverse range of partners to join and contribute.”
Yutaka Matsuo, Professor at the University of Tokyo, added, “The release of these software stacks and MLOps platform is a vital step toward deploying advanced AI models in industrial applications. By accumulating data from Japan’s distinctive traffic environments through our Tokyo testing and contributing those insights back to Autoware, we aim to further bridge the gap between academic research and real-world deployment.”
Marelli’s Zone Control Unit Named Engineering Product of the Year
- By MT Bureau
- March 13, 2026
Tier 1 automotive supplier Marelli has received the ‘Commendable’ honour in the ‘Engineering Product of the Year’ category at the Digital Engineering Awards 2026. The ceremony, hosted by L&T Technology Services in association with ISG and CNBC-TV18, was held in Boston, USA, on 12 March 2026.
The award recognises the role of Marelli’s Zone Control Unit (ZCU) in the transition towards software-defined vehicles.
The ZCU is designed to replace traditional domain-based architectures with a platform that delivers cross-domain control through a single Electronic Control Unit (ECU). This system simplifies vehicle electrical and electronic (E/E) layouts and enables communication across vehicle zones. By reducing the number of dedicated ECUs and streamlining wiring, the ZCU reduces wiring harness weight by 30 per cent compared to existing systems.
It is built on the EliteZone platform and supports Ethernet capabilities, hardware accelerators and remote-control protocols. It features processing performance of up to 6 kDMIPS, two-port Gigabit Ethernet and more than 20 CAN and LIN interfaces. The unit also includes an integrated hypervisor and data-routing engine, supporting functional safety up to ASIL D.
For power management, the ZCU accommodates 48V system requirements with dedicated power input and eFuse-protected output. The unit employs a service-oriented architecture (SOA), which decouples software development from the underlying hardware. This approach allows modules to subscribe to services exposed by the ECU, supporting feature updates throughout the vehicle lifecycle and shortening development cycles for manufacturers.
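The subscription model described above is essentially a service broker: modules depend only on named services, never on the publishing hardware, so either side can be updated independently. A minimal illustrative sketch of that pattern follows; it is not Marelli's EliteZone software, and the class and service names are assumptions.

```python
from collections import defaultdict
from typing import Any, Callable


class ServiceBus:
    """Minimal service broker: modules subscribe to named services,
    the ECU publishes updates; neither side knows the other's internals."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, service: str, handler: Callable[[Any], None]):
        """Register a module callback for a named service."""
        self._subscribers[service].append(handler)

    def publish(self, service: str, payload: Any):
        """Deliver a service update to every subscribed module."""
        for handler in self._subscribers[service]:
            handler(payload)


# Usage: a lighting module reacts to an ambient-light service exposed
# by the zone ECU without any direct coupling to the sensor hardware.
bus = ServiceBus()
readings = []
bus.subscribe("ambient_light_lux", readings.append)
bus.publish("ambient_light_lux", 12)
```

Because modules bind to service names rather than wiring or ECUs, a software update can add subscribers over the vehicle lifecycle without touching the publisher, which is the lifecycle benefit the article describes.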
Ravi Tallapragada, President, Marelli’s Electronics business, stated, “This recognition for our Zone Control Unit makes me and all of us at Marelli truly proud. It reflects the impact of our work on supporting the industry’s transition toward software-defined vehicles. By bringing cross-domain control into a single, scalable platform, our ZCU enables vehicle makers to innovate at speed. I want to congratulate our global engineering teams, whose dedication and expertise made this achievement possible.”
drivebuddyAI Receives Patent For Vehicle Facial Recognition System
- By MT Bureau
- March 12, 2026
drivebuddyAI has been awarded a patent for a facial recognition system designed for vehicle environments. The technology identifies drivers in moving vehicles to monitor duty hours and manage fatigue.
The system uses computer vision and artificial intelligence to recognise faces under varying lighting conditions and when drivers wear accessories such as caps or mufflers. This replaces manual or key-based identification methods to track driving time for wage calculations and safety compliance.
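Driver identification of this kind typically reduces to comparing a live face embedding against enrolled reference embeddings. The sketch below illustrates that general approach only, assuming cosine similarity and a fixed threshold; it is not drivebuddyAI's patented method, and all names and values are illustrative.

```python
import math


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def identify(embedding, enrolled, threshold=0.8):
    """Match a live face embedding against enrolled driver embeddings.
    Returns the best-matching driver ID above the threshold, else None
    (an unknown face should never be silently logged as a known driver)."""
    best_id, best_sim = None, threshold
    for driver_id, ref in enrolled.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = driver_id, sim
    return best_id
```

Once the driver is identified, each recognition event can be timestamped to accumulate duty hours, which is what makes the method usable for wage calculation and fatigue compliance.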
The patented technology is integrated into several areas of the company's product suite:
- Driver Profiling: Used in the ‘CARDs’ scoring method.
- Alert Systems: Provision of language-specific alerts based on driver identification.
- Performance Monitoring: Real-time tracking of duty time and driver behaviour.
- Compliance: Alignment with Indian government discussions on enforcing rest periods for commercial vehicle operators.
The company holds 15 patents in AI vision, edge processing and risk assessment. Its systems meet India’s AIS-184 driver monitoring standards and the European Union’s General Safety Regulation (EU) 2019/2144.
Nisarg Pandya, CEO, drivebuddyAI, said, “Driver fatigue remains one of the most critical yet under-addressed causes of highway accidents. Our patented technology ensures that fleets know exactly who is driving, for how long, and under what conditions. This creates a foundation for enforcing safe driving limits while also enabling continuous learning and improvement for drivers. This milestone reflects our commitment to delivering technology built from the ground up and leveraging AI to enable safer and smarter driving solutions.”