The Boston Fire Department has begun using emerging technology to fight fires over the last couple of years. In collaboration with Karen Panetta, an IEEE fellow and dean of Graduate Education at Tufts University’s School of Engineering, the department is using AI for object recognition. The goal is a drone or robot that can locate objects in a burning building.
Panetta worked with the department to develop prototype technology that leverages IoT sensors and AI in tandem with robotics to help first responders “see” through blazes to detect and locate objects – and people. The AI technology she developed analyzes data coming from sensors that firefighters wear, and it identifies objects so that crews can navigate through a fire.
The systems are still a work in progress. Panetta noted that all the imaging and sensors are working, but the prototypes can't withstand heat right now. Departments were testing the prototype by doing onsite burns, but that stopped with the onset of the pandemic.
She insists that AI holds great promise and will have a significant impact on the effectiveness of first responders in dangerous settings. She noted that the systems need to learn from thousands of scenarios to conduct search and rescue, prevent fatalities, and detect the presence of victims in hazardous conditions. She now wants to apply the prototype to the California wildfires.
Design News caught up with Panetta to find out how she expects the technology to get deployed going forward.
DN: Explain the technology that you developed in partnership with the Boston Fire Department.
Karen Panetta: Natural disasters such as earthquakes and hurricanes, anthropogenic disasters, and the wildfires overwhelming California are dynamic situations where numerous hazards constantly emerge, hindering humanitarian efforts and creating deadly conditions for rescue workers and civilians. My team’s work in partnership with the Boston Fire Department involves using low-cost sensor and imaging technologies to collect information and automatically analyze data to help in disaster assessment, monitoring, and relief efforts.
Following a catastrophic failure of a structure, rescue workers and emergency responders may be required to enter an extremely unsafe environment. Emergency workers may be responsible for assisting survivors, extinguishing fires, shutting down utilities, evaluating structural instabilities, identifying safe paths into the structure, and assessing other hazards, such as loose electrical wires and airborne contaminants.
DN: How have firefighters used technology in the past?
Karen Panetta: Most disaster area assessment, as well as search and rescue efforts, rely heavily on visual imagery captured from cell phones, aerial video, and other forms of media. Currently, most imaging systems intended for human observers rely on images acquired from visible spectrum cameras due to their high availability and low cost. For critical missions, human observers typically view these images manually to perform detection and recognition. The images are taken in harsh conditions, such as smoke, rain, fog, and fire, which hinder human observers’ ability to detect victims or locate their fellow first responders.
Unfortunately, the cognitive load of processing high volumes of information falls on humans who are simultaneously subjected to dynamically changing, dangerous conditions while doing their jobs to find victims and fight fires.
AI and Advanced Technology In Dangerous Environments
DN: How do artificial intelligence and other advanced technologies help in these dangerous situations?
Karen Panetta: Our research utilizes artificial intelligence to help in these dynamic situations by providing real-time image enhancement capabilities to remove smoke, fog, rain, snow, and to leverage recognition algorithms to bring attention to objects of interest, such as humans, animals, and other physical objects. This will help responders better navigate hazardous environments.
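As a rough, self-contained sketch of this kind of enhancement, the widely used dark-channel-prior dehazing method (a standard technique from the imaging literature, not necessarily the team’s own algorithm) can be written in a few lines of NumPy:

```python
import numpy as np

def dehaze(img, patch=7, omega=0.9, t_min=0.1):
    """Simplified dark-channel-prior dehazing on a float RGB image in
    [0, 1]. Illustrative only; real pipelines refine the transmission
    map and run on hardware-accelerated edge devices."""
    h, w, _ = img.shape
    # Dark channel: per-pixel min over color channels, then a local min filter.
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    local_min = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            local_min[i, j] = padded[i:i + patch, j:j + patch].min()
    # Atmospheric light: mean color of the brightest ~0.1% dark-channel pixels.
    n = max(1, (h * w) // 1000)
    idx = np.argsort(local_min.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, floored to avoid over-amplifying noise.
    t = 1.0 - omega * (img / A).min(axis=2)
    t = np.maximum(t, t_min)[..., None]
    # Recover scene radiance.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

On a smoky frame, the recovered image trades the uniform haze veil for restored local contrast, which is what downstream recognition algorithms need.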
Our research also integrates IoT to optimize these applications. It is so important for teams to know where their members are located, whether in a forest or within a building. This is especially critical when an individual loses contact with their team or is injured and unable to communicate.
Working on the California Wildfires
DN: Explain how robot technology can be used to help with the California wildfires.
Karen Panetta: In California, drones are being deployed to monitor situations, detect hotspots, and track the spread of the fire. However, the footage being captured is typically viewed live, so without the ability to enhance the imagery in real-time, important details and detections can still be missed because of poor visibility and the overwhelming volume of information a human observer must keep pace with. In addition to being able to provide clear vision images from smoky images, our research includes data fusion of multi-modal sensors, including thermal, chemical, and radiation sensors. This allows a better understanding of situations and dangers.
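As an illustration of the fusion idea, a toy weighted combination of normalized sensor readings might look like the following; the modality names and weights here are assumptions for the example, not the team’s actual model:

```python
def hazard_score(readings, weights=None):
    """Fuse normalized multi-modal sensor readings (each in [0, 1]) into
    a single hazard score via a weighted average. A toy sketch; real
    fusion would model sensor noise and cross-modal correlations."""
    # Default weights are illustrative assumptions.
    weights = weights or {"thermal": 0.5, "chemical": 0.3, "radiation": 0.2}
    total = sum(weights.get(k, 0.0) for k in readings)
    if total == 0:
        return 0.0
    # Normalize by the weight of the modalities actually present, so a
    # drone carrying only one sensor still yields a meaningful score.
    return sum(weights.get(k, 0.0) * v for k, v in readings.items()) / total
```

Normalizing by the weights of the modalities actually present lets the same code work whether a platform carries one sensor or all three.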
DN: What was involved in developing this technology with the Boston Fire Department?
Karen Panetta: My colleagues and I are working with the Boston, Medford, and Somerville Fire Departments to test our technologies and deploy one of our first prototype systems in conjunction with the IEEE-MOVE (Mobile Outreach Vehicle) team. The IEEE-MOVE truck has been successfully deployed to several areas affected by natural disasters. It contains satellite communications and solar-powered charging stations so relief workers and communities can keep powering their devices and can maintain communications. Our multi-modal imaging and sensing platform will give added capabilities to help keep workers safe and to better assess situations even in inclement weather.
One often overlooked danger is the hazards presented by flooding due to submerged power lines. We often see images of people wading through water and putting themselves in danger because they are not aware of the power lines. Most recently, we saw alligators entering waterways after flooding. Our novel underwater imaging technologies would have detected them.
DN: Is any of the underwater technology being used now?
Karen Panetta: Several companies are using our technology for a variety of underwater imaging applications. There has been so much work on enhancing “open-air” images, but these techniques fail miserably in underwater environments. Our work addresses this directly.
DN: In developing robot technology for California, will the IoT and AI technology developed in Boston also be involved?
Karen Panetta: The beauty of what we have designed is that it is like a portable engine that can be plugged into different kinds of vehicles. For instance, it can be deployed on a robot, a drone, or simply used in conjunction with a laptop computer with the sensors mounted onboard a vehicle or boat.
The different sensor modalities can be swapped out on a drone to increase its flying time and decrease the load on the drone. So, if it’s flying at night, using our thermal recognition module captures the information best. If there are dangerous chemicals, it is better to use one of the chemical sensing modules to map out the presence of dangerous chemicals that may be burning.
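The swappable-module design Panetta describes can be sketched as a simple plug-in interface; every class and field name below is hypothetical, chosen for illustration rather than taken from the team’s system:

```python
from dataclasses import dataclass
from typing import Protocol

class SensorModule(Protocol):
    """Interface any payload module must satisfy (hypothetical design)."""
    name: str
    def read(self) -> dict: ...

@dataclass
class ThermalModule:
    name: str = "thermal"
    def read(self) -> dict:
        # Stub: a real module would sample an IR camera.
        return {"max_temp_c": 412.0}

@dataclass
class ChemicalModule:
    name: str = "chemical"
    def read(self) -> dict:
        # Stub: a real module would poll onboard gas sensors.
        return {"co_ppm": 35.0}

class DronePayload:
    """Carries exactly one module at a time, keeping weight and power
    draw low so the drone's flight time stays long."""
    def __init__(self, module: SensorModule):
        self.module = module

    def swap(self, module: SensorModule) -> None:
        self.module = module

    def sample(self) -> dict:
        return {self.module.name: self.module.read()}
```

Swapping a module between flights is then a one-line operation, mirroring the “portable engine” idea of plugging the same core into different vehicles.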
The Algorithms of Rescue
DN: What type of algorithm are you using to process collected sensor data?
Karen Panetta: Our algorithms utilize edge computing, meaning we are so efficient in our processing that it can occur onboard the device and not require transmitting and post-processing before viewing. Communication systems are typically strained during natural disasters.
The concept is to transmit as much useful information as possible, so if we can transmit viewable, clear images with objects of interest already highlighted, that’s a huge advantage over transmitting many images that are unintelligible to human vision and then doing the processing off-site to make them viewable. That precious time delays response. For the applications we are targeting, getting accurate information to responders, locating victims quickly, and keeping responders safe are the primary goals.
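The edge-side triage Panetta describes, sending only frames that already contain detections, can be sketched as follows; the frame structure and threshold are illustrative assumptions, not details of the team’s pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A captured frame plus detection scores from an onboard model.
    Hypothetical structure for illustration."""
    frame_id: int
    size_bytes: int
    detection_scores: list = field(default_factory=list)

def select_for_uplink(frames, threshold=0.5):
    """Keep only frames whose strongest onboard detection clears the
    threshold, so strained disaster-area links carry actionable imagery.
    Returns the kept frames and the bytes saved by not sending the rest."""
    keep = [f for f in frames
            if max(f.detection_scores, default=0.0) >= threshold]
    saved = sum(f.size_bytes for f in frames) - sum(f.size_bytes for f in keep)
    return keep, saved
```

Dropping empty frames at the edge is what makes the bandwidth math work: the link carries a handful of annotated images instead of a raw video stream.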
A Future of Clear Perception
DN: Is this technology being used now, or is it targeted for the future?
Karen Panetta: We anticipate that this technology will be used much more for underwater applications. Right now, my team is using the same “engine” to detect plastics in the Mystic River in Massachusetts. This could be a new way to cost-effectively assess pollution and its impact on our waterways and wildlife.
We see millions of dollars spent on underwater search and recovery missions. The multi-modal technologies coupled with our image enhancement algorithms will reduce these costs and allow users to detect and recognize objects much faster and more safely.
As a member of IEEE, IEEE-HKN, and the IEEE SIGHT team, I learn firsthand from responders and relief workers around the globe about their challenges. They risk their lives every day to keep us safe. Since we have the capacity to help them through our low-cost technologies and engineering innovation, we feel it’s our duty to help them.
Rob Spiegel has covered automation and control for 19 years, 17 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cybersecurity. For 10 years, he was the owner and publisher of the food magazine Chile Pepper.