I am a building inspector and fire engineer with 30 years’ experience. I’ve overseen numerous projects across London, including new builds, refurbishments and post-occupation fire risk assessments, making sure buildings comply with the relevant regulations. Given that experience, I was shocked by the blaze which engulfed Grenfell Tower in the early hours of Wednesday morning.
At this point it is very hard to tell precisely what went wrong. We don’t know where the fire started and we don’t know how it spread. What we can say for sure is how the building should have performed – and that it definitely did not perform that way. If regulations were followed, what happened at Grenfell Tower should never have been possible, and there are very big questions which need to be answered. There are already suggestions that proper planning procedures were not followed.
Normally, British fire regulations assume that fires will start in one location only – and normally, this is completely reasonable. In a big tower block like Grenfell, each individual flat is a fire-tight box from which flames should not be able to escape, and a fire which starts in one tends to stay in it. That is why residents are usually advised to stay put within their own flats and wait for rescue. The fire service should arrive within ten minutes, ascend the building, and tackle the fire where it burns, while other residents sit quite happily in place.
This is also why we shouldn’t be disturbed by reports from Grenfell that there was no common alarm system installed. Most residential blocks don’t have common alarms, because they could trigger a mass panic in which everyone tries to evacuate via the same stairwell which the fire service are using to reach the fire. Unlike in a hotel, there are no trained fire wardens to safely direct such an evacuation. In the event that a fire grows too large, firefighters might sometimes decide to evacuate the floor immediately above. Otherwise, it’s better that everyone stays where they are. That policy has worked several hundred times over the past few years without a problem.
What happened at Grenfell was something else entirely. Firefighters were on site six minutes after being called, which is within expectations. But it is extremely unusual for a fire to spread so far, with such speed and ferocity. Within half an hour or so it had travelled way beyond the first flat, making it very difficult for the fire service to control. Even more worryingly, survivors have reported that stairwells and lobbies were choked with smoke, which should never happen: there are supposed to be means of clearing smoke from such areas. In those circumstances, “stay put” becomes obsolete.
And yet to me the fire spread still had a horrifying familiarity. This has happened before, and – if we are not careful – it may happen again.
At Knowsley Heights in Huyton, Merseyside, in 1991, fire spread in a way no one had predicted: via the decorative cladding on the outside of the building. These plastic or metal panels are installed to protect a building from weather or improve its appearance, but between them and the wall there is a cavity where rain can run down. In the event of a fire this cavity acts like a chimney, drawing the hot air up through itself and making the flames burn brighter. In this way the fire travelled all the way up from the base of the building to the very top.
New analytical techniques that work with massive volumes of highly varied data are enabling geoscientists and engineers to understand the nature and extent of reservoirs in ways never possible before. Welcome to an interview with Kamal Hami-Eddine of Paradigm, who explains big data and deep learning as they are being used today in petroleum exploration and production. Kamal will be presenting at the AAPG Deepwater / Big Data GTW.
My name is Kamal Hami-Eddine, of Paradigm, and I studied applied statistics, probabilities and stochastic processes. This is how I was introduced to big data problems. I was studying in a city where the aviation industry is big, and a big challenge for them was to monitor and learn from all the measurements taken during flights, in order to limit maintenance costs. At the time the problem was unsolvable, but a lot of research was done to find ways to transform idle data into information. That being said, I worked a lot on machine learning, and on neural networks more specifically, so naturally these days it is all about big data and deep learning.
I've read the other answers from people outside the industry. Allow me to give you an answer from within it.
There is currently no machine learning going on, and I can't conceive of an application for it. While we have automated many things, and machines now do much of the above-ground manual labor humans previously did, these are simple, well-defined, non-evolving tasks. There is no need for the machines to learn anything.
Downhole tools already use sensor data and function autonomously. The sensors and data are so simple that the "decisions" made by downhole tools are approximately as simple as the "decisions" made by your car's old-fashioned speed control – so simple that the programming hasn't changed in years, and doesn't need to. They're much like ballistic missiles: we tell them where to go and, unless they break, they go where told. Their only decisions are made using iterative loops. Downhole machines don't need to learn because their jobs are incredibly simple.
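To make the cruise-control analogy concrete, here is a minimal sketch of the kind of fixed, non-learning control loop being described. The function name, setpoint and tolerance are purely illustrative assumptions, not taken from any real downhole tool or vendor API:

```python
# Hypothetical sketch of a simple, non-learning control loop, analogous
# to an old-fashioned speed control. All names and thresholds are
# illustrative assumptions.

def speed_control(current, target, tolerance=1.0):
    """Return a throttle adjustment: +1 to speed up, -1 to slow down, 0 to hold."""
    error = target - current
    if error > tolerance:
        return 1      # too slow: speed up
    if error < -tolerance:
        return -1     # too fast: slow down
    return 0          # within tolerance: hold steady

# The "decision" is just an iterative loop comparing each sensor reading
# to a fixed setpoint; nothing carries over or is learned between readings.
readings = [55.0, 57.5, 60.2, 63.0, 61.0]
adjustments = [speed_control(r, target=60.0) for r in readings]
```

The point of the sketch is that the program's behaviour is entirely determined by its fixed rules: run it on the same readings a thousand times and it makes exactly the same "decisions", which is why such tools have no need for machine learning.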
I witnessed BHP Billiton's attempt to use "big data" to optimize drilling operations. It failed dismally because the data analysts knew nothing about the meaning of the statistics they were accumulating, and so drew many wrong conclusions. Big data is of interest when analyzing a very large number of data points, as in consumer or voter behavior. When dealing with smaller, more granular data sets, big data's conclusions can be very misleading. An oil company may employ anywhere from a few to a few hundred drilling rigs spread across the globe, many operating in unique circumstances. Drawing conclusions from that kind of granular data isn't best done by algorithms; it's best done by experienced, knowledgeable humans. I don't see an upstream application for big data. Even our "little data" is usually misused.