Those two extremes illustrate the present mental state of the US: some of us are cleverly figuring out how to dodge the problems we have created, such as space debris falling on us, while others stew over elaborate, fanciful nonsense like everything on Earth coming unglued and falling into space.

However, a truly intelligent, sentient species would stop flinging dangerous things above its own head until it had devised a reliable system for managing their descent.
Large chunks of space junk that return to Earth at supersonic speeds are like artificial meteors, emitting sonic booms that shake the atmosphere before slamming into the ground. [...]
Warning people to evade falling space debris is a tall order. But Dr. Fernando’s team showed it was possible to precisely map the trajectory of an object as it re-entered the atmosphere, and to narrow down where surviving pieces may have fallen, thanks to seismometers, instruments chiefly designed to detect earthquakes.
In a study published Thursday in the journal Science, the researchers showed that sensors throughout California registered the sonic booms of the Shenzhou-15 module’s disintegration. The data let the researchers ascertain how and when the spacecraft component broke apart, and in what direction the fragments were traveling. [...]
Dr. Fernando thinks seismometers could solve this conundrum. Although primed to pick up the seismic waves of tectonic events, the tools can detect the rumblings of hurricanes, avalanches and megatsunamis. They also register the midair destruction of meteors, whose sonic waves strike the ground and create seismicity. The acoustic effects of re-entering spacecraft parts aren’t too dissimilar.
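To give a rough sense of how arrival times across a network can pin down an acoustic source, here is a minimal sketch. It is not the team's method: it treats the fragmentation as a single stationary point source over flat 2D ground, and the station positions, arrival times, and sound speed are invented for illustration.

```python
# Minimal sketch: locate an acoustic source from sonic-boom arrival
# times at several seismic stations, by least-squares over arrival-time
# residuals. All numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

C = 0.34  # assumed effective speed of sound, km/s (~340 m/s)

# Hypothetical station coordinates (x, y in km) and boom arrival times (s).
stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [50.0, 45.0]])
arrivals = np.array([114.9, 65.8, 65.8, 83.2])  # made-up observations

def residuals(params):
    """Predicted minus observed arrival times.

    params = (x, y, t0): source position (km) and origin time (s).
    Predicted arrival = origin time + distance / sound speed.
    """
    x, y, t0 = params
    dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    return (t0 + dist / C) - arrivals

fit = least_squares(residuals, x0=[20.0, 20.0, 0.0])
x, y, t0 = fit.x
print(f"Estimated source: ({x:.1f} km, {y:.1f} km), origin time {t0:.1f} s")
```

A real re-entry analysis is far harder: the source is hypersonic and moving, the boom is a traveling shock rather than a single bang, and sound speed varies with altitude. But the underlying idea, fitting a source model to arrival times at many stations, is the same.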
Hundreds of millions of people now use chatbots every day. And yet the large language models that drive them are so complicated that nobody really understands what they are, how they work, or exactly what they can and can’t do—not even the people who build them. Weird, right?
It’s also a problem. Without a clear idea of what’s going on under the hood, it’s hard to get a grip on the technology’s limitations, figure out exactly why models hallucinate, or set guardrails to keep them in check.
But last year we got the best sense yet of how LLMs function, as researchers at top AI companies began developing new ways to probe these models’ inner workings and started to piece together parts of the puzzle.
One approach, known as mechanistic interpretability, aims to map the key features and the pathways between them across an entire model.
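As a toy illustration of the basic move behind this kind of work, the sketch below reads out a model's intermediate activations with PyTorch forward hooks. The tiny network is an invented stand-in; real mechanistic-interpretability research (for example, training sparse autoencoders on transformer activations to isolate features) goes far beyond this.

```python
# Toy illustration: capture a network's intermediate activations with
# forward hooks. The model here is a stand-in, not a real LLM.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}  # layer name -> captured activation tensor

def capture(name):
    # A forward hook fires after a module runs, handing us its output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(capture(name))

_ = model(torch.randn(1, 8))
for name, act in activations.items():
    # Interpretability work looks for "features": recurring directions
    # and patterns in tensors like this one.
    frac_active = (act > 0).float().mean().item()
    print(name, tuple(act.shape), f"{frac_active:.0%} units active")
```

Mapping "features and the pathways between them" in a frontier model means doing this kind of probing at enormous scale, then finding structure in the billions of activations it produces.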
A crucial part of birds’ eyes is unlike any other tissue known in vertebrates. Their retina – the light-sensitive layer at the back of the eye – sidesteps the near-universal need for oxygen by vacuuming up heaps of energy-rich sugar instead.
The discovery solves a 400-year-old mystery about the physiology of birds’ eyes. It is also a neurobiological paradigm shift, says Christian Damsgaard at Aarhus University in Denmark.
“We have the first evidence that some neurons can work without any oxygen, and they’re found in the birds that fly around in our gardens,” he says.
Retinas detect light and relay this information to the brain as nerve signals. The tissue requires a lot of energy and is typically sustained by oxygen and nutrients coursing through a mesh of blood vessels. But bird retinas are extremely thick, and no vessels weave into the tissue, so it was a mystery how they received enough oxygen to keep the deep stacks of nerve cells alive.
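Some back-of-the-envelope arithmetic (standard textbook figures, not from the study) shows why "heaps" of sugar are needed: glycolysis without oxygen yields only about 2 ATP per glucose molecule, versus roughly 30 via oxidative phosphorylation.

```python
# Back-of-the-envelope arithmetic; approximate textbook values.
ATP_WITH_OXYGEN = 30  # oxidative phosphorylation, ~30-32 ATP per glucose
ATP_GLYCOLYSIS = 2    # glycolysis alone, no oxygen required

ratio = ATP_WITH_OXYGEN / ATP_GLYCOLYSIS
print(f"Oxygen-free glycolysis needs ~{ratio:.0f}x more glucose "
      "for the same energy.")
```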