Functional ultrasound through the skull

Functional ultrasound imaging (fUSI) is an awesome new technology for imaging brain activity. Its spatial resolution can be even better than fMRI's, and the hardware can be made compact and cheap.

Could we possibly use it to make a brain-computer interface? As with the acousto-electric effect, fUSI could have 10,000x more channels than state-of-the-art EEG.

Functional ultrasound in a rat brain (from Macé et al., 2011)

So far, all fUSI experiments involve opening up the skull or replacing it with an acoustically-transparent window. Not the greatest if you want a non-invasive brain-computer interface. Could we do functional ultrasound with the skull intact?

When we asked around about doing functional ultrasound with the skull intact, we were told it was impossible. There are two big challenges in making ultrasound work through the skull:

  1. High frequency ultrasound waves get distorted when they pass through the skull. If you use traditional ultrasound beamforming and don't account for this distortion, you're bound to fail.

A 1 MHz ultrasound wave propagating through a skull model, obtained via CT. A 2D solver with no absorption was used for simplicity. (jupyter notebook)

  2. The skull attenuates ultrasound, so there might not even be enough signal once the ultrasound has passed through the skull and back.

People often quote 22 dB/cm/MHz attenuation of ultrasound. Decibels are a logarithmic scale, so with 1.4 cm of round-trip skull distance and a typical fUSI frequency of 10 MHz, this works out to about 15 orders of magnitude of attenuation in pressure! In physics, there's a word for 15 orders of magnitude of attenuation. It's called zero, i.e., you will measure nothing.
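Spelling out the arithmetic:

$$22\ \tfrac{\text{dB}}{\text{cm} \cdot \text{MHz}} \times 1.4\ \text{cm} \times 10\ \text{MHz} = 308\ \text{dB} = 10^{308/20} \approx 10^{15.4}\ \text{in pressure}$$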

But where did the 22 dB/cm/MHz attenuation number come from? We were skeptical of the methods used to measure this.

We ran a 10-day hacksprint to try to answer whether functional ultrasound through the skull is possible.

Two main results:

  1. Attenuation through the skull is actually not as high as currently reported!

  2. We can de-aberrate the signal after it passes through the skull. Code and algorithm shared!

Banner image showing functional ultrasound through the skull

What is functional ultrasound?

Before anything else, a quick primer on functional ultrasound. Functional ultrasound measures changes in blood flow or volume in the brain. These blood changes are correlated with neural activity — more neurons firing means more energy consumed, which means more blood flowing in to deliver that energy.

Functional ultrasound measures blood volume by sending short pulses of ultrasound into the brain. Because blood cells have a different acoustic impedance than the background, the blood cells scatter the sound waves. Ultrasound transducers then record the scattered sound waves and form an image of the scatterers.

Diagram illustrating functional ultrasound imaging process

What frequency to use?

The first design question for an ultrasound imager is what frequency to use. Higher-frequency ultrasound is attenuated more strongly: at the typical 10 MHz, the skull would attenuate the signal to well below the noise floor of ultrasound transducers.

You might then think to use as low a frequency as you can. But there's a problem with that, too: as you lower the frequency, the amount of scattering from red blood cells decreases, and you get less signal back. The physical reason is that the scattering is in the Rayleigh regime (red blood cells are much smaller than the ultrasound wavelength), where the scattered power scales with the fourth power of frequency.

So what's the optimal frequency? We modeled the ultrasound attenuation through the skull and the Rayleigh scattering from the blood and asked what frequency gives the most signal back.
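Here's a minimal sketch of that trade-off model (the real version is in our notebook). The assumptions: Rayleigh-scattered power scales as f⁴, skull attenuation is 10 dB/cm/MHz over a 1.4 cm round trip, and everything is referenced to 5 MHz fUSI with no skull:

```python
import numpy as np

alpha = 10.0        # dB/cm/MHz, assumed skull attenuation
roundtrip_cm = 1.4  # 0.7 cm of skull each way
f = np.linspace(0.5, 5.0, 1000)  # MHz

# Rayleigh scattering: power ~ f^4, relative to the 5 MHz reference
rayleigh_db = 10 * np.log10((f / 5.0) ** 4)
# Skull attenuation grows linearly with frequency
attenuation_db = -alpha * roundtrip_cm * f

signal_drop_db = rayleigh_db + attenuation_db
print(f"best frequency: {f[np.argmax(signal_drop_db)]:.2f} MHz")  # ~1.24 MHz
```

The two terms pull in opposite directions, and for these numbers the maximum lands near 1.25 MHz, matching the plot below.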

Graph showing ultrasound signal drop through bone for blood cells

Ultrasound signal drop through bone for blood-cell scattering. This combines both the reduced Rayleigh scattering and the bone attenuation effects, using an attenuation of 10 dB/cm/MHz through 0.7 cm of skull each way. The drop is given relative to 5 MHz functional ultrasound without the skull, which is the lowest frequency at which we've seen people do functional ultrasound (jupyter notebook)

It turns out that 1.25 MHz gives the least signal drop for an attenuation of 10 dB/cm/MHz. So we use 1-2 MHz as a first guess.*

* The signal drop isn't the only metric that matters. Another relevant metric is contrast: higher frequencies will give higher contrast between the Rayleigh scattering blood cells and the background.

Skull CT Scan

The thing that nobody tells you is that you can buy a real human skull online (shoutout to skullsunlimited.com). We did that, and then CT scanned it.

Photo of a human skull used for CT scanning

Here's what the CT scan looks like. We thought this was really cool! You can access the scan here.

De-aberration

To correct for the distortion of ultrasound through the skull, we developed our own skull de-aberration algorithm.

Aberrations in the body come from changes in the speed of sound,* similar to how light bends according to Snell's law.

* Changes in density and even the stiffness of the material can also affect wave propagation, but those are less pronounced than changes in the speed of sound in tissue.

There are two problems here:

  1. Bone has a much higher speed of sound (~2800 m/s) than the brain (~1540 m/s)

  2. Part of the skull is actually porous and filled with marrow, so there's a big jump in the speed of sound between the bone (~2800 m/s) and the marrow (~1540 m/s).

Diagram showing different parts of bone structure

Different parts of bone (source: Anatomy and Physiology)

The problem is that traditional ultrasound imaging assumes that the medium's speed of sound is the same everywhere.

The engine of our algorithm is a wave propagation simulator. It takes in a speed of sound map and source waves, and it simulates waves propagating through the medium.
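To make this concrete, here's a toy 2D version of such a solver. This is a sketch under simplifying assumptions (no absorption, constant density, crude wrap-around boundaries via np.roll), not our production code; the real simulations are in the linked notebooks:

```python
import numpy as np

def simulate_wave_2d(c_map, src_yx, src_waveforms, rec_yx, n_steps,
                     dx=1e-4, dt=2e-8):
    """Toy 2D acoustic wave solver: speed-of-sound map + sources in,
    recorded waves out.

    c_map:         (ny, nx) speed-of-sound map in m/s
    src_yx:        list of (y, x) source pixels
    src_waveforms: (n_steps, n_sources) pressure injected at each source
    rec_yx:        list of (y, x) receiver pixels
    Returns recordings of shape (n_steps, n_receivers).
    """
    p_prev = np.zeros_like(c_map)
    p_curr = np.zeros_like(c_map)
    # Keep c*dt/dx <= 1/sqrt(2) everywhere for stability in 2D
    courant2 = (c_map * dt / dx) ** 2
    recordings = np.zeros((n_steps, len(rec_yx)))
    for t in range(n_steps):
        # Discrete Laplacian (periodic boundaries, for simplicity)
        lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
               np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4 * p_curr)
        # Second-order finite-difference update of the wave equation
        p_next = 2 * p_curr - p_prev + courant2 * lap
        for i, (y, x) in enumerate(src_yx):
            p_next[y, x] += src_waveforms[t, i]
        p_prev, p_curr = p_curr, p_next
        for i, (y, x) in enumerate(rec_yx):
            recordings[t, i] = p_curr[y, x]
    return recordings
```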

We use a CT scan of the skull to estimate the skull's speed of sound map. Then, our algorithm considers virtual sensors below the skull.*

* To go from the CT scan to a speed of sound map, we binarize the CT skull into tissue/marrow and bone. We set bone voxels to 2,800 m/s and the rest to 1,540 m/s.
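In code, that binarization step is just a threshold. The Hounsfield cutoff below is an assumption for illustration, not a value from our pipeline:

```python
import numpy as np

def ct_to_sound_speed(ct_hu, bone_threshold_hu=700):
    """Binarize a CT volume (Hounsfield units) into a speed-of-sound map.

    bone_threshold_hu is a hypothetical cutoff; everything above it is
    treated as cortical bone, everything else as soft tissue / marrow.
    """
    sound_speed = np.full(ct_hu.shape, 1540.0)       # m/s, tissue/marrow
    sound_speed[ct_hu > bone_threshold_hu] = 2800.0  # m/s, bone
    return sound_speed
```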

Diagram illustrating virtual sensors below the skull

We use the wave solver to find a transformation between the data recorded at the actual sensors that are above the skull and the data that would have been recorded if they were at the virtual positions below the skull. Then, we use this transformation to act as if the sensors were below the skull, and apply standard delay-and-sum beamforming. The code for reproducing our results can be found in recon_skull.ipynb in our GitHub repository.
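For reference, here's a minimal plane-wave delay-and-sum sketch. The geometry and names are illustrative (virtual sensors on a line, single downward plane-wave transmit at t=0, constant 1540 m/s below the skull); the real pipeline is in recon_skull.ipynb:

```python
import numpy as np

def delay_and_sum(virtual_data, sensor_x, pixel_x, pixel_z, c=1540.0, fs=10e6):
    """virtual_data: (n_t, n_sensors) traces at the virtual sensors.
    sensor_x, pixel_x, pixel_z: positions in meters (numpy arrays)."""
    image = np.zeros((len(pixel_z), len(pixel_x)))
    for iz, z in enumerate(pixel_z):
        for ix, x in enumerate(pixel_x):
            t_tx = z / c  # plane wave travels straight down to depth z...
            t_rx = np.sqrt(z**2 + (sensor_x - x) ** 2) / c  # ...then back up
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            ok = idx < virtual_data.shape[0]
            image[iz, ix] = virtual_data[idx[ok], np.flatnonzero(ok)].sum()
    return image
```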

How do we find the transformation?

To find the transformation, we play a trick with time-reversing waves. If the sensors retransmit a time-reversed version of what they measured, you can, in some sense, “rewind” the waves.

There are two parts to the transformation: transmit and receive.

On transmit, we simulate the virtual sensors sending a plane wave, and we measure the signal at the real sensors. Then, in real life, we get the sensors to send a time-reversed version of what they measured in simulation. This should produce a plane wave at the virtual sensor location.

Diagram showing plane wave propagation through the skull
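Sketched in code, reusing simulate_wave_2d from above (the grid, sensor positions, and pulse here are illustrative, not our actual setup):

```python
import numpy as np

ny, nx, n_steps, dt = 200, 200, 800, 2e-8
c_map = np.full((ny, nx), 1540.0)  # brain / water
c_map[80:100, :] = 2800.0          # crude "skull" layer
real_yx = [(60, x) for x in range(40, 160, 4)]      # real sensors, above skull
virtual_yx = [(120, x) for x in range(40, 160, 4)]  # virtual sensors, below

# All virtual sensors fire the same pulse at once -> a plane wave.
pulse = np.zeros((n_steps, len(virtual_yx)))
pulse[:60] = np.sin(2 * np.pi * 1.25e6 * np.arange(60) * dt)[:, None]

recorded = simulate_wave_2d(c_map, virtual_yx, pulse, real_yx, n_steps)
tx_waveforms = recorded[::-1]  # time-reversed: what the real probe transmits
```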

On receive, we take the data actually measured by our real transducers, propagate the time-reversed version through the wave simulator, and record at the virtual sensors. We use the virtual-sensor data for the subsequent beamforming.
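With the same illustrative names as the transmit sketch, the receive side is one call; measured stands in for data from the actual hardware:

```python
# Time-reverse the real measurements, re-inject them at the real sensor
# positions in the simulated skull medium, and record at the virtual
# sensors below. `measured` has shape (n_steps, n_real_sensors).
virtual_data = simulate_wave_2d(c_map, real_yx, measured[::-1],
                                virtual_yx, n_steps)
# virtual_data now acts like sensors below the skull and feeds straight
# into standard delay-and-sum beamforming (see the sketch above).
```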

We tested our algorithm in a 2D simulation, with a cross-section of our CT-scanned skull. We placed a small scatterer below the skull, and the goal was to reconstruct it.

Comparison of regular beamforming and de-aberration algorithm results

Regular beamforming completely fails, but our algorithm can correctly capture the location of the spot.

We also started to test our approach on real data. We have some preliminary results showing that we can de-aberrate through a 3D-printed squiggly material (meant to emulate the skull), but we need to do more testing.

In translating this approach to humans, we won't have access to a CT scan. But, perhaps an MRI would suffice. Or, maybe you could use machine learning to bypass de-aberration altogether.

Attenuation measurements

Now that we have a potential way to de-aberrate through the skull, we wanted to know whether there would even be enough signal after the ultrasound passes through the skull and back. Ultrasound transducers have noise of about $1 \text{ mPa}/\sqrt{\text{Hz}}$, which is ~1 Pa for a ~1 MHz bandwidth (ref). So we need to make sure the signal we receive is above that.

Why not just shoot more ultrasound?

Unfortunately, there's a safety limit. Since the head absorbs ultrasound, it heats up a little bit. The temperature-rise limit imposed by the International Electrotechnical Commission (IEC) is 2 ºC, so we need to make sure not to exceed that.

To estimate how much signal we'd get back, we needed to know how much the skull attenuates ultrasound. You'd think the literature would have an answer to this, but different sources report widely different answers, from 8.3 dB/cm/MHz to 22 dB/cm/MHz. For some reason, 22 dB/cm/MHz is the number people will tell you in conversation.

We decided to measure it ourselves. We built a little mechanical jig to hold an ultrasound probe on one side of the skull to transmit and a small hydrophone on the other to receive.

Photo of the mechanical jig used for attenuation measurements
More technical details

We had the probe send a plane wave through the skull and measured the pressure levels at the hydrophone. Our jig allowed us to place the hydrophone in 9 different spots, so we could capture as much of the transmitted wave as possible. We took 9 measurements at each of two locations on the skull (occipital and temporal), plus a control without the skull. We used continuous-wave transmission at multiple frequencies: 1 MHz, 2 MHz, and 3 MHz.

We used a Philips P4-1 probe that we found on eBay. We chose it because it was the lowest-frequency transducer we could find (1-4 MHz bandwidth). We measured the signal using an Onda HNR 500 connected directly to a standard oscilloscope, and drove the transducer using a Verasonics Vantage 64 system. The skull was degassed to remove air bubbles (which would otherwise cause lots of scattering) and placed in distilled water.

A few hours before the hacksprint ended, we rushed to get some measurements in. We only had time to take measurements at two jig locations on the head (with 9 hydrophone spots at each location). This is the attenuation we measured:

The attenuation we measured was 11.18 dB/cm/MHz, which was on the lower side of what we saw in the literature.

How did we compute attenuation?

First, we preprocessed the data by bandpass filtering around the transmit frequency. Then, for each jig location, we computed the total power as the squared measurements summed over all 9 hydrophone spots, and divided by the total power with no skull. We measured the thickness of the skull at each jig location with a digital caliper. See this jupyter notebook for our data analysis.
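A sketch of that computation, with illustrative names and shapes (the real analysis lives in the notebook):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def attenuation_db_per_cm_mhz(skull_traces, water_traces, f0_hz, fs_hz,
                              thickness_cm):
    """skull_traces / water_traces: (9, n_samples) hydrophone recordings
    with and without the skull; f0_hz is the transmit frequency."""
    # Bandpass around the transmit frequency (the +/-20% band is an
    # assumption for illustration)
    b, a = butter(4, [0.8 * f0_hz, 1.2 * f0_hz], btype="band", fs=fs_hz)
    skull_f = filtfilt(b, a, skull_traces, axis=-1)
    water_f = filtfilt(b, a, water_traces, axis=-1)
    # Total power summed over all 9 hydrophone spots
    p_skull = np.sum(skull_f ** 2)
    p_water = np.sum(water_f ** 2)
    drop_db = 10 * np.log10(p_water / p_skull)
    # Normalize by skull thickness and frequency
    return drop_db / thickness_cm / (f0_hz / 1e6)
```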

Why are our results different?

Since we only had time to measure two jig locations (and we sketchily removed one nonsensical outlier measurement), you should take our measurements with a grain of salt. But upon digging into the literature, we found big problems with the Fry 1978 paper that reports the 22 dB/cm/MHz attenuation.

Firstly, the paper makes no mention of degassing the skull. Without degassing, many air bubbles can form in the pores, and these scatter ultrasound heavily (since there is a huge acoustic impedance mismatch between air and water).

Secondly, they use focused transmission instead of sending plane waves. The skull spreads out the focus, so if you don't sample enough of the field, you'll mistake spreading for attenuation.

The only good reference we could find for attenuation across frequencies was a 2006 paper by White. They degas the skull and use plane waves. They claim to measure 8.53 dB/cm/MHz, but when we tried reproducing their analysis using their data, we got 11.9 dB/cm/MHz. This is very close to what we measured!

An important point is that we measured attenuation, which is different from absorption. Attenuation also includes things like scattering and reflections. But it's absorption that really limits us, not attenuation, since absorption is what contributes to heating in the head. Our attenuation measurements serve as an upper bound on absorption.

It could be that the absorption is a lot lower than the attenuation. Pinton et al. find that with a 1 MHz pulsed source, only 2.7 dB/cm of the measured 13.3 dB/cm was due to absorption.

Could fUSI through the skull work?

So back to the key question: does fUSI through the skull get a high enough signal-to-noise ratio (SNR)?

If we use 11 dB/cm/MHz as the attenuation value, we'd expect a signal drop of about 40 dB, or equivalently, 100x in pressure, relative to below-skull functional ultrasound (see the earlier plot). If we use 2.7 dB/cm/MHz, we get a signal drop of 22 dB, or roughly 10x in pressure.

Is that too low? We're not sure yet. It depends on the pressure changes typically seen in regular functional ultrasound. Unfortunately, we couldn't find that number in the literature (if you know it, please let us know). If the changes in regular functional ultrasound are ~100 Pa, then we'd expect changes of about 1-10 Pa in functional ultrasound through the skull, i.e., 1-10x the noise floor of a transducer (~1 Pa). And we can use a functional ultrasound trick, called coherent compounding, to increase the SNR further. Note that this is the SNR in the sensor domain; what we really care about is the SNR in the image domain.

Doppler testbed

We also built a testbed to test fUSI outside of simulation. We wanted it to have properties similar to those of the brain. Previous work showed that tofu is desirable as a phantom material, both because it is fast to get and because its physical properties (density, speed of sound) are similar to those of soft tissue.

Because functional ultrasound works by detecting the movement of blood cells in blood vessels, we also needed a way to emulate blood vessels in the phantom. We accomplished this by building our own pump system.

Syringe Pump

We built our own syringe pump. Blood travels at ~10 mm/s in small vessels in the brain, so we wanted our pump to produce similar speeds. Initially we planned on using a peristaltic pump; however, we could not source one with a small enough volumetric flow rate, and at slow speeds a peristaltic pump's flow rate is not consistent.

Technical details

We purchased a premade stepper-motor-driven linear actuator and mounted it to a base plate. Normal sterile syringes of different sizes were used, with swappable custom mounts for each; this, along with adjusting the actuator speed in our Arduino code, lets us vary the volumetric flow rate. We also had various sizes of PTFE and silicone tubing to simulate different-sized vessels. We used Ultrasound Refill Fluid, which has acoustic properties similar to blood.

The stepper motor was controlled by a TB6600 driver, and step signals were sent by a Teensy 4.1 running this program we wrote, which makes it easy to adjust the flow rate, typically 1-10 mm/s, analogous to the rate at which blood cells move in smaller blood vessels and capillaries.
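The back-of-the-envelope for picking the plunger speed is just conservation of flow. The syringe and tubing dimensions below are assumptions for illustration, not our exact parts:

```python
# Conservation of flow: v_plunger * A_syringe = v_tube * A_tube
syringe_inner_diameter_mm = 14.5  # e.g. a 10 mL syringe barrel (assumed)
tube_inner_diameter_mm = 1.0      # e.g. small PTFE tubing (assumed)
target_tube_speed_mm_s = 10.0     # ~blood speed in small vessels

area_ratio = (syringe_inner_diameter_mm / tube_inner_diameter_mm) ** 2
plunger_speed_mm_s = target_tube_speed_mm_s / area_ratio
print(f"plunger speed: {plunger_speed_mm_s * 1000:.1f} um/s")  # ~47.6 um/s
```

The plunger has to move orders of magnitude slower than the fluid in the tube, which is why a geared stepper-driven actuator works better here than a peristaltic pump.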

Photo of the tofu phantom used for testing
Photo of the custom-built syringe pump

Check out our other post, We have AE at home!

Acknowledgements

A big thank you to:

Code

The whole project is open source. You can find the code on our GitHub repository.