qlyoung's wiki

underwater videography with gopro

What I do to get decent results out of a GoPro when shooting underwater.

Light

Attenuation

Cameras are light gathering devices and perform better the more light is available, all else being equal. How much light is available in the water?

Attenuation of light in a medium is described by the Beer-Lambert law. For water, light intensity decreases with depth according to

I(z) = I₀e⁻ᵏᶻ

where I₀ is the initial light intensity (at the surface), I(z) is the intensity at depth z, and k is the absorption coefficient, which varies with the medium (for water, things like clarity and particle content).

For air, k is around 0.0001 per metre. For seawater, k is more like 0.05 per metre. That gives us roughly the following:

Depth (m)   Depth (ft)   Light Intensity (%)
 0            0          100
 5           15           78
10           35           61
15           50           47
20           65           37
25           80           29
30          100           22
35          115           17
40          130           14
45          150           11
50          165            8

It's much more complicated than this - absorption is wavelength dependent, seawater is not uniform, there are scattering effects, etc - but this is a decent approximation for this discussion.
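The table can be reproduced directly from the formula; a quick sketch, using the same k = 0.05 per metre for seawater:

```python
import math

def light_intensity(depth_m, k=0.05):
    """Fraction of surface light remaining at depth (Beer-Lambert law)."""
    return math.exp(-k * depth_m)

for depth in range(0, 55, 5):
    print(f"{depth:3d} m  {light_intensity(depth) * 100:5.1f} %")
```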

The upshot is that the diving environment is low light as far as cameras are concerned. This is why underwater photographers have kit that looks like this:

Notice that the camera itself is the smallest part of the whole setup; most of the bulk is dedicated to lighting that adds back the light absorbed by the water.

Sensor size

Generally speaking the larger the sensor, the more light it can collect and the better its dynamic range. Smaller sensors struggle more in low light.

To put in perspective how challenging it can be to use an action cam underwater:

A full frame camera sensor is roughly 36x24mm for an area of 864mm^2. The Hero 11 sensor is 6.74×5.05mm for a sensor area of 34.04 mm^2. That is ~25x smaller than full frame.

If you are wondering why 6.74×5.05mm does not correspond to the 1/1.9" figure published by GoPro, that is because that figure is in optical format, a legacy designation that does not map directly to physical sensor dimensions.
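The arithmetic, as a sanity check:

```python
# Sensor areas in mm^2, from the dimensions above.
full_frame = 36.0 * 24.0   # standard full-frame sensor
hero11 = 6.74 * 5.05       # GoPro Hero 11 sensor

print(full_frame)               # 864 mm^2
print(round(hero11, 2))         # ~34.04 mm^2
print(round(full_frame / hero11, 1))  # full frame is ~25x larger
```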

Color

Water is blue because it preferentially absorbs wavelengths of light other than blue. The deeper you are, the farther light has to travel through water, so more red light is absorbed and the bluer the scene appears.

The only way to truly compensate for this is to add the missing red wavelengths back in with external light.

In camera you can adjust the white balance to bias the output toward red. Camera sensors sit behind a color filter array (typically a Bayer pattern): each photosite records red, green, or blue, and demosaicing combines neighboring values into full-color pixels. Adjusting white balance applies per-channel gains to this color data, which can be used to boost the red channel relative to the others. More on this in the Settings section.
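Conceptually, white balance is just per-channel gain. A toy sketch (the gain values are made up for illustration and are not GoPro's actual processing):

```python
def apply_white_balance(rgb, gains):
    """Scale each channel by its gain and clip to the 8-bit range."""
    return tuple(min(255, round(c * g)) for c, g in zip(rgb, gains))

# A blue-shifted underwater pixel: weak red, strong blue.
pixel = (40, 110, 180)

# Boost red, leave green alone, slightly cut blue (illustrative values only).
warmed = apply_white_balance(pixel, (2.2, 1.0, 0.85))
print(warmed)  # (88, 110, 153) - less blue-dominated
```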

Filters

Red filters over a lens are stupid. Putting red plastic over a lens does not magically restore red wavelengths to the scene. The filter is red because it absorbs blue light. As previously stated, cameras perform better with more light, and most of the light we have at depth is blue. All you are doing with a red filter is deleting the precious light you do have.

Even if filters were good, since the scene spectrum varies by depth and filters are calibrated to one spectrum your filter will almost always be suboptimal.

You can tell when someone used a red filter because their video/picture looks like shit. Don't use filters.

Framerate

A slower framerate allows longer exposure times. Longer exposure gathers more light and produces a better picture, all else being equal. My advice is to bias toward lower framerates in water to give the camera more breathing room.

Diving is a slow sport, so 30fps is usually plenty. Personally I find it looks better / more cinematic as well. Fast, active subjects such as bait balls might benefit from 60fps. As a rule of thumb, start at 30 unless you plan to shoot something fast or want the action cam look, then consider bumping to 60 and potentially increasing ISO max to allow the camera a little more flexibility.
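The exposure math behind this: shutter time cannot exceed the frame interval, so halving the framerate doubles the maximum light per frame (ignoring shutter-angle conventions):

```python
def max_exposure_s(fps):
    """Longest possible shutter time per frame at a given framerate."""
    return 1.0 / fps

for fps in (24, 30, 60):
    print(f"{fps} fps -> max exposure {max_exposure_s(fps) * 1000:.1f} ms")

# 30 fps admits twice the light per frame of 60 fps, all else equal.
print(max_exposure_s(30) / max_exposure_s(60))
```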

ISO

In digital cameras, ISO controls the gain applied to the sensor signal. The tradeoff is that increased gain lowers SNR - i.e. higher ISO produces more visible noise. Fortunately denoising is very good these days, so if you are willing to denoise in post, you can get away with relatively high ISO.

I usually limit to 1600.

Settings

GoPro publishes a firmware called Labs that unlocks a lot of power user software toggles, many of which are useful for underwater videography. Whoever is responsible for Labs does an amazing job.

Basic

  • Natural color
    • Flat or a custom log curve in theory grants more flexibility for color correction in post. However, after editing underwater log footage I realized that I don't care and that Natural looks completely fine to my eye, so I don't use it anymore.
  • White balance auto
    • The spectrum varies constantly with depth and other conditions; on balance I've found I get the best results letting the camera handle it
  • Wide lens
    • “Wide” is the native lens. “Narrow” crops it, “Hyperview” and “Superview” convert 8:7 and 4:3 video respectively to 16:9 using an in-camera distortion mapping. I prefer an output with minimal in-camera corrections so I use “Wide”.
    • Notably, if you are using Gyroflow and want 16:9 you should shoot in 8:7 and select 16:9 as an export option in Gyroflow instead of in the camera; that way Gyroflow benefits from the additional buffer space to perform corrections and then crops at the end. Gyroflow can stabilize Hyper/SuperView footage, but in order to do so it has to undo the Hyperview distortion first using a reverse-engineered distortion matrix and then re-apply it, so you may as well shoot without it to begin with.

  • 5.3k30 8:7
    • I personally find this to be the sweet spot. 5.3k 8:7 is the native sensor resolution, every single pixel. By keeping the entire frame, post processed stabilization has much more “buffer” space to crop into. In this resolution the maximum framerate is 30 - but as mentioned I almost always shoot 30fps in water, so this is fine.
  • 4k60 8:7
    • If I want to shoot 60fps for some reason then I drop the resolution to 4k in order to keep the 8:7 aspect ratio. If you want the absolute highest resolution, you can drop to 16:9 which unlocks 5.3k60. Personally I prefer working in a consistent aspect ratio and think 4k subsampling looks good enough especially at high framerates.
  • Hypersmooth off
    • Water or not, I prefer to stabilize in post with Gyroflow. The result is better and it allows more control over the stabilization parameters
    • Hypersmooth is calibrated for the refractive index of air and is only 70% effective in water; Gyroflow accounts for water if configured correctly
    • See the section on stabilization for more detail
  • Bitrate high
    • Higher bitrate generally equals better quality video
  • 10-bit on
    • More bit depth allows capturing finer grained color differences, which looks significantly better
    • Greater bit depth also reduces color banding, which is often a problem in underwater footage since the backdrop tends to be a diffuse color gradient
  • Shutter auto
    • The camera knows best
  • ISO min 100
    • I don't understand why you would change this.
  • ISO max 1600
    • This is a tough one to choose and varies based on the scene. 1600 seems to work well for most environments. I notice that in 30fps the camera will usually keep it around 800. In a cave or wreck with a torch beam it behaves similarly.
  • Sharpness low
    • I sharpen in post as desired
  • EV Comp 0
    • I don't know enough to change this
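To put the 10-bit setting in numbers: per-channel levels grow from 256 to 1024, so the quantization steps across a gentle blue-water gradient are four times finer:

```python
for bits in (8, 10):
    levels = 2 ** bits
    # Size of one quantization step across a normalized 0..1 gradient.
    step = 1.0 / (levels - 1)
    print(f"{bits}-bit: {levels} levels per channel, step = {step:.6f}")
```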

Labs

I like to set these with QRControl.

  • NR01=1
    • GoPro's built-in NR is too aggressive and softens the image too much. The underwater environment is inherently noisy, especially in the ocean, where large portions of the frame are a uniform blue that makes noise very visible
    • Noise reduction in post has much better results. See denoising for details.
  • BITR=120
    • If you have a fast SD card you can increase the maximum bitrate with this setting. However, it causes some instability and lost footage on my specific camera, so I think it is probably a wash.

On Hero 12/13, there is DIVE=1 which enables the water distortion corrections I mention in Stabilization for Hypersmooth. As mentioned I don't use Hypersmooth, but if you do, you should probably enable this.

On 13 there is also WBDV=1, described as follows:

White Balance DiVe improvements. Rather than WARM for improving diving white balance, which effects WB the same at all depths, WBDV is more automatic – as the scene get more blue, the more the red channel is gain up.

I have a Hero 11 and haven't used either of these settings so I can't speak to how well they work.

Praxis

Case

I use the official GoPro case. If you dive deeper than 50m you need to use something else. I don't have any recommendations.

I find it works best with a 3/4" or 1" bolt snap attached like this:

During shooting I just hold the camera itself. I have tried handles, including the official GoPro dive handle. I don't notice any improvement in footage and it is bulky.

Predive

Nothing sucks more than turning your GoPro on in 30m of water only to see that it is stuck in the wrong video mode. You cannot change this in the water (actually you can but more on that later).

Before every dive:

  • Check and set the time/date
    • You can do this with QRControl or Quik
  • Ensure the video and photo modes are in the preset you want

During dive

There is a way to change settings while in the water. GoPros with the labs firmware can read QR codes. If you want, you can print out QR codes that do specific things and tape them into your wetnotes like this:

Then during the dive you flip to the right QR code and point the camera at it to change the setting.

You do carry wetnotes, right?

Post

Any computational operation you do on the camera itself is limited by the embedded CPU used on the camera, its thermal and battery constraints, and time. In post these constraints are far more favorable so you can potentially get much better results.

Video specific stuff is biased towards Resolve since that's what I use.

Stabilization

Image stabilization is used to turn shaky handheld camera footage into nice smooth footage. Optical image stabilization, typically found in full size cameras, does this with motors that physically move the lens to cancel out shake and vibration.

Electronic image stabilization, which is used in small cameras with fixed lenses and in post processing workflows, stabilizes footage by calculating the camera motion and then applying transformations and warping to the frame to cancel out that motion. In the case of the GoPro the source of camera motion data is the gyroscope. Hypersmooth reads the gyroscope directly and postprocessing tools use the gyro data that is encoded alongside image data in the MP4 files that the GoPro records.

The result is usually cropped to eliminate the crazy warping borders. In EIS you sacrifice part of the frame for stability. The tradeoff is usually worth it.

Distortion & Refractive Index

In order to know what transformations to apply, the EIS algorithm needs to know how a change in camera position is reflected in the image. This requires EIS to know the optical properties of the lens, which is why post-processed stabilization works best when calibrated for the specific camera model / lens being used.

One of the major optical properties of a lens is its relative refractive index. Light travels at different speeds in different media. When light traveling in one medium enters a different medium, it bends (refracts) and the degree of this bend is a function of the difference in the speed of light between the first and second medium.

If you fix the first medium, e.g. by assuming it is air, then when characterizing the distortion introduced by the second medium (e.g. a lens) you can include the distortion introduced by refraction as part of your characterization. Then you can use that characterization to calibrate algorithms such as EIS.

Virtually all EIS systems assume air is the shooting medium by default. This means that when using EIS on footage shot in water, the lens distortion matrix needs to be adjusted to account for the difference in relative refractive index between water and lens. Gyroflow added this option recently and there is a Labs setting (DIVE=1) to enable it for Hypersmooth on Hero 12/13.

Without this adjustment stabilization will still work, just not as well. GoPro Labs docs say that without it, Hypersmooth is about 70% effective compared to shooting in air.
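A rough illustration of why the medium matters, using Snell's law and assuming a simple flat port (the GoPro housing's actual optics are more complex): a ray the lens would accept at 60° in air corresponds to a much shallower angle out in the water, so the same sensor sees a narrower slice of the world.

```python
import math

N_AIR, N_WATER = 1.000, 1.333  # assumed refractive indices

def refracted_angle_deg(incident_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees."""
    theta1 = math.radians(incident_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))

# A ray accepted at 60 degrees in air maps, behind a flat port, to only
# about 40.5 degrees in water: the effective field of view narrows, and
# any distortion model calibrated in air no longer matches.
print(refracted_angle_deg(60, N_AIR, N_WATER))
```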

GoPro specifics

GoPro is well known for its implementation of EIS, which it calls Hypersmooth. Hypersmooth is pretty good as it goes, but there are a few things to know about it:

  • It produces “baked” videos; stabilization parameters cannot be changed in post and the cropped frame cannot be recovered
  • It is performed on the camera CPU in real time and thus has hard compute limits to contend with
  • It does not know what will happen in the future and cannot benefit from the additional information; stabilization in post can view the entire gyro track and perform smoothing over a larger time window, improving results
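The last point can be illustrated with a toy smoothing example (made-up gyro samples, not GoPro's or Gyroflow's actual algorithm): a real-time stabilizer can only average past samples, while an offline pass can center its window on each sample, tracking the true motion with less lag.

```python
def causal_avg(samples, window):
    """Real-time smoothing: each output uses only current and past samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def centered_avg(samples, half):
    """Offline smoothing: each output averages a window centered on sample i."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

gyro = [0, 0, 1, 4, 1, 0, 0]   # a single jolt in an otherwise steady track
print(causal_avg(gyro, 3))     # smoothed peak lags one sample behind the jolt
print(centered_avg(gyro, 1))   # smoothed peak stays aligned with the jolt
```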

For these reasons I do not use Hypersmooth when shooting underwater.

Instead I use Gyroflow with the following settings:

  • Lens profile → Advanced → Lens is under water
    • This will adjust the distortion matrix to account for the refractive index of water, at once improving stabilization results and correcting for water-induced distortion in addition to the lens distortion.
  • Export settings → Preserve other tracks
  • Stabilization params set to whatever I think looks good, usually the default

The reason for the second setting: GoPro MP4 files contain multiple data tracks. Example:

$ ffprobe -i <vid>
<snip>
  Duration: 00:12:48.77, start: 0.000000, bitrate: 117879 kb/s
  Stream #0:0[0x1](eng): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 4000x3000 [SAR 1:1 DAR 4:3], 117609 kb/s, 59.94 fps, 59.94 tbr, 60k tbn (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro H.265
        vendor_id       : [0][0][0][0]
        encoder         : GoPro H.265 encoder
        timecode        : 19:11:34:59
  Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro AAC  
        vendor_id       : [0][0][0][0]
        timecode        : 19:11:34:59
  Stream #0:2[0x3](eng): Data: none (tmcd / 0x64636D74) (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro TCD  
        timecode        : 19:11:34:59
  Stream #0:3[0x4](eng): Data: bin_data (gpmd / 0x646D7067), 56 kb/s (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro MET  

In the above example there are 4 tracks:

  • H265 video track
  • AAC audio track
  • TCD timecode track (empty)
  • A binary track containing GoPro-specific metadata. This is where gyro, GPS, and other metadata are stored.

By default, Gyroflow will drop the metadata tracks when producing a stabilized output. This metadata is valuable and other tools such as Telemetry Overlay rely on it to function, so you want to keep this track.

Denoising

I use Resolve as my video editor and find these settings to work as a decent starting point for most underwater footage:

Color correction

As I mentioned, I don't do color correction. It takes too long and Natural looks good enough to my eye.

If you want to, though, these are the things I have used:

underwater_videography_with_gopro.txt · Last modified: 2025/05/07 22:44 by qlyoung
Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 4.0 International