underwater videography with gopro

This will describe what I do to get decent results out of a GoPro when shooting underwater.

Light

Attenuation

Cameras are light gathering devices and perform better the more light is available, all else being equal. How much light is available in the water?

Water absorbs light fairly rapidly compared to air. Attenuation of light in a medium is described by the Beer-Lambert law; in short, surface light decreases as a function of depth per

I(z) = I₀ · e^(−kz)

Where I₀​ is the initial light intensity (at the surface), I(z) is the intensity at depth z, and k is the absorption coefficient that varies depending on the water's characteristics (such as clarity and particle content).

For air k is around 0.0001. For seawater k is more like 0.1. That gives us roughly the following:

Depth (m) Light Intensity (%)
0 100.00%
5 60.65%
10 36.79%
15 22.31%
20 13.53%
30 4.98%
40 1.83%
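
The table is just the formula evaluated at a few depths. Here is a minimal sketch in Python that reproduces it, assuming the same rough k = 0.1 figure for seawater:

import math

K_SEAWATER = 0.1  # rough absorption coefficient per metre, same figure as above

def intensity_fraction(depth_m, k=K_SEAWATER):
    """Fraction of surface light remaining at a given depth (Beer-Lambert)."""
    return math.exp(-k * depth_m)

for depth in (0, 5, 10, 15, 20, 30, 40):
    print(f"{depth:>3} m  {intensity_fraction(depth) * 100:6.2f} %")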

The upshot is that the diving environment is low light as far as cameras are concerned. This is why underwater photographers have kit that looks like this:

Notice that the camera itself is the smallest part of the whole setup; most of the bulk is dedicated to lighting to add back the light absorbed by the water.

Color

Water is blue because it preferentially absorbs wavelengths of light other than blue. The deeper you get the more light is absorbed and the bluer the scene.

The only way to truly fix this is to add the missing red wavelengths back in with external light.

The other thing you can do is adjust the white balance so the camera weights the red channel more heavily. Each sensor photosite sits behind a red, green or blue filter, and the values from neighboring photosites are combined to produce a single pixel color. Adjusting white balance biases that combination to include more of the red channel.
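
As an illustration only (this is not GoPro's actual processing pipeline), you can think of white balance as per-channel gains applied to the camera's RGB values; underwater you push red up relative to blue:

import numpy as np

# Hypothetical raw values for one pixel of a blue-shifted underwater scene (R, G, B)
raw = np.array([40.0, 90.0, 140.0])

# Illustrative white balance gains: boost red, keep green, pull blue down a touch
gains = np.array([2.2, 1.0, 0.8])

balanced = np.clip(raw * gains, 0, 255)
print(balanced)  # red channel lifted back toward neutral: roughly [88, 90, 112]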

Filters

Red filters over a lens are stupid. Putting red plastic over a lens does not magically restore red wavelengths back into the scene. The filter is red because it absorbs blue light. As previously stated cameras perform better with more light, and most of the light we have at depth is blue. All you are doing with a red filter is deleting the precious light you do have.

Even if filters were a good idea, the scene's spectrum varies with depth while a filter is calibrated for a single spectrum, so your filter will almost always be suboptimal.

You can tell when someone used a red filter because their video/picture looks like shit. Don't use filters.

Framerate

A full explanation of how a camera sensor works is out of scope for this discussion, but roughly speaking:

  • Larger sensors are less noisy all else being equal
  • Dynamic range scales with sensor size - especially in highlights

Consider that you need that lighting rig to get good pictures out of full size cameras with relatively large sensors. A full frame camera sensor is ~43mm on the diagonal. The Hero 11 has a 1/1.9" sensor, roughly 13mm diagonal. That is over 3 times smaller. Now consider that we are shooting video, which means exposure times are capped by the framerate - at most 1/30 or 1/60 of a second.

Slower framerate allows for longer exposure times, more light and better picture, all else being equal. My advice is to bias towards lower framerates in water to give the camera more breathing room.
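
A quick back-of-the-envelope comparison, assuming the shutter stays open for the whole frame interval (the best case for light gathering):

for fps in (30, 60):
    max_exposure_s = 1 / fps              # longest possible exposure per frame
    light_vs_30 = (1 / fps) / (1 / 30)    # light gathered relative to 30fps
    print(f"{fps} fps: max exposure 1/{fps} s, {light_vs_30:.2f}x the light of 30 fps")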

Diving is a slow sport, so 30fps is usually plenty. Personally I find it looks better / more cinematic as well. Fast, active subjects such as bait balls might benefit from 60fps. Basically start at 30 and if you think you might need it change to 60.

ISO

So far we have established that:

  • All diving footage is low light footage
  • Small sensor size is working against us

A useful weapon against these is ISO. In digital cameras it adjusts the gain applied to the signal read off the sensor. The tradeoff is that higher gain amplifies noise along with the signal - i.e. higher ISO produces more visible noise. Fortunately denoising is very good these days, so if you are willing to denoise in post, you can get away with relatively high ISO.

I usually limit to 1600.
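
A rough way to see why this matters: in a dim scene there simply aren't many photons per pixel, and photon shot noise means SNR scales with the square root of the photon count. A minimal simulation of just the shot-noise part:

import numpy as np

rng = np.random.default_rng(0)

def shot_noise_snr(mean_photons, n_pixels=100_000):
    """Shot-noise-limited SNR for a patch of uniform brightness."""
    photons = rng.poisson(mean_photons, n_pixels).astype(float)
    return photons.mean() / photons.std()

print(f"bright scene (10000 photons/px): SNR ~ {shot_noise_snr(10_000):.0f}")
print(f"dim scene      (100 photons/px): SNR ~ {shot_noise_snr(100):.0f}")
# ISO gain applied afterwards scales signal and noise equally, so it brightens
# the dim image but does not make it any cleaner - hence denoising in post.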

Settings

GoPro publishes a firmware called Labs that unlocks a lot of power user software toggles, many of which are useful for underwater videography. Whoever is responsible for Labs does an amazing job.

Basic

  • Natural color
    • Flat or a custom log curve in theory grant you more flexibility for color correction in post. However, after editing underwater LOG footage I realized that I do not care and that natural looks completely fine to my eye, so I don't use it anymore.
  • Wide lens
    • “Wide” is the native lens. “Narrow” crops it; “Hyperview” and “Superview” convert 8:7 and 4:3 video respectively to 16:9 using an in-camera distortion mapping. I prefer an output with minimal in-camera corrections so I use “Wide”. If you want 16:9 you should select that as an export option in Gyroflow instead of in the camera; that way Gyroflow benefits from the additional buffer space to perform corrections and then crops at the end.
    • This also helps Gyroflow; while it is capable of stabilizing Hyper/SuperView footage, that feature uses a reverse-engineered distortion matrix to undo the distortion. There's no point in using Hyperview, undoing it, then redoing it in export.
  • 5.3k30 8:7
    • I personally find this to be the sweet spot. 5.3k 8:7 is the native sensor resolution, every single pixel. By keeping the entire frame, post processed stabilization has much more “buffer” space to crop into. In this resolution the maximum framerate is 30 - but as mentioned I almost always shoot 30fps in water, so this is fine.
  • 4k60 8:7
    • If I want to shoot 60fps for some reason then I drop the resolution to 4k in order to keep the 8:7 aspect ratio. If you want the absolute highest resolution, you can drop to 16:9 which unlocks 5.3k60. Personally I prefer working in a consistent aspect ratio and think 4k subsampling looks good enough especially at high framerates.
  • Hypersmooth off
    • I prefer to stabilize in post with Gyroflow. I think the result is better and it grants far more control over the stabilization parameters. See the section on stabilization for additional reasons.
  • Bitrate high
    • Of course
  • 10-bit on
    • I paid good money for those bits
  • Shutter auto
    • The camera knows best
  • ISO min 100
    • I don't understand why you would change this.
  • ISO max 1600
    • This is a tough one to choose and varies based on the scene. 1600 seems to work well for most environments. I notice that in 30fps the camera will usually keep it around 800. In a cave or wreck with a torch beam it behaves similarly.
  • Sharpness low
    • I sharpen in post as desired
  • EV Comp 0
    • I don't know enough to change this

Labs

I like to set these with QRControl.

  • NR01=1
    • GoPro's built-in NR is too aggressive and softens the image too much. The underwater environment is inherently noisy, especially in the ocean where large portions of the frame are a very uniform blue that makes noise very visible.
    • Noise reduction in post has much better results. See denoising for details.
  • BITR=120
    • If you have a fast sdcard you can increase the maximum bitrate with this setting. However, it does cause some instability and lost footage on my specific camera so I think this is probably a wash.

On Hero 12/13, there is DIVE=1 which enables the water distortion corrections I mention in Stabilization for Hypersmooth. As mentioned I don't use Hypersmooth, but if you do, you should probably enable this.

On 13 there is also WBDV=1, described as follows:

White Balance DiVe improvements. Rather than WARM for improving diving white balance, which effects WB the same at all depths, WBDV is more automatic – as the scene get more blue, the more the red channel is gain up.

I have a Hero 11 and haven't used either of these settings so I can't speak to how well they work.

Diving Praxis

Case

I use the official GoPro case. If you dive deeper than 50m you need to use something else. I don't have any recommendations.

I find it works best with a 3/4" or 1" bolt snap attached like this:

During shooting I just hold the camera itself. I have tried handles, including the official GoPro dive handle. I don't notice any improvement in footage and it is bulky.

Predive

Nothing sucks more than turning your GoPro on in 30m of water only to see that it is stuck in the wrong video mode. You cannot change this in the water (actually you can but more on that later).

Before every dive:

  • Check and set the time/date
    • You can do this with QRControl or Quik
  • Ensure the video and photo modes are in the preset you want

During dive

There is a way to change settings while in the water. GoPros with the labs firmware can read QR codes. If you want, you can print out QR codes that do specific things and tape them into your wetnotes like this:

Then during the dive you flip to the right QR code and point the camera at it to change the setting.
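
If you want to generate the codes yourself rather than grabbing them from GoPro's web tools, the Python qrcode package will do it. The command strings below are just the Labs settings strings mentioned earlier on this page; check the exact syntax for whatever you want to toggle against the Labs documentation:

import qrcode  # pip install "qrcode[pil]"

# Example Labs command strings to print and tape into wetnotes. These are the
# settings strings used elsewhere on this page; verify the syntax for your
# camera/firmware against the GoPro Labs docs.
commands = {
    "nr_off": "NR01=1",
    "bitrate_120": "BITR=120",
}

for name, cmd in commands.items():
    qrcode.make(cmd).save(f"{name}.png")
    print(f"wrote {name}.png containing '{cmd}'")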

You do carry wetnotes, right?

Post

This is all biased towards Resolve since that's what I use.

Stabilization

Overview

Image stabilization is used to turn shaky handheld camera footage into nice smooth footage. Optical image stabilization, typically found in full size cameras, does this with motors that physically move the lens to cancel out shake and vibration.

Electronic image stabilization, which is used in small cameras with fixed lenses and in post processing workflows, stabilizes footage by calculating the camera motion and then applying transformations and warping to the frame to cancel out that motion. In the case of the GoPro the source of camera motion data is the gyroscope. Hypersmooth reads the gyroscope directly and postprocessing tools use the gyro data that is encoded alongside image data in the MP4 files that the GoPro records.

This result is usually cropped to eliminate the crazy warping borders. In EIS you sacrifice part of the frame for stability. The tradeoff is usually worth it.
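
Stripped down to a sketch, the idea is: take the camera rotation measured for a frame, apply the opposite rotation to the image, then crop so the edges exposed by the rotation never show. Using OpenCV purely as an illustration (this is not what Hypersmooth or Gyroflow literally do; real implementations handle full 3D rotation, rolling shutter and lens distortion):

import cv2
import numpy as np

# Stand-in for one decoded video frame, with a horizon line to watch
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
cv2.line(frame, (0, 540), (1919, 540), (255, 255, 255), 3)
h, w = frame.shape[:2]

measured_roll_deg = 2.5  # camera roll for this frame, taken from the gyro track

# Counter-rotate the frame about its centre to cancel the measured roll
M = cv2.getRotationMatrix2D((w / 2, h / 2), -measured_roll_deg, 1.0)
stabilized = cv2.warpAffine(frame, M, (w, h))

# Crop a fixed margin so the black wedges the rotation introduces never show
margin = int(0.05 * min(w, h))
stabilized = stabilized[margin:h - margin, margin:w - margin]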

Distortion & Refractive Index

In order to know what transformations to apply, the EIS algorithm needs to know how a change in camera position is reflected in the image. This requires EIS to know the optical properties of the lens, which is why post-processed stabilization works best when calibrated for the specific camera model / lens being used.

One of the major optical properties of a lens is its relative refractive index. Light travels at different speeds in different media. When light traveling in one medium enters a different medium, it bends (refracts) and the degree of this bend is a function of the difference in the speed of light between the first and second medium.

If you fix the first medium, e.g. by assuming it is air, then when characterizing the distortion introduced by the second medium (e.g. a lens) you can include the distortion introduced by refraction as part of your characterization. Then you can use that characterization to calibrate algorithms such as EIS.
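
To make the refraction bit concrete, Snell's law (n₁ sin θ₁ = n₂ sin θ₂) gives the bend at the water boundary. A quick sketch of how much a flat port narrows the field of view underwater; the 80° in-air figure is just an assumed number for illustration:

import math

N_AIR, N_WATER = 1.000, 1.333

def refracted_angle_deg(angle_deg, n_from, n_to):
    """Snell's law: n_from * sin(theta_from) = n_to * sin(theta_to)."""
    s = (n_from / n_to) * math.sin(math.radians(angle_deg))
    return math.degrees(math.asin(s))

half_fov_air = 40.0  # half of an assumed 80-degree in-air field of view
half_fov_water = refracted_angle_deg(half_fov_air, N_AIR, N_WATER)

print(f"in air:   {2 * half_fov_air:.0f} deg field of view")
print(f"in water: {2 * half_fov_water:.0f} deg effective field of view behind a flat port")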

Virtually all EIS systems assume air is the shooting medium by default. This means that when using EIS on footage shot in water, the lens distortion matrix needs to be adjusted to account for the difference in relative refractive index between water and lens. Gyroflow added this option recently and there is a Labs setting (DIVE=1) to enable it for Hypersmooth on Hero 12/13.

Without this adjustment stabilization will still work, just not as well. GoPro Labs docs say that without it, Hypersmooth is about 70% effective compared to shooting in air.

GoPro specifics

GoPro is well known for its implementation of EIS, which it calls Hypersmooth. Hypersmooth is pretty good as far as it goes, but there are a few things to know about it:

  • It produces “baked” videos; stabilization parameters cannot be changed in post and the cropped frame cannot be recovered
  • It is performed on the camera CPU in real time and thus has hard compute limits to contend with
  • It does not know what will happen in the future and cannot benefit from the additional information; stabilization in post can view the entire gyro track and perform smoothing over a larger time window, improving results

For these reasons I do not use Hypersmooth when shooting underwater.

Instead I use Gyroflow with the following settings:

  • Lens profile → Advanced → Lens is under water
    • This will adjust the distortion matrix to account for the refractive index of water, at once improving stabilization results and correcting for water-induced distortion in addition to the lens distortion.
  • Export settings → Preserve other tracks
  • Stabilization params set to whatever I think looks good, usually the default

The reason for the second setting: GoPro MP4 files contain multiple data tracks. Example:

$ ffprobe -i <vid>
<snip>
  Duration: 00:12:48.77, start: 0.000000, bitrate: 117879 kb/s
  Stream #0:0[0x1](eng): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 4000x3000 [SAR 1:1 DAR 4:3], 117609 kb/s, 59.94 fps, 59.94 tbr, 60k tbn (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro H.265
        vendor_id       : [0][0][0][0]
        encoder         : GoPro H.265 encoder
        timecode        : 19:11:34:59
  Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro AAC  
        vendor_id       : [0][0][0][0]
        timecode        : 19:11:34:59
  Stream #0:2[0x3](eng): Data: none (tmcd / 0x64636D74) (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro TCD  
        timecode        : 19:11:34:59
  Stream #0:3[0x4](eng): Data: bin_data (gpmd / 0x646D7067), 56 kb/s (default)
      Metadata:
        creation_time   : 2025-04-30T23:12:44.000000Z
        handler_name    : GoPro MET  

In the above example there are 4 tracks:

  • H265 video track
  • AAC audio track
  • TCD timecode track (empty)
  • A binary track containing gopro-specific metadata. This is where gyro, GPS and other metadata is stored.

By default, Gyroflow will drop the metadata tracks when producing a stabilized output. This metadata is valuable and other tools such as Telemetry Overlay rely on it to function, so you want to keep this track.
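
A quick way to sanity-check that an exported file still carries the metadata track is to look for the gpmd stream with ffprobe. A small helper, assuming ffprobe is on your PATH (the filename is a placeholder):

import json
import subprocess

def has_gpmd_track(path):
    """Return True if the MP4 at `path` still contains a GoPro gpmd data track."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    streams = json.loads(out).get("streams", [])
    return any(s.get("codec_tag_string") == "gpmd" for s in streams)

print(has_gpmd_track("stabilized_output.mp4"))  # placeholder filename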

Denoising

I use Resolve as my video editor and find these settings to work as a decent starting point for most underwater footage:

Color correction

As I mentioned, I don't do color correction. It takes too long and Natural looks good enough to my eye.

If you want to, though, these are the things I have used:
