
DEEP COMPRESSION

RayShaper works in two areas where AI adds value to visual signal processing systems.


Deep tools for classic coders


The first is improving standard algorithms for image and video compression, such as JPEG and HEVC. These standards define the decoder and leave the encoder up to application developers.

Take JPEG 2000, for example: it has a large set of parameters that let developers make implementation tradeoffs for applications like medical imagery or digital cinema. These parameters cover system features including color space, tile size, quantization, lossless coding, number of quality layers, compression ratio, computational complexity, and performance.
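As a rough illustration of that tradeoff space, here is a minimal sketch of an encoder profile. The field names and values are hypothetical, chosen only to mirror the parameters listed above; they are not JPEG 2000 syntax or any particular library's API.

```python
from dataclasses import dataclass

@dataclass
class EncoderProfile:
    """Hypothetical JPEG 2000-style tradeoff profile; names and values are illustrative only."""
    color_space: str      # "RGB" or "YCbCr"
    tile_size: tuple      # smaller tiles reduce memory at some cost in coding efficiency
    lossless: bool        # reversible path for archival use
    quality_layers: int   # progressive-quality decoding points in the codestream
    target_ratio: float   # overall compression ratio (1.0 means no rate target)

# Two profiles at opposite ends of the tradeoff space:
medical_archive = EncoderProfile("RGB",   (1024, 1024), True,  1, 1.0)
digital_cinema  = EncoderProfile("YCbCr", (2048, 2048), False, 5, 12.0)
```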

Developers rely on experience and a set of tips and tricks to tune these parameters and hard-code them into the system. As it happens, software-based, content-aware parameter control is one area where AI excels.

An artificial digital eye, for example, can be implemented as an algorithm embedded in the compression encoder that decides, among other things, which color space is best: RGB or YCbCr. If the image is largely one color, RGB is selected; otherwise, YCbCr. A toy version of this rule is sketched below, and it is just the tip of the iceberg for AI-powered computational vision.
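A minimal sketch of that decision, assuming a plain NumPy RGB array. Both the coarse color histogram and the 0.8 dominance threshold are illustrative choices, not a production rule.

```python
import numpy as np

def choose_color_space(rgb: np.ndarray, dominance_threshold: float = 0.8) -> str:
    """Toy content-aware decision: if one coarse color bin dominates the image,
    keep RGB; otherwise decorrelate with YCbCr. Binning and threshold are arbitrary."""
    coarse = (rgb >> 5).reshape(-1, 3).astype(np.int64)   # quantize 256 levels to 8 per channel
    bins = coarse[:, 0] * 64 + coarse[:, 1] * 8 + coarse[:, 2]
    counts = np.bincount(bins, minlength=512)
    dominance = counts.max() / counts.sum()               # share of the most common coarse color
    return "RGB" if dominance >= dominance_threshold else "YCbCr"

# A nearly uniform blue image stays in RGB; a busy natural image goes to YCbCr.
flat = np.full((64, 64, 3), (10, 20, 200), dtype=np.uint8)
busy = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(choose_color_space(flat), choose_color_space(busy))  # -> RGB YCbCr
```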


End-to-End All Deep Compression


The second is end-to-end deep compression. For 30 years, image/video compression has relied on hybrid coding: removing redundancies by motion compensation, followed by transform coding and quantization. The lineage started back in 1986 with H.261 for videoconferencing; the latest hybrid codec is the H.266 standard, finalized in July 2020.
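To make the hybrid recipe concrete, here is a toy sketch of one such step on an 8x8 block: subtract a motion-compensated prediction, transform the residual with a DCT, and quantize. The block size and quantization step are arbitrary, and no specific standard's transform or quantizer is implied.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis, the transform family at the heart of hybrid codecs."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def encode_block(current: np.ndarray, predicted: np.ndarray, qstep: float = 16.0):
    """One hybrid-coding step on an 8x8 block: subtract the motion-compensated
    prediction, transform the residual, then quantize the coefficients."""
    d = dct_matrix()
    residual = current.astype(np.float64) - predicted    # temporal redundancy removed
    coeffs = d @ residual @ d.T                          # 2-D DCT of the residual
    return np.round(coeffs / qstep).astype(np.int32)     # coarse quantization -> few nonzeros

# Toy example: the "predicted" block from the previous frame is nearly identical.
cur = np.random.randint(0, 256, (8, 8))
pred = cur + np.random.randint(-2, 3, (8, 8))
print(np.count_nonzero(encode_block(cur, pred)))         # most coefficients quantize to zero
```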

At RayShaper we believe it’s time for a change, for a transformation to deep compression. The industry recognizes the need for this change as reflected in JPEG AI standardization work.

We believe the real impetus for the transformation is this: hybrid coding schemes have evolved over the years to become complicated, and complication causes inefficiency. Deep learning algorithms look complex, yet they're not complicated. The beautiful, regular structure of an autoencoder, sketched below, illustrates this point. Plus, structural regularity promises execution efficiency and results in faster, better, and cheaper implementations.
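For illustration only, a minimal convolutional autoencoder in PyTorch: a stack of identical downsampling blocks, a compact latent representation, and a mirrored stack of upsampling blocks. This is a structural sketch, not RayShaper's codec; learned codecs add entropy modeling and rate-distortion training on top of this skeleton.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Regular, mirrored encoder/decoder structure; a sketch, not a real learned codec."""
    def __init__(self, channels: int = 64, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, latent, 5, stride=2, padding=2),   # latent: H/8 x W/8 x 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.rand(1, 3, 256, 256)   # dummy image batch
y = AutoEncoder()(x)
assert y.shape == x.shape        # same spatial size after encode/decode
```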

Figure: neural network architecture types (single-layer perceptron, multi-layer perceptron, radial basis network, recurrent neural network). Node legend: input unit, backfed input unit, hidden unit, probabilistic hidden unit, feedback-with-memory unit, output unit.
