ASU Learning Sparks

Image Pixels: How Computers Understand Images

Written by Kimberlee Swisher | Jun 1, 2023 5:15:40 PM

All images are composed of pixels, which determine an image’s resolution and color. Each pixel’s color is defined by the light it emits and how bright that light is. Computers use pixel locations to identify color patterns, which allows computers and machine learning algorithms to perform pattern matching and classify images.

What’s in an image? If we zoom in on any image to find out what it’s made of, we start to see squares of color. Each square has just one color of its own.

An image is a grid of numbers, known as pixels. The more pixels we have in an image, the more detail we are able to see.

We have all tried making a small image larger, only to see it become blurry, simply because there aren’t enough pixels to create the detail needed at the larger size. 

In short, image resolution means the level of detail an image holds - the higher the resolution, the higher the level of detail. There are a number of ways of measuring resolution, but in digital imaging it is often tied to pixel count.
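To make that concrete, here’s a minimal sketch in Python of pixel count as a measure of resolution; the 1920 × 1080 dimensions below are just an example, not something from this article.

```python
# Pixel count as a rough measure of resolution.
# The dimensions are example values (a common "Full HD" screen size).
width, height = 1920, 1080

total_pixels = width * height
print(f"{width} x {height} = {total_pixels:,} pixels")             # 2,073,600 pixels
print(f"That is about {total_pixels / 1_000_000:.1f} megapixels")  # about 2.1 megapixels
```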

The pixels are blocks of light on a computer screen. If you could look very closely, you’d see that each one is actually 3 tiny lights, emitting red, green, and blue light. Perhaps you’ve heard of RGB color before - this is what that means: Red, Green, and Blue light. If you’ve ever mixed red and blue paint to make purple, you know that paints mixed together can make many different colors based on the amounts and colors you pour in. Digital color works similarly, except instead of mixing paint, we mix colors of light. Our eyes take in the individual colors of light, but our brain interprets them as variations of millions of colors in a sophisticated process of perception. (*Rabbit hole alert: your brain actually does this with everything around you - the color receptors in your eyes are called cones, and there are only three kinds, one each for red, green, and blue wavelengths of light. The brain interprets how strongly each kind of cone is firing and mixes the colors there. Way cool!)

Software can tell the computer how bright to make each light, and with 256 brightness levels on each of 3 lights, we can create millions of different colors - over 16 million, actually.
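Here’s that arithmetic as a small sketch; the color triples are illustrative values, not taken from any particular image.

```python
# Each of the 3 lights (red, green, blue) has 256 brightness levels: 0 through 255.
levels = 256
print(levels ** 3)  # 16777216 - over 16 million possible colors

# A single color is just three brightness numbers (example values):
red    = (255, 0, 0)      # red light fully on, green and blue off
purple = (128, 0, 128)    # red and blue at half brightness - like mixing paint
white  = (255, 255, 255)  # all three lights at full brightness
```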

So back to our grid of numbers, known as pixels. Each of these is a number because we can define each color by the amount of light each tiny diode emits - in other words, how bright the light is. Higher numbers are brighter, lower numbers are darker.
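As a sketch of what that grid might look like in code, here is a tiny 2 × 2 image stored as rows of red/green/blue brightness values; the numbers are made up for illustration.

```python
# A tiny 2x2 image as a grid of (red, green, blue) brightness values.
# All values here are invented for illustration.
image = [
    [(255, 0, 0), (0, 255, 0)],   # top row:    a red pixel, a green pixel
    [(0, 0, 255), (40, 40, 40)],  # bottom row: a blue pixel, a dark gray pixel
]

# Higher numbers are brighter: (40, 40, 40) is a dim gray; (255, 255, 255) would be white.
```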

In addition to one specific color, each pixel has its own location. We use two numbers to represent the location - a left/right or horizontal number, and a top/bottom or vertical number. So we can count across and down an image and find out exactly what color is at exactly what spot.
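Here’s a minimal sketch of looking up a pixel’s color by its location, using the Pillow imaging library; the file name and coordinates are hypothetical examples.

```python
from PIL import Image  # Pillow: pip install Pillow

# "face.png" is a hypothetical example file.
img = Image.open("face.png").convert("RGB")

# Locations count from the top-left corner: (horizontal, vertical).
x, y = 120, 85  # example coordinates
r, g, b = img.getpixel((x, y))
print(f"The pixel at ({x}, {y}) has red={r}, green={g}, blue={b}")
```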

This is important because we can use this knowledge to find out what color one specific pixel is - say, the blue of the eye on this face - but we can also find out what colors surround those pixels: the white of the eye and the darker colors around it.

So that tells us that something special is happening with this blue area - it’s unique, and maybe we should pay attention to it. This is a color pattern, one that we can see with our eyes, and one of many patterns that computers can learn to recognize. This is how basic image classification algorithms work: they look at patterns of pixel colors within an image and search for similar patterns in other images, a process called “pattern matching.” Machine learning algorithms are even better at this because they train a neural network to recognize the patterns of numbers within the image.
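As a rough sketch of the idea (not the method any particular library uses), here is a naive pattern match: slide a small patch of pixel numbers across an image and score how closely each spot matches. Real classifiers and neural networks are far more sophisticated, but they build on this same insight that patterns live in the numbers.

```python
import numpy as np

# Example data: a grayscale "image" of brightness numbers and a small patch
# to search for. Real images would be loaded from files; these are made up.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))
patch = image[20:25, 30:35]  # pretend this 5x5 patch is the pattern we want

best_score, best_pos = None, None
ph, pw = patch.shape
for y in range(image.shape[0] - ph + 1):
    for x in range(image.shape[1] - pw + 1):
        window = image[y:y + ph, x:x + pw]
        # Sum of absolute differences: lower score = closer match.
        score = np.abs(window - patch).sum()
        if best_score is None or score < best_score:
            best_score, best_pos = score, (x, y)

print(f"Best match at (x, y) = {best_pos} with difference score {best_score}")
```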

Once a machine can match a pattern, the applications are endless.