Tensors: Meaning, Applications & Example

Multidimensional arrays used in machine learning and deep learning.

What are Tensors?

Tensors are multi-dimensional arrays that serve as the fundamental data structure in machine learning and deep learning. They represent data at varying dimensions: scalars (0D), vectors (1D), matrices (2D), and higher-dimensional arrays (3D or more). Tensors store and manipulate the data flowing through machine learning models, such as inputs, outputs, and parameters.

Types of Tensors

  1. Scalar (0D Tensor): A single number, such as 5 or -3.14.
  2. Vector (1D Tensor): An ordered list of numbers, such as [1, 2, 3, 4].
  3. Matrix (2D Tensor): A 2-dimensional array, such as a table or grid, represented as rows and columns.
  4. Higher-Dimensional Tensor: Tensors that represent data with three or more dimensions, such as a batch of images with width, height, and color channels (e.g., 3D tensors for images).
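The four kinds above can be illustrated with NumPy arrays (a sketch for illustration; deep-learning frameworks such as PyTorch and TensorFlow expose the same shape and dimensionality semantics):

```python
import numpy as np

scalar = np.array(5)                    # 0D tensor: a single number
vector = np.array([1, 2, 3, 4])         # 1D tensor: an ordered list of numbers
matrix = np.array([[1, 2], [3, 4]])     # 2D tensor: rows and columns
batch = np.zeros((32, 28, 28, 3))       # 4D tensor: a batch of 32 RGB images, 28x28 each

print(scalar.ndim, vector.ndim, matrix.ndim, batch.ndim)  # 0 1 2 4
print(batch.shape)                                        # (32, 28, 28, 3)
```

The number of dimensions (`ndim`) is what the "0D/1D/2D" labels refer to, and `shape` records the size along each dimension.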

Applications of Tensors

Tensors appear throughout machine learning: images are stored as height x width x channel arrays, text as sequences of embedding vectors, and a model's weights, gradients, and mini-batches of training data are all tensors. Frameworks such as TensorFlow and PyTorch build their core APIs around tensor operations, which can be accelerated on GPUs.

Example of Tensors

In a neural network, input data such as an image is represented as a tensor. For instance, a 28x28-pixel color image with RGB channels is stored as a 3D tensor of shape (28, 28, 3). Operations like convolution, activation functions, and backpropagation all manipulate tensors to learn the model parameters during training.
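A minimal sketch of that example: a 28x28 RGB image as a 3D tensor, with an element-wise ReLU activation applied to it (the values here are random placeholders, not data from a real model):

```python
import numpy as np

# A 28x28 RGB color image as a 3D tensor of shape (28, 28, 3)
image = np.random.rand(28, 28, 3)

def relu(x):
    # Element-wise ReLU activation: max(0, x) applied to every entry
    return np.maximum(0, x)

# Shift the values so some become negative, then apply the activation
activated = relu(image - 0.5)

print(activated.shape)          # shape is preserved: (28, 28, 3)
print((activated >= 0).all())   # True: ReLU outputs are non-negative
```

Note that the activation operates on every entry of the tensor at once while preserving its shape, which is typical of how layers transform tensors inside a network.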

