Tensors
Multidimensional arrays used in machine learning and deep learning.
What are Tensors?
Tensors are multi-dimensional arrays that serve as a fundamental data structure in machine learning and deep learning. They represent data at varying dimensions: scalars (0D), vectors (1D), matrices (2D), and higher-dimensional arrays (3D or more). Tensors are used to store and manipulate the data in machine learning models, such as inputs, outputs, and parameters.
Types of Tensors
- Scalar (0D Tensor): A single number, such as 5 or -3.14.
- Vector (1D Tensor): An ordered list of numbers, such as [1, 2, 3, 4].
- Matrix (2D Tensor): A 2-dimensional array, such as a table or grid, represented as rows and columns.
- Higher-Dimensional Tensor: Tensors with three or more dimensions. For example, a single color image is a 3D tensor (height, width, color channels), and a batch of such images adds a fourth dimension (see the sketch after this list).
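As a quick illustration, here is a minimal sketch of each dimensionality. It uses PyTorch, though the article assumes no particular library; NumPy or TensorFlow would work similarly:

```python
import torch

# Scalar (0D tensor): a single number
scalar = torch.tensor(5.0)
print(scalar.ndim, scalar.shape)   # 0, torch.Size([])

# Vector (1D tensor): an ordered list of numbers
vector = torch.tensor([1, 2, 3, 4])
print(vector.ndim, vector.shape)   # 1, torch.Size([4])

# Matrix (2D tensor): rows and columns
matrix = torch.tensor([[1, 2], [3, 4], [5, 6]])
print(matrix.ndim, matrix.shape)   # 2, torch.Size([3, 2])

# Higher-dimensional tensor: e.g., one RGB image (height, width, channels)
image = torch.rand(28, 28, 3)
print(image.ndim, image.shape)     # 3, torch.Size([28, 28, 3])
```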
Applications of Tensors
- Neural Networks: Tensors are used to represent the inputs, weights, and outputs in neural networks.
- Data Manipulation: Tensors allow for efficient mathematical operations, such as matrix multiplication and element-wise addition (illustrated after this list), which are essential in training machine learning models.
- Computer Vision and NLP: Tensors represent images, text, and other data types used for tasks like image classification and language modeling.
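A brief sketch of the two operations mentioned above, again using PyTorch as an assumed library:

```python
import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.tensor([[5.0, 6.0], [7.0, 8.0]])

# Element-wise addition: corresponding entries are summed
print(a + b)   # tensor([[ 6.,  8.], [10., 12.]])

# Matrix multiplication: (2x2) @ (2x2) -> (2x2)
print(a @ b)   # tensor([[19., 22.], [43., 50.]])
```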
Example of Tensors
In a neural network, the input data (e.g., an image) is represented as a tensor. For instance, a 28x28-pixel color image with RGB channels would be stored as a 3D tensor of shape (28, 28, 3). Operations like convolution, activation functions, and backpropagation all manipulate tensors to learn the model parameters during training.
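A minimal sketch of this example in PyTorch; the layer sizes here are illustrative choices, not part of the original text:

```python
import torch
import torch.nn as nn

# A single 28x28 RGB image stored as a (height, width, channels) tensor
image = torch.rand(28, 28, 3)
print(image.shape)        # torch.Size([28, 28, 3])

# PyTorch convolutions expect (batch, channels, height, width),
# so rearrange the axes and add a batch dimension of 1
batch = image.permute(2, 0, 1).unsqueeze(0)
print(batch.shape)        # torch.Size([1, 3, 28, 28])

# Apply a convolution: 3 input channels -> 8 output feature maps
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
features = conv(batch)
print(features.shape)     # torch.Size([1, 8, 28, 28])
```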
Did you like the Tensors gist?
Learn about 250+ need-to-know artificial intelligence terms in the AI Dictionary.