Deep Learning meets PyTorch (part-2)

Duy Anh Nguyen
7 min read · Apr 20, 2019

Tensor is the start

Tensors are the fundamental data structure in PyTorch. A tensor is an array: a data structure that stores a collection of numbers which are accessible individually using an index, and which can be indexed with multiple indices.

Here is how a list of three numbers can be represented in Python (the values below are just illustrative):
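    a = [1.0, 2.0, 1.0]   # a plain Python list holding three numbers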

It is very easy to access the first element of the list using the corresponding 0-based index:
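    a[0]   # -> 1.0, the first element, retrieved via its 0-based index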

However, to deal with vectors of numbers, such as the coordinates of a 2D line, plain Python lists are not a good fit, for several reasons:

  • Python will box each number in a full-fledged Python object with reference counting, etc.; this is not a problem if we only need to store a few of them, but allocating millions of such numbers gets very inefficient;
  • lists in Python are meant for sequential collections of objects: there are no operations defined for, say, efficiently taking the dot product of two vectors, or summing vectors together; also, Python lists have no way of optimizing the layout of their content in memory
  • the Python interpreter is slow compared to what can be achieved with optimized code written in a compiled, low-level language like C.

For these reasons, NumPy arrays, rather than lists, are the workhorse of data science. PyTorch tensors are similar to NumPy arrays: they provide efficient low-level implementations of numerical data structures and the related operations on them, wrapped in a convenient high-level API.

Let’s construct our first PyTorch tensor and see what it looks like:
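(A minimal sketch; the snippet below uses torch.ones, i.e. a constructor that produces a tensor filled with ones, which matches the description that follows.)

    import torch

    a = torch.ones(3)   # a one-dimensional tensor of size 3, filled with the value 1.0
    a[1]                # access an element via its 0-based index -> tensor(1.)
    a[2] = 2.0          # assign a new value to an element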

What is the above code doing?

Well, after importing the torch module, we called a function that creates a (one-dimensional) tensor of size 3 filled with the value 1.0. We can access an element using its 0-based index, or assign a new value to it.

Although it does not look different from a list of number objects, under the hood things are completely different. Python lists or tuples of numbers are collections of Python objects that are individually allocated in memory. PyTorch tensors or NumPy arrays, on the other hand, are views over (typically) contiguous memory blocks.

Python object (boxed) numeric values vs. Tensor (unboxed array) numeric values.

Tensors and Storages

We start getting hints about the implementation under the hood: values are allocated in contiguous chunks of memory, managed by torch.Storage instances. A storage is a one-dimensional array of numerical data, i.e. a contiguous block of memory containing numbers of a given type, such as float or short. A PyTorch Tensor is a view over such a Storage that is capable of indexing into that storage using an offset and per-dimension strides.

Multiple tensors can index the same storage, even if they index into the data differently. The underlying memory is only allocated once, however, so creating alternate tensor-views on the data can be done quickly, no matter the size of the data managed by the Storage instance.

Tensors are views over a Storage instance.

Let’s see how indexing into the storage works in practice with our 2D points. The storage for a given tensor is accessible using the .storage() method:
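(The points tensor itself is not shown in the text; here is an illustrative 3 × 2 tensor consistent with what follows.)

    points = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]])
    points.storage()   # prints the six values laid out one after another in a contiguous array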

Even though the tensor reports itself as having 3 rows and 2 columns, the storage under the hood is a contiguous array of size 6. In this sense, the tensor just knows how to translate a pair of indices into a location in the storage.

We will seldom, if ever, use storage instances directly, but understanding the relationship between a tensor and the underlying storage is very useful to understand the cost (or lack thereof) of certain operations later on. It’s a good mental model to keep in mind when we want to write effective PyTorch code.

Size, offset, strides

In order to index into a storage, tensors rely on a few pieces of information, which, together with their storage, unequivocally define them: size, storage offset and strides. The size (or shape, in NumPy parlance) is a tuple indicating how many elements across each dimension the tensor represents. The storage offset is the index in the storage corresponding to the first element in the tensor. Stride is the number of elements in the storage that need to be skipped over to obtain the next element along each dimension.

Relationship between a tensor’s offset, size and stride.

For instance, if we get the second point in the tensor by providing the corresponding index:
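    second_point = points[1]        # the second 2D point
    second_point.storage_offset()   # -> 2
    second_point.size()             # -> torch.Size([2])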

The resulting tensor has offset 2 in the storage, and its size is an instance of the Size class containing one element, since the tensor is one-dimensional. Important note: this is the same information as contained in the shape property of tensor objects. Last, the stride is a tuple indicating the number of elements in the storage that have to be skipped when the index is increased by 1 in each dimension. For instance, our points tensor has stride (2, 1):
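    points.stride()        # -> (2, 1)
    second_point.stride()  # -> (1,)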

Accessing an element i, j in a 2D tensor results in accessing the storage_offset + stride[0] * i + stride[1] * j element in the storage.
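For example, with the illustrative points tensor above (storage offset 0, stride (2, 1)), the element at index (2, 1) lives at storage position 0 + 2 * 2 + 1 * 1 = 5:

    points.storage()[5]   # -> 5.0, the same value as points[2, 1]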

This leads some operations, like transposing a tensor or extracting a sub-tensor, to be inexpensive, as they do not lead to memory reallocations; instead they consist in allocating a new tensor object with a different value for size, storage offset or strides.

For example:
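    second_point = points[1]   # extracting a sub-tensor copies no data
    second_point.size()        # -> torch.Size([2]): one fewer dimension than points
    second_point.storage().data_ptr() == points.storage().data_ptr()   # -> True: same storage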

The sub-tensor has one fewer dimension, as one would expect, while still indexing the same storage as the original tensor.

Let’s try with transposing now:
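    points_t = points.t()   # transpose the 2D tensor
    points_t.shape          # -> torch.Size([2, 3])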

We can easily verify that the two tensors share the same storage:
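One way to check this is to compare the storages' data pointers:

    points.storage().data_ptr() == points_t.storage().data_ptr()   # -> True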

and that they only differ in shape and stride:
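    points.shape       # -> torch.Size([3, 2])
    points_t.shape     # -> torch.Size([2, 3])
    points.stride()    # -> (2, 1)
    points_t.stride()  # -> (1, 2)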

The picture below shows how the stride changes when transposing a tensor.

Transpose operation applied to a tensor.

Numeric Types

Here are some common numeric types for tensors:

  • torch.float32 or torch.float: 32-bit floating point
  • torch.float64 or torch.double: 64-bit, double precision floating point
  • torch.float16 or torch.half: 16-bit, half precision floating point
  • torch.int8: signed 8-bit integers
  • torch.uint8: unsigned 8-bit integers
  • torch.int16 or torch.short: signed 16-bit integers
  • torch.int32 or torch.int: signed 32-bit integers
  • torch.int64 or torch.long: signed 64-bit integers
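The dtype can be specified when a tensor is constructed, and inspected or converted later; a quick sketch:

    double_points = torch.ones(10, 2, dtype=torch.double)            # 64-bit floats
    short_points = torch.tensor([[1, 2], [3, 4]], dtype=torch.short)
    short_points.dtype              # -> torch.int16
    double_points.to(torch.float)   # cast to 32-bit floats, returning a new tensor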

Moving tensors to the GPU

Every Torch tensor can be transferred to (one of) the GPU(s) in order to perform massively parallel, fast computations. All operations that will be performed on the tensor will be carried out using GPU-specific routines that come with PyTorch.

Here is how we can create a tensor on the GPU by specifying the corresponding device argument to the constructor:
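(Assuming a CUDA-capable GPU is available; the coordinates are the same illustrative points as before.)

    points_gpu = torch.tensor([[1.0, 4.0], [2.0, 1.0], [3.0, 5.0]], device='cuda')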

We can also copy a tensor created on the CPU onto the GPU using the to method:
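    points_gpu = points.to(device='cuda')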

Doing so returns a new tensor that has the same numerical data, but stored in the RAM of the GPU rather than in regular system RAM. Now that the data is stored locally on the GPU, we’ll start to see the speedups mentioned earlier when performing mathematical operations on the tensor.

In case our machine has more than one GPU, we can also decide which GPU to allocate the tensor on by passing a zero-based integer identifying the GPU on the machine, such as:
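    points_gpu = points.to(device='cuda:0')   # allocate on the first (index 0) GPU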

Summary

  • Tensors are the basic data structure in PyTorch; they are multidimensional arrays.
  • PyTorch has a comprehensive standard library for tensor creation, manipulation and mathematical operations.
  • All tensor operations in PyTorch can execute on the CPU as well as on the GPU, with no change in the code.
