Deep Learning meets PyTorch (part-2)

It starts with a tensor

Tensors are the fundamental data structure in PyTorch. A tensor is a multidimensional array: a data structure storing a collection of numbers that are individually accessible by an index, and that can be indexed with multiple indices.
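As a minimal illustration (the tensor shape and values below are arbitrary), a tensor can be created and then indexed along each of its dimensions:

```python
import torch

# A 2x3 tensor filled with ones
points = torch.ones(2, 3)

# Individual elements are accessible with one index per dimension
points[0, 1] = 4.0
print(points[0, 1])  # tensor(4.)

# A single index returns a whole row (a lower-dimensional view)
print(points[0])     # tensor([1., 4., 1.])
```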

  • Python lists are meant for sequential collections of objects: there are no operations defined for, say, efficiently taking the dot product of two vectors or summing two vectors elementwise, and lists have no way of optimizing the layout of their contents in memory (see the sketch below)
  • the Python interpreter is slow compared to what can be achieved with optimized code written in a compiled, low-level language like C
Python object (boxed) numeric values vs. tensor (unboxed array) numeric values.
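A small sketch contrasting the two approaches; the vectors are illustrative and any recent PyTorch version should behave this way:

```python
import torch

a_list = [1.0, 2.0, 3.0]
b_list = [4.0, 5.0, 6.0]

# With plain Python lists, elementwise work needs an explicit loop
# over boxed float objects
summed_list = [a + b for a, b in zip(a_list, b_list)]

# With tensors, the same operations are single vectorized calls over
# an unboxed, contiguous block of memory, executed in compiled code
a = torch.tensor(a_list)
b = torch.tensor(b_list)
print(a + b)            # tensor([5., 7., 9.])
print(torch.dot(a, b))  # tensor(32.)
```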

Tensors and Storages

We start getting hints about the implementation under the hood: values are allocated in contiguous chunks of memory, managed by torch.Storage instances. A storage is a one-dimensional array of numerical data, i.e. a contiguous block of memory containing numbers of a given type, such as float or short. A PyTorch Tensor is a view over such a Storage that is capable of indexing into that storage using an offset and per-dimension strides.

Tensors are views over a Storage instance.
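A short sketch of that relationship; it assumes a PyTorch version in which the .storage() accessor is still exposed (newer releases may point you to untyped_storage() instead):

```python
import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

# The underlying storage is a flat, one-dimensional array of six floats,
# regardless of the tensor's 3x2 shape
print(points.storage())

# Indexing produces a new tensor that is a view over the same storage:
# modifying the view is visible in the original tensor
second_point = points[1]
second_point[0] = 10.0
print(points[1, 0])  # tensor(10.)
```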

Size, offset, strides

In order to index into a storage, tensors rely on a few pieces of information, which, together with their storage, unequivocally define them: size, storage offset and strides. The size (or shape, in NumPy parlance) is a tuple indicating how many elements across each dimension the tensor represents. The storage offset is the index in the storage corresponding to the first element in the tensor. Stride is the number of elements in the storage that need to be skipped over to obtain the next element along each dimension.
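A sketch of inspecting these three pieces of metadata on a small, made-up tensor:

```python
import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

print(points.size())            # torch.Size([3, 2])
print(points.storage_offset())  # 0
print(points.stride())          # (2, 1): skip 2 elements for the next row, 1 for the next column

# A sub-tensor shares the storage but has its own offset and size
second_point = points[1]
print(second_point.storage_offset())  # 2
print(second_point.size())            # torch.Size([2])

# Transposing swaps the strides without copying any data
points_t = points.t()
print(points_t.stride())        # (1, 2)
```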

Relationship between a tensor’s offset, size and stride.
Transpose operation applied to a tensor.
The dtype attribute of a tensor specifies the type of numbers it holds; possible values include:

  • torch.float32 or torch.float: 32-bit, single precision floating point
  • torch.float64 or torch.double: 64-bit, double precision floating point
  • torch.float16 or torch.half: 16-bit, half precision floating point
  • torch.int8: signed 8-bit integers
  • torch.uint8: unsigned 8-bit integers
  • torch.int16 or torch.short: signed 16-bit integers
  • torch.int32 or torch.int: signed 32-bit integers
  • torch.int64 or torch.long: signed 64-bit integers
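A brief sketch of specifying and converting dtypes; shapes and values are illustrative:

```python
import torch

# The dtype can be specified at construction time
double_points = torch.ones(10, 2, dtype=torch.double)
short_points = torch.tensor([[1, 2], [3, 4]], dtype=torch.short)
print(double_points.dtype)  # torch.float64
print(short_points.dtype)   # torch.int16

# Existing tensors can be cast to another dtype
converted = torch.zeros(10, 2).to(torch.double)
also_converted = torch.ones(10, 2).short()
```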

Moving tensors to the GPU

Every PyTorch tensor can be transferred to (one of) the GPU(s) in order to perform massively parallel, fast computations. All operations performed on the tensor are then carried out using GPU-specific routines that come with PyTorch.
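A minimal sketch of moving a tensor to the GPU and back, guarded by a check that a CUDA device is actually available:

```python
import torch

points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

if torch.cuda.is_available():
    # Copy the tensor to GPU memory (points.cuda() is equivalent)
    points_gpu = points.to(device='cuda')

    # This multiplication now runs on the GPU
    points_gpu = 2 * points_gpu

    # Copy the result back to the CPU when needed
    points_cpu = points_gpu.to(device='cpu')
```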

Summary

  • Tensors, the basic data structure in PyTorch, are multidimensional arrays.
  • PyTorch has a comprehensive standard library for tensor creation, manipulation and mathematical operations.
  • All tensor operations in PyTorch can execute on the CPU as well as on the GPU, with no change in the code.
