So far we have worked with 1D arrays (vectors). NumPy supports higher-dimensional arrays; here we focus on 2D arrays (matrices) — ordered values arranged in rows and columns.
```python
import numpy as np

list_of_lists = [[1, 2, 3, 4],
                 [5, 6, 7, 8],
                 [9, 10, 11, 12]]
array1 = np.array(list_of_lists)
print(array1)
```

```
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
```
Elements are accessed as `array[row_index, col_index]`; NumPy uses 0-based indexing.

```python
print(array1[1, 2])   # second row, third column -> 7
print(array1[2, -1])  # last element of third row -> 12
```

```
7
12
```
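As a quick sanity check on the indexing rules above, the following sketch (redefining `array1` so it runs standalone) shows that the comma form `array1[1, 2]` selects the same element as chained indexing `array1[1][2]`, and that negative indices count from the end of each axis:

```python
import numpy as np

# Same matrix as above, redefined so the snippet is self-contained.
array1 = np.array([[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 10, 11, 12]])

# The comma form and the chained form pick the same element, but the
# chained form first materializes the intermediate row array1[1].
assert array1[1, 2] == array1[1][2] == 7

# Negative indices count backwards from the end of each axis.
assert array1[-1, -1] == 12
print("indexing checks passed")
```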
```python
print(array1[:2, :3])  # first 2 rows and first 3 columns
```

```
[[1 2 3]
 [5 6 7]]
```

```python
print(array1[1:, :])  # rows from index 1 to the end, all columns
print(array1[1, :])   # entire second row -> returns a 1D array
print(array1[:, 3])   # entire fourth column -> returns a 1D array
```

```
[[ 5  6  7  8]
 [ 9 10 11 12]]
[5 6 7 8]
[ 4  8 12]
```

Slicing a single row or column returns a 1D array (shape `(n,)`). If you need a 2D column vector, reshape the result or use `array1[:, 3:4]`, which yields shape `(rows, 1)`.
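The shape difference between an integer index and a length-1 slice can be verified directly; this small sketch (with `array1` redefined so it runs on its own) prints both shapes:

```python
import numpy as np

array1 = np.array([[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 10, 11, 12]])

col_1d = array1[:, 3]    # integer index drops the axis -> shape (3,)
col_2d = array1[:, 3:4]  # length-1 slice keeps the axis -> shape (3, 1)
print(col_1d.shape, col_2d.shape)
```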
`reshape()` and `-1` inference. Every array has a `shape` attribute:

```python
print(array1.shape)  # (3, 4)

array_1d = np.array([5, 7, 1, 8, 4])
print(array_1d.shape)  # (5,)
```

```
(3, 4)
(5,)
```
`reshape()` reinterprets the same data with new dimensions (the total number of elements must match). Order is row-major by default:

```python
array2 = array1.reshape(2, 6)
print('Shape =', array2.shape)
print(array2)
```

```
Shape = (2, 6)
[[ 1  2  3  4  5  6]
 [ 7  8  9 10 11 12]]
```
Pass `-1` to let NumPy infer a dimension:

```python
array3 = array1.reshape(4, -1)  # -1 inferred as 3
print('Shape =', array3.shape)
print(array3)
```

```
Shape = (4, 3)
[[ 1  2  3]
 [ 4  5  6]
 [ 7  8  9]
 [10 11 12]]
```
```python
array_1d = np.array([3, 1, 4])
row_array = array_1d.reshape(1, 3)  # shape (1, 3)
col_array = array_1d.reshape(3, 1)  # shape (3, 1)
```

Array-creation functions take a `shape` parameter:

```python
ones_3x5 = np.ones(shape=(3, 5))
print(ones_3x5)
```

```
[[1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]]
```
```python
rand_array = np.random.uniform(low=0, high=1, size=(2, 5))
rand_array = np.round(rand_array, 2)
print(rand_array)
```

```
[[0.79 0.59 0.89 0.   0.09]
 [0.71 0.86 0.05 0.73 0.06]]
```
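If you want random draws to be reproducible (e.g. when following along with a tutorial), seed the generator. A sketch using NumPy's newer `Generator` API, `np.random.default_rng`:

```python
import numpy as np

# Seeding makes the draw reproducible: the same seed yields the
# same sequence of random numbers.
rng = np.random.default_rng(seed=42)
r1 = rng.uniform(low=0, high=1, size=(2, 5))

rng = np.random.default_rng(seed=42)
r2 = rng.uniform(low=0, high=1, size=(2, 5))

print(np.array_equal(r1, r2))  # same seed -> identical arrays
```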
Arithmetic operators apply elementwise to arrays of the same shape:

```python
x1 = np.array([[7, 0, 8],
               [5, 6, 2]])
x2 = np.array([[2, 5, 3],
               [7, 1, 4]])
print('x1 + x2 =\n', x1 + x2)
print('x1 * x2 =\n', x1 * x2)   # elementwise multiplication
print('x1 ** x2 =\n', x1 ** x2)
```

```
x1 + x2 =
 [[ 9  5 11]
 [12  7  6]]
x1 * x2 =
 [[14  0 24]
 [35  6  8]]
x1 ** x2 =
 [[   49     0   512]
 [78125     6    16]]
```
Example: adding a column vector to a matrix, and multiplying by a row vector, via broadcasting.

```python
y1 = np.array([[3, 1, 0],
               [5, 2, 7]])     # shape (2, 3)
y2 = np.array([[10], [20]])    # shape (2, 1)
y3 = np.array([[10, 20, 30]])  # shape (1, 3)

# Broadcasting a column over rows: adds 10 to every element of the
# first row and 20 to every element of the second row.
print(y1 + y2)

# Broadcasting a row over columns: multiplies each row elementwise
# by [10, 20, 30].
print(y1 * y3)
```

```
[[13 11 10]
 [25 22 27]]
[[ 30  20   0]
 [ 50  40 210]]
```
Broadcasting rules (brief): shapes are compared from the trailing dimension backwards; two dimensions are compatible when they are equal or when one of them is 1, and a missing leading dimension is treated as 1. Broadcasting avoids explicit loops and is memory-efficient.
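The rules above can be checked directly with `np.ones` arrays of various shapes; incompatible shapes raise a `ValueError`:

```python
import numpy as np

a = np.ones((2, 3))
b = np.ones((3,))    # treated as (1, 3) -> broadcasts over rows
c = np.ones((2, 1))  # column -> broadcasts over columns

assert (a + b).shape == (2, 3)
assert (a + c).shape == (2, 3)

# Trailing dimensions 3 vs 2 are incompatible: neither is 1.
try:
    a + np.ones((2,))
except ValueError as e:
    print("broadcast error:", e)
```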
`np.dot()` behaves differently depending on input dimensions: for two 1D arrays it computes the inner product; for two 2D arrays it performs matrix multiplication.
Examples:

```python
# dot of 1D arrays (inner product)
z1 = np.array([2, 5, 1])
z2 = np.array([3, 4, 2])
print('dot(z1, z2) =', np.dot(z1, z2))

# matrix multiplication with compatible dimensions
M1 = np.array([1, 2, 3, 4]).reshape(2, 2)        # shape (2, 2)
M2 = np.array([5, 6, 7, 8, 9, 0]).reshape(2, 3)  # shape (2, 3)
print('M1:\n', M1)
print('M2:\n', M2)
print('np.dot(M1, M2):\n', np.dot(M1, M2))
```

```
dot(z1, z2) = 28
M1:
 [[1 2]
 [3 4]]
M2:
 [[5 6 7]
 [8 9 0]]
np.dot(M1, M2):
 [[21 24  7]
 [47 54 21]]
```
Swapping the order fails: M2 of shape (2, 3) cannot be dot-multiplied with M1 of shape (2, 2).

```python
print(np.dot(M2, M1))  # raises ValueError: dimensions are not aligned
```

For matrix multiplication you can also use the `@` operator (same semantics as `np.matmul`, and as `np.dot` for 2D arrays):

```python
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A @ B)
```

```
[[19 22]
 [43 50]]
```
The transpose of a matrix interchanges rows and columns. In NumPy, use .T or np.transpose().
```python
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(A.T)
```

```
[[1 4]
 [2 5]
 [3 6]]
```
The determinant is a scalar value computed from a square matrix. It provides information about whether the matrix is invertible.
```python
B = np.array([[4, 2],
              [3, 1]])
det_B = np.linalg.det(B)
print(det_B)
```

```
-2.0
```
A square matrix A has an inverse A⁻¹ only if det(A) ≠ 0. Use np.linalg.inv() to compute it.
```python
C = np.array([[2, 1],
              [5, 3]])
inv_C = np.linalg.inv(C)
print(inv_C)
```

```
[[ 3. -1.]
 [-5.  2.]]
```
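A quick sketch verifying the defining property of the inverse, C @ inv(C) = I, and showing `np.linalg.solve`, which is generally preferred over forming the inverse when solving a linear system (the right-hand side `b` below is an arbitrary example):

```python
import numpy as np

C = np.array([[2, 1],
              [5, 3]])
inv_C = np.linalg.inv(C)

# The inverse satisfies C @ inv(C) = I, up to floating-point error.
assert np.allclose(C @ inv_C, np.eye(2))

# To solve C x = b, np.linalg.solve is more accurate and usually
# faster than computing the inverse explicitly.
b = np.array([3, 8])
x = np.linalg.solve(C, b)
assert np.allclose(C @ x, b)
print(x)  # [1. 1.]
```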
The rank is the number of linearly independent rows or columns in a matrix. Use np.linalg.matrix_rank().
```python
D = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 1, 1]])
rank_D = np.linalg.matrix_rank(D)
print(rank_D)
```

```
2
```
Here, the second row is a multiple of the first, so the rank is 2.
The trace is the sum of the diagonal elements of a square matrix.
```python
E = np.array([[5, 1, 3],
              [0, 2, 4],
              [7, 8, 6]])
print(np.trace(E))
```

```
13
```
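The trace is simply the sum of the diagonal, and by a standard linear-algebra identity it also equals the sum of the eigenvalues. A sketch checking both facts numerically:

```python
import numpy as np

E = np.array([[5, 1, 3],
              [0, 2, 4],
              [7, 8, 6]])

# The trace is the sum of the diagonal entries: 5 + 2 + 6 = 13.
assert np.trace(E) == np.diag(E).sum() == 13

# It also equals the sum of the eigenvalues (up to floating-point
# error; the imaginary parts cancel for a real matrix).
assert np.isclose(np.linalg.eigvals(E).sum().real, 13)
print("trace checks passed")
```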
Eigenvalues and eigenvectors are fundamental in linear algebra, used in PCA, Markov chains, and many statistical models.
```python
F = np.array([[4, 2],
              [1, 3]])
eigvals, eigvecs = np.linalg.eig(F)
print("Eigenvalues:\n", eigvals)
print("Eigenvectors (columns):\n", eigvecs)
```

```
Eigenvalues:
 [5. 2.]
Eigenvectors (columns):
 [[ 0.89442719 -0.70710678]
 [ 0.4472136   0.70710678]]
```
Each eigenvector corresponds to an eigenvalue. You can verify the eigenvalue equation for each column: \[ F \mathbf{v} = \lambda \mathbf{v} \]
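That verification takes just a few lines: loop over the eigenpairs and check that multiplying by the matrix equals scaling by the eigenvalue, up to floating-point error:

```python
import numpy as np

F = np.array([[4, 2],
              [1, 3]])
eigvals, eigvecs = np.linalg.eig(F)

# Check F v = lambda v for each eigenpair; eigenvectors are the
# columns of eigvecs, matched by position to eigvals.
for i in range(len(eigvals)):
    v = eigvecs[:, i]
    assert np.allclose(F @ v, eigvals[i] * v)
print("eigenvalue equation verified")
```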
| Operation | Function | Description |
|---|---|---|
| Transpose | `A.T` or `np.transpose(A)` | Flips rows and columns |
| Determinant | `np.linalg.det(A)` | Scalar determinant |
| Inverse | `np.linalg.inv(A)` | Matrix inverse (if nonsingular) |
| Rank | `np.linalg.matrix_rank(A)` | Number of independent rows/cols |
| Trace | `np.trace(A)` | Sum of diagonal entries |
| Eigenvalues & Eigenvectors | `np.linalg.eig(A)` | Decomposition into λ and v |
You can combine arrays horizontally or vertically. Useful functions:
- `np.hstack([a, b, ...])` — horizontal stack (concatenate columns). All arrays must have the same number of rows.
- `np.vstack([a, b, ...])` — vertical stack (concatenate rows). All arrays must have the same number of columns.
- `np.concatenate([a, b, ...], axis=0 or 1)` — general-purpose concatenation along an axis.

Examples:
```python
a1 = np.array([11, 12, 13, 14, 15, 16]).reshape(3, 2)
a2 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]).reshape(3, 3)
a3 = np.array([-1, -2, -3, -4]).reshape(2, 2)

# horizontal stack: a1 and a2 must have the same number of rows (3)
print(np.hstack([a1, a2]))

# vertical stack: a1 and a3 must have the same number of columns (2)
print(np.vstack([a1, a3]))

# general concatenate along axis 1 (same as hstack)
print(np.concatenate([a1, a2], axis=1))
```

```
[[11 12  1  2  3]
 [13 14  4  5  6]
 [15 16  7  8  9]]
[[11 12]
 [13 14]
 [15 16]
 [-1 -2]
 [-3 -4]]
[[11 12  1  2  3]
 [13 14  4  5  6]
 [15 16  7  8  9]]
```
You can stack many arrays at once as long as shapes are compatible.
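Stacking also works with 1D inputs, and the orientation differs by function: `np.vstack` treats each 1D array as a row, while `np.column_stack` treats each as a column. A short sketch:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

# vstack treats 1D inputs as rows...
rows = np.vstack([x, y])        # shape (2, 3)
# ...while column_stack treats them as columns.
cols = np.column_stack([x, y])  # shape (3, 2)

assert rows.shape == (2, 3)
assert cols.shape == (3, 2)
assert np.array_equal(cols, rows.T)
print(rows)
```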
Aggregations (the `axis` argument). Many NumPy reductions accept an `axis` parameter to operate along rows or columns.

```python
v = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(2, 4)
print(v)
```

```
[[1 2 3 4]
 [5 6 7 8]]
```

`np.sum()` by default sums all elements:

```python
print('Total sum:', np.sum(v))  # Total sum: 36
```

Pass `axis=0` to reduce rows (i.e., compute column-wise results):

```python
print('Column sums:', np.sum(v, axis=0))  # shape (4,) -> [ 6  8 10 12]
```

Pass `axis=1` to reduce columns (i.e., compute row-wise results):

```python
print('Row sums:', np.sum(v, axis=1))  # shape (2,) -> [10 26]
```

To keep the reduced dimension, use `keepdims=True`:

```python
print(np.sum(v, axis=1, keepdims=True))  # shape (2, 1)
```

```
[[10]
 [26]]
```
The same `axis` pattern works for `np.prod()`, `np.mean()`, `np.std()`, `np.min()`, `np.max()`, `np.argmax()`, etc.:

```python
print('Column products:', np.prod(v, axis=0))
print('Row means:', np.mean(v, axis=1))
```

```
Column products: [ 5 12 21 32]
Row means: [2.5 6.5]
```
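`np.argmax()` deserves a note: without `axis` it returns the index of the maximum in the *flattened* array, while with `axis` it returns one index per reduced slice. A sketch using the same `v` as above, redefined so it runs standalone:

```python
import numpy as np

v = np.array([1, 2, 3, 4, 5, 6, 7, 8]).reshape(2, 4)

print(np.argmax(v))          # flattened index of the overall maximum -> 7
print(np.argmax(v, axis=0))  # row index of each column's maximum -> [1 1 1 1]
print(np.argmax(v, axis=1))  # column index of each row's maximum -> [3 3]
```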