In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations. A similar technique can be applied to any slowly converging iterative process.
The SOR method was devised simultaneously and independently by David M. Young Jr. and Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had already been developed for calculations by human computers, such as the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these methods required a certain amount of expertise to ensure convergence of the solution, which made them unsuitable for programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.[1]
Formulation
Given a square system of n linear equations with unknown x:

    A\mathbf{x} = \mathbf{b}

where

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.

Then A can be decomposed into a diagonal component D and strictly lower and upper triangular components L and U:

    A = D + L + U

where

    D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}, \quad L = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ a_{21} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}, \quad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.

The system of linear equations may be rewritten as

    (D + \omega L)\mathbf{x} = \omega\mathbf{b} - [\omega U + (\omega - 1)D]\mathbf{x}

for a constant \omega > 1, called the relaxation factor.

The method of successive over-relaxation approximates the exact solution x iteratively. Analytically, it may be written as

    \mathbf{x}^{(k+1)} = (D + \omega L)^{-1}\big(\omega\mathbf{b} - [\omega U + (\omega - 1)D]\mathbf{x}^{(k)}\big) = L_\omega \mathbf{x}^{(k)} + \mathbf{c},

where \mathbf{x}^{(k)} is the k-th iterate of x and \mathbf{x}^{(k+1)} is the value obtained at the next iteration. However, by taking advantage of the triangular form of D + \omega L, the elements of \mathbf{x}^{(k+1)} can be computed sequentially using forward substitution:

    x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)}\Big), \qquad i = 1, 2, \ldots, n.
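For concreteness, the matrix form of this update can be written out directly with NumPy. The following sketch is our own illustration (the name sor_matrix_step and its structure are not from the article); it performs a single iteration using the splitting above and forward substitution on the lower triangular matrix D + ωL:

import numpy as np
from scipy.linalg import solve_triangular

def sor_matrix_step(A, b, x, omega):
    """One SOR step in matrix form: solve (D + wL) x_new = wb - [wU + (w - 1)D] x."""
    D = np.diag(np.diag(A))   # diagonal part of A
    L = np.tril(A, k=-1)      # strictly lower triangular part
    U = np.triu(A, k=1)       # strictly upper triangular part
    rhs = omega * b - (omega * U + (omega - 1) * D) @ x
    # D + omega*L is lower triangular, so forward substitution suffices.
    return solve_triangular(D + omega * L, rhs, lower=True)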
Convergence
The choice of relaxation factor ω is not necessarily easy and depends on the properties of the coefficient matrix. In 1947, Alexander Markowich Ostrowski proved that if A is a symmetric positive-definite matrix, then \rho(L_\omega) < 1 for 0 < \omega < 2. Thus the iteration converges, but we are generally interested in faster convergence rather than convergence alone.

The convergence rate of the SOR method can be derived analytically. One needs to assume the following:

- the relaxation factor is appropriate: \omega \in (0, 2);
- the Jacobi iteration matrix C_{\text{Jac}} := I - D^{-1}A has only real eigenvalues;
- the Jacobi method is convergent: \mu := \rho(C_{\text{Jac}}) < 1;
- the matrix decomposition A = D + L + U satisfies \det(\lambda D + zL + \tfrac{1}{z}U) = \det(\lambda D + L + U) for any z \in \mathbb{C}\setminus\{0\} and \lambda \in \mathbb{C}.

The convergence rate can then be expressed as

    \rho(C_\omega) = \begin{cases} \tfrac{1}{4}\big(\omega\mu + \sqrt{\omega^2\mu^2 - 4(\omega - 1)}\big)^2, & 0 < \omega \le \omega_{\text{opt}}, \\ \omega - 1, & \omega_{\text{opt}} < \omega < 2, \end{cases}

where the optimal relaxation factor is

    \omega_{\text{opt}} := 1 + \left(\frac{\mu}{1 + \sqrt{1 - \mu^2}}\right)^2.

In particular, for \omega = 1 the SOR method reduces to Gauss–Seidel and \rho(C_1) = \mu^2. For the optimal \omega one gets \rho(C_{\omega_{\text{opt}}}) = \omega_{\text{opt}} - 1, which shows that SOR is roughly four times more efficient than Gauss–Seidel.

The last assumption is also satisfied by tridiagonal matrices, since for the diagonal matrix Z with entries Z_{ii} = z^{i-1} one has Z(\lambda D + L + U)Z^{-1} = \lambda D + zL + \tfrac{1}{z}U and \det(\lambda D + L + U) = \det\big(Z(\lambda D + L + U)Z^{-1}\big).
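The following short sketch is our own illustration (optimal_omega is not a name from the article). It estimates μ as the spectral radius of the Jacobi iteration matrix and evaluates the formula for the optimal relaxation factor; it presumes, as in the assumptions above, that the eigenvalues are real and that the Jacobi method converges (μ < 1):

import numpy as np

def optimal_omega(A):
    """Estimate the theoretically optimal SOR relaxation factor for matrix A."""
    D_inv = np.diag(1.0 / np.diag(A))
    C_jac = np.eye(A.shape[0]) - D_inv @ A      # Jacobi iteration matrix I - D^-1 A
    mu = max(abs(np.linalg.eigvals(C_jac)))     # spectral radius of C_jac
    # Valid only when the eigenvalues are real and mu < 1.
    return 1.0 + (mu / (1.0 + np.sqrt(1.0 - mu**2)))**2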
Algorithm
Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed and vector indexing is omitted. The algorithm goes as follows:
Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        set σ to 0
        for j from 1 until n do
            if j ≠ i then
                set σ to σ + aij φj
            end if
        end (j-loop)
        set φi to (1 − ω)φi + ω(bi − σ) / aii
    end (i-loop)
    check if convergence is reached
end (repeat)
- Note: (1 − ω)φi + ω(bi − σ)/aii can also be written as φi + ω((bi − σ)/aii − φi), thus saving one multiplication in each iteration of the outer for-loop.
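A tiny numerical check, with illustrative values chosen here rather than taken from the article, confirms that the two forms of the update in the note are algebraically equivalent:

phi_i, omega, b_i, sigma, a_ii = 0.3, 1.5, 2.0, 0.7, 4.0
form_1 = (1 - omega) * phi_i + omega * (b_i - sigma) / a_ii   # form used in the pseudocode
form_2 = phi_i + omega * ((b_i - sigma) / a_ii - phi_i)       # rewritten form from the note
assert abs(form_1 - form_2) < 1e-12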
Example
We are asked to solve the linear system

    \begin{aligned} 4x_1 - x_2 - 6x_3 &= 2, \\ -5x_1 - 4x_2 + 10x_3 + 8x_4 &= 21, \\ 9x_2 + 4x_3 - 2x_4 &= -12, \\ x_1 - 7x_3 + 5x_4 &= -6. \end{aligned}
We choose ω = 0.5 as the relaxation factor and (0, 0, 0, 0) as the initial guess. Applying the SOR algorithm yields the table below, and the exact solution (3, −2, 2, 1) is reached after 38 steps.
Iteration | x1 | x2 | x3 | x4
---|---|---|---|---
1 | 0.25 | −2.78125 | 1.6289062 | 0.5152344
2 | 1.2490234 | −2.2448974 | 1.9687712 | 0.9108547
3 | 2.070478 | −1.6696789 | 1.5904881 | 0.76172125
... | ... | ... | ... | ...
37 | 2.9999998 | −2.0 | 2.0 | 1.0
38 | 3.0 | −2.0 | 2.0 | 1.0
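As a quick sanity check (this snippet is our own illustration, not part of the article), the reported exact solution can be verified directly against the system:

import numpy as np

A = np.array([[4, -1, -6, 0],
              [-5, -4, 10, 8],
              [0, 9, 4, -2],
              [1, 0, -7, 5]], dtype=float)
b = np.array([2, 21, -12, -6], dtype=float)
x_exact = np.array([3, -2, 2, 1], dtype=float)
assert np.allclose(A @ x_exact, b)   # (3, -2, 2, 1) indeed solves the system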
A simple implementation of the algorithm in Common Lisp:
;; Set the default floating-point format to long-float in order to
;; ensure correct operation on a wider range of numbers.
(setf *read-default-float-format* 'long-float)
(defparameter +MAXIMUM-NUMBER-OF-ITERATIONS+ 100
"The number of iterations beyond which the algorithm should cease its
operation, regardless of its current solution. A higher number of
iterations might provide a more accurate result, but imposes higher
performance requirements.")
(declaim (type (integer 0 *) +MAXIMUM-NUMBER-OF-ITERATIONS+))
(defun get-errors (computed-solution exact-solution)
"For each component of the COMPUTED-SOLUTION vector, retrieves its
error with respect to the expected EXACT-SOLUTION vector, returning a
vector of error values.
---
While both input vectors should be equal in size, this condition is
not checked and the shortest of the twain determines the output
vector's number of elements.
---
The established formula is the following:
Let resultVectorSize = min(computedSolution.length, exactSolution.length)
Let resultVector = new vector of resultVectorSize
For i from 0 to (resultVectorSize - 1)
resultVector[i] = exactSolution[i] - computedSolution[i]
Return resultVector"
(declare (type (vector number *) computed-solution))
(declare (type (vector number *) exact-solution))
(map '(vector number *) #'- exact-solution computed-solution))
(defun is-convergent (errors &key (error-tolerance 0.001))
"Checks whether the convergence is reached with respect to the
ERRORS vector which registers the discrepancy betwixt the computed
and the exact solution vector.
---
The convergence is fulfilled if and only if each absolute error
component is less than or equal to the ERROR-TOLERANCE, that is:
For all e in ERRORS, it holds: abs(e) <= errorTolerance."
(declare (type (vector number *) errors))
(declare (type number error-tolerance))
(flet ((error-is-acceptable (error)
(declare (type number error))
(<= (abs error) error-tolerance)))
(every #'error-is-acceptable errors)))
(defun make-zero-vector (size)
"Creates and returns a vector of the SIZE with all elements set to 0."
(declare (type (integer 0 *) size))
(make-array size :initial-element 0.0 :element-type 'number))
(defun successive-over-relaxation (A b omega
&key (phi (make-zero-vector (length b)))
(convergence-check
#'(lambda (iteration phi)
(declare (ignore phi))
(>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))
"Implements the successive over-relaxation (SOR) method, applied upon
the linear equations defined by the matrix A and the right-hand side
vector B, employing the relaxation factor OMEGA, returning the
calculated solution vector.
---
The first algorithm step, the choice of an initial guess PHI, is
represented by the optional keyword parameter PHI, which defaults
to a zero-vector of the same structure as B. If supplied, this
vector will be destructively modified. In any case, the PHI vector
constitutes the function's result value.
---
The terminating condition is implemented by the CONVERGENCE-CHECK,
an optional predicate
lambda(iteration phi) => generalized-boolean
which returns T, signifying the immediate termination, upon achieving
convergence, or NIL, signaling continuant operation, otherwise. In
its default configuration, the CONVERGENCE-CHECK simply abides the
iteration's ascension to the ``+MAXIMUM-NUMBER-OF-ITERATIONS+'',
ignoring the achieved accuracy of the vector PHI."
(declare (type (array number (* *)) A))
(declare (type (vector number *) b))
(declare (type number omega))
(declare (type (vector number *) phi))
(declare (type (function ((integer 1 *)
(vector number *))
*)
convergence-check))
(let ((n (array-dimension A 0)))
(declare (type (integer 0 *) n))
(loop for iteration from 1 by 1 do
(loop for i from 0 below n by 1 do
(let ((rho 0))
(declare (type number rho))
(loop for j from 0 below n by 1 do
(when (/= j i)
(let ((a[ij] (aref A i j))
(phi[j] (aref phi j)))
(incf rho (* a[ij] phi[j])))))
(setf (aref phi i)
(+ (* (- 1 omega)
(aref phi i))
(* (/ omega (aref A i i))
(- (aref b i) rho))))))
(format T "~&~d. solution = ~a" iteration phi)
;; Check if convergence is reached.
(when (funcall convergence-check iteration phi)
(return))))
(the (vector number *) phi))
;; Summon the function with the exemplary parameters.
(let ((A (make-array (list 4 4)
:initial-contents
'(( 4 -1 -6 0 )
( -5 -4 10 8 )
( 0 9 4 -2 )
( 1 0 -7 5 ))))
(b (vector 2 21 -12 -6))
(omega 0.5)
(exact-solution (vector 3 -2 2 1)))
(successive-over-relaxation
A b omega
:convergence-check
#'(lambda (iteration phi)
(declare (type (integer 0 *) iteration))
(declare (type (vector number *) phi))
(let ((errors (get-errors phi exact-solution)))
(declare (type (vector number *) errors))
(format T "~&~d. errors = ~a" iteration errors)
(or (is-convergent errors :error-tolerance 0.0)
(>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))))
A simple Python implementation of the pseudo-code provided above:
import numpy as np
from scipy import linalg

def sor_solver(A, b, omega, initial_guess, convergence_criteria):
    """
    This is an implementation of the pseudo-code provided in the Wikipedia article.
    Arguments:
        A: nxn numpy matrix.
        b: n dimensional numpy vector.
        omega: relaxation factor.
        initial_guess: An initial solution guess for the solver to start with.
        convergence_criteria: The maximum discrepancy acceptable to regard the current solution as fitting.
    Returns:
        phi: solution vector of dimension n.
    """
    step = 0
    phi = np.copy(initial_guess)  # work on a copy so the caller's guess is not modified
    residual = linalg.norm(A @ phi - b)  # initial residual
    while residual > convergence_criteria:
        for i in range(A.shape[0]):
            sigma = 0
            for j in range(A.shape[1]):
                if j != i:
                    sigma += A[i, j] * phi[j]
            phi[i] = (1 - omega) * phi[i] + (omega / A[i, i]) * (b[i] - sigma)
        residual = linalg.norm(A @ phi - b)
        step += 1
        print("Step {} Residual: {:10.6g}".format(step, residual))
    return phi
# An example case that mirrors the one in the Wikipedia article
residual_convergence = 1e-8
omega = 0.5 # Relaxation factor
A = np.array([[4, -1, -6, 0],
              [-5, -4, 10, 8],
              [0, 9, 4, -2],
              [1, 0, -7, 5]])
b = np.array([2, 21, -12, -6])
initial_guess = np.zeros(4)
phi = sor_solver(A, b, omega, initial_guess, residual_convergence)
print(phi)
Symmetric successive over-relaxation
The version for symmetric matrices A, in which

    U = L^{\mathsf T},

is referred to as symmetric successive over-relaxation (SSOR), in which

    P = \left(\frac{D}{\omega} + L\right)\frac{\omega}{2 - \omega}\, D^{-1}\left(\frac{D}{\omega} + U\right),

and the iterative method is

    \mathbf{x}^{k+1} = \mathbf{x}^{k} - \gamma^{k} P^{-1}\big(A\mathbf{x}^{k} - \mathbf{b}\big), \qquad k \ge 0.

The SOR and SSOR methods are credited to David M. Young Jr.
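A minimal NumPy sketch of one such iteration follows; it is our own illustration (the name ssor_step is not from the article), it assumes γ^k = 1, and it forms P explicitly, which is only sensible for small dense symmetric matrices:

import numpy as np

def ssor_step(A, b, x, omega):
    """One SSOR iteration x_{k+1} = x_k - P^{-1}(A x_k - b) with gamma = 1."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)   # strictly lower triangular part; U = L.T since A is symmetric
    P = (D / omega + L) @ (omega / (2 - omega) * np.diag(1.0 / np.diag(A))) @ (D / omega + L.T)
    return x - np.linalg.solve(P, A @ x - b)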
Other applications
A similar technique can be used for any iterative method. If the original iteration had the form

    x_{n+1} = f(x_n),

then the modified version would use

    x_{n+1}^{\mathrm{SOR}} = (1 - \omega)\, x_n^{\mathrm{SOR}} + \omega f\big(x_n^{\mathrm{SOR}}\big).

However, the formulation presented above for solving systems of linear equations is not a special case of this formulation if x is considered to be the complete vector. If this formulation is used instead, the equation for calculating the next vector will look like

    \mathbf{x}^{(k+1)} = (1 - \omega)\,\mathbf{x}^{(k)} + \omega L_*^{-1}\big(\mathbf{b} - U\mathbf{x}^{(k)}\big),

where L_* = L + D. Values of ω are used to speed up the convergence of a slow-converging process, and can also make a diverging iteration converge or accelerate the convergence of an overshooting process. There are various methods that adaptively set the relaxation factor based on the observed behaviour of the converging process; these methods are typically effective only for a subset of problems.
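As an illustration of this generalization (the example function and values below are our own, not from the article), the same relaxation can be applied to a scalar fixed-point iteration:

import math

def relaxed_fixed_point(f, x0, omega, steps=50):
    """Iterate x_{n+1} = (1 - omega) * x_n + omega * f(x_n) for a fixed number of steps."""
    x = x0
    for _ in range(steps):
        x = (1 - omega) * x + omega * f(x)
    return x

# Example: the fixed point of f(x) = cos(x); omega < 1 damps the oscillation
# of the plain iteration, while omega > 1 over-relaxes it.
print(relaxed_fixed_point(math.cos, 1.0, 0.9))  # approximately 0.739085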
See also

Notes

References

External links