Particle Swarm Optimization

   This article gives a detailed introduction to Particle Swarm Optimization (PSO): where the algorithm comes from, its mathematical model, the algorithm flow, and a Python implementation. (Note: this is an original article by 有何不可; if you reproduce it, please credit the source!)

1. Origin of the Algorithm

   Particle swarm optimization grew out of research on bird flocking behavior. Suppose a flock of birds is searching for food in some region, and every bird knows roughly how far its own position is from the food. The best search strategy is then to search the neighborhood of the bird currently closest to the food.

   In PSO, every potential solution of the optimization problem is a "bird" in the search space, called a particle. A particle has two attributes, position and velocity, and it has memory: it remembers its own history, i.e. how close it has come to the "food" during its earlier search. PSO relies on the information-sharing mechanism found in biological swarms to explore the search space for potential solutions: each particle keeps moving through the space, and its movement is influenced both by the other particles in the swarm (to varying degrees) and by its own experience.

2. Mathematical Model

   Suppose a swarm of N particles searches a D-dimensional space, where D is the number of decision variables of the problem. The k-th particle is represented as a D-dimensional vector:

$$ X_k = \{x_{k1}, x_{k2}, \dots, x_{kD}\}, \quad k = 1,2,\dots,N $$

   The flight velocity of the k-th particle is also a D-dimensional vector:

$$ V_k = \{v_{k1}, v_{k2}, \dots, v_{kD}\}, \quad k = 1,2,\dots,N $$

   The best position found so far by the k-th particle is called its personal best, written

$$ P_{k,best} = \{p_{k1}, p_{k2}, \dots, p_{kD}\}, \quad k = 1,2,\dots,N $$

   The best position found so far by the whole swarm is called the global best, written

$$ G_{best} = \{g_{1}, g_{2}, \dots, g_{D}\} $$

   Once the personal bests and the global best are known, each particle is updated by the following two formulas.

   Velocity update:
$$ V_{k}(t+1) = wV_{k}(t) + c_1r_1(t)[P_{k,best}(t)-X_k(t)] + c_2r_2(t)[G_{best}(t)-X_k(t)] $$

     where w is the inertia weight, c1 and c2 are the learning factors, and r1 and r2 are random numbers in (0, 1).

   Position update:
$$ X_{k}(t+1) = X_{k}(t) + V_{k}(t+1) $$
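   Taken together, the two update formulas translate almost line for line into NumPy. The following is a minimal sketch of a single update step for one particle; all variable names and parameter values here are illustrative, not taken from the full implementation below:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 2                        # problem dimension
w, c1, c2 = 0.8, 1.5, 1.5    # inertia weight and learning factors (illustrative values)

x = rng.uniform(-5, 5, D)    # current position X_k(t)
v = rng.uniform(-1, 1, D)    # current velocity V_k(t)
p_best = x.copy()            # personal best P_{k,best}(t)
g_best = np.zeros(D)         # global best G_best(t)

r1, r2 = rng.random(), rng.random()

# Velocity update: V(t+1) = w*V(t) + c1*r1*(P_best - X) + c2*r2*(G_best - X)
v = w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x)
# Position update: X(t+1) = X(t) + V(t+1)
x = x + v
```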

3. Algorithm Flow

  1. Initialize the swarm and set the control parameters (inertia weight, learning factors, etc.);
  2. Evaluate the fitness of every particle in the swarm;
  3. Update each particle's personal best and the swarm's global best;
  4. Update the velocity and position of every particle;
  5. Check the termination condition: stop if it is met, otherwise go back to step 2.
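   The five steps above can be sketched as a compact loop. This is a hedged, self-contained sketch using fixed control parameters and a sphere test function, not the full implementation from the next section (which adapts its parameters per iteration); the function name `pso_sketch` and all parameter values are illustrative:

```python
import numpy as np

def pso_sketch(f, dim=2, n_particles=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialize positions, velocities and control parameters
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = rng.uniform(-0.5, 0.5, (n_particles, dim))
    w, c1, c2 = 0.7, 1.5, 1.5
    # Step 2: evaluate initial fitness; step 3: set the initial bests
    p_best = x.copy()
    p_val = np.apply_along_axis(f, 1, x)
    g_best = p_best[np.argmin(p_val)].copy()
    # Step 5: iterate for a fixed number of rounds
    for _ in range(iters):
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        # Step 4: update velocities and positions
        v = w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x)
        x = x + v
        # Steps 2-3 again: re-evaluate and update personal/global bests
        val = np.apply_along_axis(f, 1, x)
        improved = val < p_val
        p_best[improved] = x[improved]
        p_val[improved] = val[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()

best, best_val = pso_sketch(lambda z: np.sum(z**2))  # sphere function, minimum at 0
```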

4. Code Example – Python

Note: in this implementation the control parameters vary dynamically with the iteration count…

```python
# Import libs
import numpy as np
import random as rd
import matplotlib.pyplot as plt
```

```python
MIN_POS = [-5, -5]      # Minimum position of the particle
MAX_POS = [5, 5]        # Maximum position of the particle
MIN_SPD = [-0.5, -0.5]  # Minimum speed of the particle
MAX_SPD = [1, 1]        # Maximum speed of the particle
C1_MIN = 0
C1_MAX = 1.5
C2_MIN = 0
C2_MAX = 1.5
W_MAX = 1.4
W_MIN = 0
```
```python
# PSO
class PSO():
    """
    PSO class
    """

    def __init__(self, iters=100, pcount=50, pdim=2, mode='min'):
        """
        PSO initialization
        ------------------
        """

        self.w = None                            # Inertia weight
        self.c1 = None                           # Learning factor
        self.c2 = None                           # Learning factor

        self.iters = iters                       # Number of iterations
        self.pcount = pcount                     # Number of particles
        self.pdim = pdim                         # Particle dimension
        self.gbpos = np.array([0.0]*pdim)        # Group optimal position

        self.mode = mode                         # The mode of PSO: 'min' or 'max'

        self.cur_pos = np.zeros((pcount, pdim))  # Current position of each particle
        self.cur_spd = np.zeros((pcount, pdim))  # Current speed of each particle
        self.bpos = np.zeros((pcount, pdim))     # Personal best position of each particle

        self.trace = []                          # Function value of the best solution per iteration

    def init_particles(self):
        """
        init_particles function
        -----------------------
        """

        # Generate the particle swarm
        for i in range(self.pcount):
            for j in range(self.pdim):
                self.cur_pos[i, j] = rd.uniform(MIN_POS[j], MAX_POS[j])
                self.cur_spd[i, j] = rd.uniform(MIN_SPD[j], MAX_SPD[j])
                self.bpos[i, j] = self.cur_pos[i, j]

        # Initial group optimal position (copy, so later in-place updates
        # of cur_pos cannot silently change gbpos)
        for i in range(self.pcount):
            if self.mode == 'min':
                if self.fitness(self.cur_pos[i]) < self.fitness(self.gbpos):
                    self.gbpos = self.cur_pos[i].copy()
            elif self.mode == 'max':
                if self.fitness(self.cur_pos[i]) > self.fitness(self.gbpos):
                    self.gbpos = self.cur_pos[i].copy()

    def fitness(self, x):
        """
        fitness function
        ----------------
        Parameter:
            x : position vector of a particle
        """

        # Objective function
        fitval = 5*np.cos(x[0]*x[1]) + x[0]*x[1] + x[1]**3                           # min
        #fitval = 20*(x[0]**2-x[1]**2)**2 - (1-x[1])**2 - 3*(1+x[1])**2 + 0.3       # min
        #fitval = (np.cos(x[0]**2+x[1]**2)-0.1) / (1+0.3*(x[0]**2+x[1]**2)**2) + 3  # max
        #fitval = x[0] + 6*np.sin(4*x[0]) + 9*np.cos(5*x[0])                        # min
        # Return value
        return fitval

    def adaptive(self, t, p, c1, c2, w):
        """
        Adapt the control parameters c1, c2 and w to the iteration t
        and to the state of particle p.
        """

        if t == 0:
            c1 = 0
            c2 = 0
            w = 0.95
        else:
            if self.mode == 'min':
                # c1: increase if the particle moved away from its personal best
                if self.fitness(self.cur_pos[p]) > self.fitness(self.bpos[p]):
                    c1 = C1_MIN + (t/self.iters)*C1_MAX + np.random.uniform(0, 0.1)
                # c2: increase if the personal best lags behind the global best
                if self.fitness(self.bpos[p]) > self.fitness(self.gbpos):
                    c2 = C2_MIN + (t/self.iters)*C2_MAX + np.random.uniform(0, 0.1)
                # w: decrease linearly from W_MAX to W_MIN
                #c1 = C1_MAX - (C1_MAX-C1_MIN)*(t/self.iters)
                #c2 = C2_MIN + (C2_MAX-C2_MIN)*(t/self.iters)
                w = W_MAX - (W_MAX-W_MIN)*(t/self.iters)
            elif self.mode == 'max':
                pass

        return c1, c2, w

    def update(self, t):
        """
        update function
        ---------------
        Note that:
            1. Update particle speed
            2. Update particle position
            3. Update particle optimal position
            4. Update group optimal position
        """

        # Part1 : Traverse the particle swarm
        for i in range(self.pcount):

            # Dynamic parameters
            self.c1, self.c2, self.w = self.adaptive(t, i, self.c1, self.c2, self.w)

            # Update particle speed, then clamp it to [MIN_SPD, MAX_SPD]
            self.cur_spd[i] = self.w*self.cur_spd[i] \
                              + self.c1*rd.uniform(0, 1)*(self.bpos[i]-self.cur_pos[i]) \
                              + self.c2*rd.uniform(0, 1)*(self.gbpos-self.cur_pos[i])
            for n in range(self.pdim):
                if self.cur_spd[i, n] > MAX_SPD[n]:
                    self.cur_spd[i, n] = MAX_SPD[n]
                elif self.cur_spd[i, n] < MIN_SPD[n]:
                    self.cur_spd[i, n] = MIN_SPD[n]

            # Update particle position, then clamp it to [MIN_POS, MAX_POS]
            self.cur_pos[i] = self.cur_pos[i] + self.cur_spd[i]
            for n in range(self.pdim):
                if self.cur_pos[i, n] > MAX_POS[n]:
                    self.cur_pos[i, n] = MAX_POS[n]
                elif self.cur_pos[i, n] < MIN_POS[n]:
                    self.cur_pos[i, n] = MIN_POS[n]

        # Part2 : Update particle optimal position
        for k in range(self.pcount):
            if self.mode == 'min':
                if self.fitness(self.cur_pos[k]) < self.fitness(self.bpos[k]):
                    self.bpos[k] = self.cur_pos[k]
            elif self.mode == 'max':
                if self.fitness(self.cur_pos[k]) > self.fitness(self.bpos[k]):
                    self.bpos[k] = self.cur_pos[k]

        # Part3 : Update group optimal position (copy, not a view into bpos)
        for k in range(self.pcount):
            if self.mode == 'min':
                if self.fitness(self.bpos[k]) < self.fitness(self.gbpos):
                    self.gbpos = self.bpos[k].copy()
            elif self.mode == 'max':
                if self.fitness(self.bpos[k]) > self.fitness(self.gbpos):
                    self.gbpos = self.bpos[k].copy()

    def run(self):
        """
        run function
        -------------
        """

        # Initialize the particle swarm
        self.init_particles()

        # Iteration
        for t in range(self.iters):
            # Update all particle information
            self.update(t)
            # Record the best function value so far
            self.trace.append(self.fitness(self.gbpos))
```
```python
# main
def main():
    """
    main function
    """

    pso = PSO(iters=100, pcount=50, pdim=2, mode='min')
    pso.run()

    # Print the optimal solution
    print('='*40)
    print('= Optimal solution:')
    print('=   x=', pso.gbpos[0])
    print('=   y=', pso.gbpos[1])
    print('= Function value:')
    print('=   f(x,y)=', pso.fitness(pso.gbpos))
    print('='*40)

    # Plot the convergence curve
    plt.plot(pso.trace, 'r')
    title = 'MIN: ' + str(pso.fitness(pso.gbpos))
    plt.title(title)
    plt.xlabel("Number of iterations")
    plt.ylabel("Function values")
    plt.show()

    input('= Press any key to exit...')
    print('='*40)


if __name__ == '__main__':
    main()
```

5. Code Link

   The full code is available on Github.
