@@ -22,19 +22,19 @@ The optimization class is created using the following call:

.. code-block:: python

- >>> optProb = Optimization('name', objFun)
+ optProb = Optimization("name", objFun)

The general template of the objective function is as follows:

.. code-block:: python

def obj_fun(xdict):
-    funcs = {}
-    funcs['obj_name'] = function(xdict)
-    funcs['con_name'] = function(xdict)
-    fail = False  # Or True if an analysis failed
+    funcs = {}
+    funcs["obj_name"] = function(xdict)
+    funcs["con_name"] = function(xdict)
+    fail = False  # Or True if an analysis failed

-    return funcs, fail
+    return funcs, fail

where:
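
As a concrete illustration of this template (a sketch only; the variable group ``xvars``, objective ``obj``, and constraint ``con`` are hypothetical names that must match the corresponding ``addVarGroup``, ``addObj``, and ``addCon`` calls), an objective function might look like:

.. code-block:: python

    def obj_fun(xdict):
        x = xdict["xvars"]  # values of the group added via optProb.addVarGroup("xvars", 2)
        funcs = {}
        funcs["obj"] = x[0] ** 2 + x[1] ** 2  # key matches optProb.addObj("obj")
        funcs["con"] = x[0] + x[1]  # key matches optProb.addCon("con")
        fail = False
        return funcs, fail
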
@@ -51,19 +51,19 @@ to simply call :meth:`addVar <pyoptsparse.pyOpt_optimization.Optimization.addVar

.. code-block:: python

- >>> optProb.addVar('var_name')
+ optProb.addVar("var_name")

This will result in a scalar variable included in the ``x`` dictionary passed to ``obj_fun``, which can be accessed by doing

.. code-block:: python

- >>> x['var_name']
+ x["var_name"]

A more complex example will include lower bounds, upper bounds and a non-zero initial value:

.. code-block:: python

- >>> optProb.addVar('var_name', lower=-10, upper=5, value=-2)
+ optProb.addVar("var_name", lower=-10, upper=5, value=-2)

The ``lower`` or ``upper`` keywords may be specified as ``None`` to signify there is no bound on the variable.
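
For instance, a variable that is bounded only from above could be declared as follows (an illustrative sketch; the variable name is hypothetical):

.. code-block:: python

    optProb.addVar("one_sided_var", lower=None, upper=5)
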
@@ -84,7 +84,7 @@ For example, to add 10 variables with no lower bound, and a scale factor of 0.1:

.. code-block:: python

- >>> optProb.addVarGroup('con_group', 10, upper=2.5, scale=0.1)
+ optProb.addVarGroup("con_group", 10, upper=2.5, scale=0.1)


Constraints
@@ -95,22 +95,22 @@ to use the function :meth:`addCon <pyoptsparse.pyOpt_optimization.Optimization.a

.. code-block:: python

- >>> optProb.addCon('not_a_real_constraint')
+ optProb.addCon("not_a_real_constraint")

To include bounds on the constraints, use the ``lower`` and ``upper`` keyword arguments.
If ``lower`` and ``upper`` are the same, it will be treated as an equality constraint:

.. code-block:: python

- >>> optProb.addCon('inequality_constraint', upper=10)
- >>> optProb.addCon('equality_constraint', lower=5, upper=5)
+ optProb.addCon("inequality_constraint", upper=10)
+ optProb.addCon("equality_constraint", lower=5, upper=5)

Like design variables, it is often necessary to scale constraints such that all constraint values are approximately the same order of magnitude.
This can be specified using the ``scale`` keyword:

.. code-block:: python

- >>> optProb.addCon('scaled_constraint', upper=10000, scale=1.0 / 10000)
+ optProb.addCon("scaled_constraint", upper=10000, scale=1.0 / 10000)

Even if the ``scale`` keyword is given, the ``lower`` and ``upper`` bounds are given in their un-scaled form.
Internally, pyOptSparse will use the scaling factor to produce the following constraint:
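
Roughly speaking (a sketch of the idea rather than the exact internal expression), both the constraint value and its bounds are multiplied by ``scale``, so for the call above the optimizer effectively works with

.. math::

    \mathtt{scale} \times g(x) \;\le\; \mathtt{scale} \times \mathtt{upper} \;=\; \frac{1}{10000} \times 10000 \;=\; 1
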
@@ -153,14 +153,14 @@ By way of example, the code that generates the hypothetical optimization proble

.. code-block:: python

- optProb.addVarGroup('varA', 3)
- optProb.addVarGroup('varB', 1)
- optProb.addVarGroup('varC', 3)
+ optProb.addVarGroup("varA", 3)
+ optProb.addVarGroup("varB", 1)
+ optProb.addVarGroup("varC", 3)

- optProb.addConGroup('conA', 2, upper=0.0, wrt=['varB', 'varC'])
- optProb.addConGroup('conB', 2, upper=0.0, wrt=['varC', 'varA'])
- optProb.addConGroup('conC', 4, upper=0.0)
- optProb.addConGroup('conD', 3, upper=0.0, wrt=['varC'])
+ optProb.addConGroup("conA", 2, upper=0.0, wrt=["varB", "varC"])
+ optProb.addConGroup("conB", 2, upper=0.0, wrt=["varC", "varA"])
+ optProb.addConGroup("conC", 4, upper=0.0)
+ optProb.addConGroup("conD", 3, upper=0.0, wrt=["varC"])

Note that the order of the ``wrt`` (which stands for with-respect-to) is not significant.
Furthermore, if the ``wrt`` argument is omitted altogether, pyOptSparse assumes that the constraint is dense.
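
For example (purely illustrative), the declaration of ``conA`` above could equally list its variable groups in the opposite order:

.. code-block:: python

    optProb.addConGroup("conA", 2, upper=0.0, wrt=["varC", "varB"])
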
@@ -170,7 +170,7 @@ To do so, use the following call after adding all the design variables, objectiv

.. code-block:: python

- >>> optProb.printSparsity()
+ optProb.printSparsity()

Using the ``wrt`` keyword allows the user to determine the overall sparsity structure of the constraint Jacobian.
However, we have currently assumed that each of the blocks with an ``X`` in it is a dense sub-block.
@@ -182,12 +182,12 @@ By way of example, the call instead may be as follows:

.. code-block:: python

- jac = sparse.lil_matrix((3,3))
- jac[0,0] = 1.0
- jac[1,1] = 4.0
- jac[2,2] = 5.0
+ jac = sparse.lil_matrix((3, 3))
+ jac[0, 0] = 1.0
+ jac[1, 1] = 4.0
+ jac[2, 2] = 5.0

- optProb.addConGroup('conD', 3, upper=0.0, wrt=['varC'], linear=True, jac={'varC': jac})
+ optProb.addConGroup("conD", 3, upper=0.0, wrt=["varC"], linear=True, jac={"varC": jac})

We have created a linked list sparse matrix using ``scipy.sparse``.
Any SciPy sparse matrix format can be accepted.
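
For instance (an illustrative sketch), the same Jacobian could be converted and supplied in CSR form instead:

.. code-block:: python

    jac_csr = jac.tocsr()  # any scipy.sparse format is acceptable
    optProb.addConGroup("conD", 3, upper=0.0, wrt=["varC"], linear=True, jac={"varC": jac_csr})
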
@@ -215,7 +215,7 @@ This is accomplished using a the call to :meth:`addObj <pyoptsparse.pyOpt_optimi

.. code-block:: python

- optProb.addObj('obj_name')
+ optProb.addObj("obj_name")

What this does is tell pyOptSparse that the key ``obj_name`` in the dictionary returned by the objective function will be taken as the objective.
For optimizers that can do multi-objective optimization (e.g. NSGA2), multiple objectives can be added.
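
In that case the additional objectives are registered with further calls to the same method (hypothetical names):

.. code-block:: python

    optProb.addObj("obj1")
    optProb.addObj("obj2")
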
@@ -264,6 +264,7 @@ The first, and most explicit approach is to directly import the optimizer class,

.. code-block:: python

from pyoptsparse import SLSQP
+
opt = SLSQP(...)

However, in order to easily switch between different optimizers without having to import each class, a convenience function called
@@ -273,6 +274,7 @@ It accepts a string argument in addition to the usual options, and instantiates

.. code-block:: python

from pyoptsparse import OPT
+
opt = OPT("SLSQP", ...)

Note that the name of the optimizer is case-insensitive, so ``slsqp`` can also be used.
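
That is, the call above could equally be written as follows (the trailing ``...`` again stands in for whatever arguments are being passed):

.. code-block:: python

    opt = OPT("slsqp", ...)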