Exponential transformation
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967), is a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
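Such functions are easy to represent programmatically. The sketch below (not from the article) evaluates a posynomial from its coefficient list and exponent matrix:<br />

```python
from math import prod

def posynomial(c, a):
    """Build f(x) = sum_k c[k] * prod_i x[i]**a[k][i], with c[k] > 0 and x[i] > 0."""
    def f(x):
        return sum(ck * prod(xi ** aik for xi, aik in zip(x, ak))
                   for ck, ak in zip(c, a))
    return f

# The two-variable example from the text: 14*x1^3.2*x2^4 + x1^2*x2^7
f = posynomial(c=[14.0, 1.0], a=[[3.2, 4.0], [2.0, 7.0]])
print(f((1.0, 2.0)))  # 14*1*16 + 1*1*128 = 352.0
```

For the two-variable example above, f(1, 2) = 14·16 + 128 = 352.<br />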
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP): the GP formulation constrains the form of both the objective function and the constraints. This restriction carries a potential payoff - while it is more difficult to formulate a problem as a GP, if a problem can be so formulated, highly efficient global solution methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution can be found at all. Of course, GPs have their own set of solution challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this causes problems when attempting to find a globally optimal solution of a posynomial program, since only for convex problems is a local optimum ''guaranteed'' to be global.<br />
<br />
Posynomial or geometric programming has been applied to problems in varied fields, such as circuit design, engineering design, project management, and inventory management, to name a few. The solution of such problems is important to the chemical engineer, and being able to solve them globally equips the engineer with a powerful tool.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GPs. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower bounds on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
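As a sanity check, the box problem can be solved numerically after the exponential substitution described later in this article. The sketch below (not from the article) sets <math>z=(\ln h, \ln w, \ln d)</math>, which turns each monomial constraint into a linear one and the posynomial wall constraint into a convex log-sum-exp term, then hands the problem to SciPy's SLSQP solver with made-up data (<math>A_{wall}=200</math>, <math>A_{floor}=50</math>, ratio bounds [0.5, 2]):<br />

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (not from the article)
A_wall, A_floor = 200.0, 50.0
alpha, beta, gamma, delta = 0.5, 2.0, 0.5, 2.0

# z = (log h, log w, log d); maximizing hwd is minimizing -(z[0]+z[1]+z[2])
def neg_log_volume(z):
    return -np.sum(z)

# Constraints written as fun(z) >= 0; logs turn monomial ratio bounds into
# linear terms and the posynomial wall constraint into a convex log-sum-exp.
cons = [
    {"type": "ineq", "fun": lambda z: np.log(A_wall)
        - np.log(2 * np.exp(z[0] + z[1]) + 2 * np.exp(z[0] + z[2]))},
    {"type": "ineq", "fun": lambda z: np.log(A_floor) - (z[1] + z[2])},
    {"type": "ineq", "fun": lambda z: (z[0] - z[1]) - np.log(alpha)},
    {"type": "ineq", "fun": lambda z: np.log(beta) - (z[0] - z[1])},
    {"type": "ineq", "fun": lambda z: (z[2] - z[1]) - np.log(gamma)},
    {"type": "ineq", "fun": lambda z: np.log(delta) - (z[2] - z[1])},
]

res = minimize(neg_log_volume, x0=np.zeros(3), method="SLSQP", constraints=cons)
h, w, d = np.exp(res.x)  # recover the original variables
```

For this data the optimizer drives the box toward h = w = d = √50 (both area constraints active), giving a maximal volume of about 353.6.<br />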
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of that era aimed to find only locally optimal solutions, and included successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods that use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions spurred the development of other methods for posynomial programs in the 1990s. These included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997 and, under certain conditions, allows posynomial programming problems to be optimized globally - a review of the basics of its formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed to a convex problem for quick solving times. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric programming (GGP) problem as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i > 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation expresses each function as the difference of two convex functions: each of <math> G_j^+(z), G_j^-(z) </math> is a sum of exponentials of linear functions and hence convex. Thus, the reformulation has, in this sense, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables to ensure that their lower bounds are positive; that is, each <math> t_i </math> is replaced by<br />
<br />
<br />
<math> t_i'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the reformulated problem is well defined and can be solved to global optimality. The full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper; the reader is directed there for a more rigorous and comprehensive examination of the underlying theory, as this article is simply meant to introduce the curious reader to the topic.<br />
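The key convexity claim above - that a transformed posynomial term is convex in <math>z</math> even when the original is not convex in <math>t</math> - can be checked numerically. Below is a minimal sketch (illustrative exponents, not from the paper) that tests midpoint convexity at random point pairs:<br />

```python
import math
import random

def g(z):
    """Transformed monomial exp(0.5*z1 + 0.44*z2): exp of a linear map, convex."""
    return math.exp(0.5 * z[0] + 0.44 * z[1])

def f(t):
    """Original monomial t1^0.5 * t2^0.44: not convex on the positive orthant."""
    return t[0] ** 0.5 * t[1] ** 0.44

random.seed(0)
convex_in_z = True                 # stays True if no midpoint rises above a chord
midpoint_above_chord_in_t = False  # becomes True on a witness of nonconvexity
for _ in range(1000):
    a = [random.uniform(-2, 2), random.uniform(-2, 2)]
    b = [random.uniform(-2, 2), random.uniform(-2, 2)]
    mid = [(x + y) / 2 for x, y in zip(a, b)]
    if g(mid) > (g(a) + g(b)) / 2 + 1e-12:
        convex_in_z = False
    # Map the same points into t-space via t = e^z and repeat the check on f
    ta, tb = [math.exp(x) for x in a], [math.exp(x) for x in b]
    tm = [(x + y) / 2 for x, y in zip(ta, tb)]
    if f(tm) > (f(ta) + f(tb)) / 2 + 1e-12:
        midpoint_above_chord_in_t = True
```

Every midpoint check passes for <math>g</math>, while <math>f</math> violates the convexity inequality at many point pairs.<br />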
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine whether the solution to the exponentially transformed problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are identical in both programs. Thus, infeasibility in this case means the original constraints are too "tight" or do not allow for a feasible solution anyway, regardless of the exponential transformation. <br />
<br />
One common approach is to find a point that, while still infeasible for the original problem, is not far from feasibility. One way to do this is to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>g_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as ''s'' approaches 1, the original problem nears feasibility. For example, if the optimal ''s'' = 1.1, then the optimal ''x'' is, roughly speaking, only 10% infeasible for the original problem. The goal of the method is therefore to find a solution with ''s'' = 1, at which ''x'' is feasible for the original geometric program.<br />
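As a toy illustration (not from the article), consider the pair of monomial constraints 2x ≤ 1 and 2/x ≤ 1, which are jointly infeasible. A crude grid search on the phase-1 problem min s s.t. 2x ≤ s, 2/x ≤ s, s ≥ 1 locates the "least infeasible" point:<br />

```python
import math

# For a fixed x > 0, the smallest s satisfying 2x <= s, 2/x <= s, s >= 1:
def phase1_objective(x):
    return max(2 * x, 2 / x, 1.0)

# Grid search over x = e^z: the exponential substitution keeps x positive
best_x = min((math.exp(z / 1000.0) for z in range(-2000, 2001)),
             key=phase1_objective)
s_star = phase1_objective(best_x)
# s_star = 2 > 1, so the original constraint set is infeasible;
# x = 1 is the point "closest" to feasibility in this measure.
```

Here the optimal ''s'' is 2, well above 1, confirming that no ''x'' satisfies both original constraints.<br />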
<br />
==Applications==<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo formulated a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms and can be handled with the exponential transformation method. More generally, chemical systems involving reactors, kinetics, and mass balance equations can be treated with exponential transformations: consider the following mass-action rate law for a generic reaction:<br />
<br />
<math> \text{rate}=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
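Under the substitution <math>C_A=e^{z_1}, C_B=e^{z_2}</math>, the logarithm of this rate expression becomes affine, hence convex, in <math>(z_1, z_2)</math>. A minimal check (with a hypothetical rate constant k = 3, chosen only for illustration):<br />

```python
import math

k = 3.0  # hypothetical rate constant, for illustration only

def rate(CA, CB):
    # The mass-action rate law from the text: k * CA^0.5 * CB^0.44
    return k * CA ** 0.5 * CB ** 0.44

def log_rate(z1, z2):
    # log k + 0.5*z1 + 0.44*z2 is affine (hence convex) in (z1, z2)
    return math.log(k) + 0.5 * z1 + 0.44 * z2

# The two expressions agree once concentrations are mapped through the log
diff = abs(math.log(rate(2.0, 5.0)) - log_rate(math.log(2.0), math.log(5.0)))
```

The difference is at floating-point noise level, confirming the two forms are the same function in different coordinates.<br />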
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections of CMOS operational amplifiers. Key trade-offs they sought to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. One nonconvex posynomial constraint arising in their analysis involves the voltage limits on a sample transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (bias current and transistor dimensions) results in a nonconvex term that can be handled with the exponential transformation. Many other problems in electrical engineering can likewise benefit from it.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management. See the tutorial under "External Links and References" for an excellent resource recapping the theory of this article on geometric programming as well as a plethora of other applications and examples. <br />
<br />
Examples under the External Links and References section include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning (arranging objects such as furniture or process units on a floor)<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
An illustrative example involving digital circuit optimization via geometric programming is given by Boyd, Kim, Patil, and Horowitz in the references section.<br />
<br />
==Examples==<br />
<br />
The convexification of a few objective functions and constraints is shown in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that a globally optimal solution is indeed found.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14\exp\left(3.2z_1+4z_2\right) + \exp\left(2z_1+7z_2\right) </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
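Note that the transformed bound constraints hold automatically, since e^z &gt; 0 for every real z. The equivalence of the two objectives can be spot-checked numerically; a quick sketch:<br />

```python
import math

def f_original(t1, t2):
    # Objective of Example 1 in the original variables
    return 14 * t1 ** 3.2 * t2 ** 4 + t1 ** 2 * t2 ** 7

def f_transformed(z1, z2):
    # The same objective after the substitution t_i = e^{z_i}
    return 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

# Any positive test point should give the same value through either route
t1, t2 = 1.3, 0.7
a = f_original(t1, t2)
b = f_transformed(math.log(t1), math.log(t2))
```

The two values agree to floating-point precision at any positive test point.<br />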
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=\exp\left(0.6z_1+2z_2\right) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
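Because the logarithm is monotone, the transformed constraint describes exactly the same feasible set (in mapped coordinates) as the original one; a quick random spot-check:<br />

```python
import math
import random

def original_holds(t1, t2):
    # Example 2 constraint in the original variables: t1/t2 <= t1^0.7
    return t1 / t2 <= t1 ** 0.7

def transformed_holds(z1, z2):
    # The same constraint after taking logs: z1 - z2 <= 0.7*z1
    return z1 - z2 <= 0.7 * z1

random.seed(1)
agree = all(
    original_holds(t1, t2) == transformed_holds(math.log(t1), math.log(t2))
    for t1, t2 in ((random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
                   for _ in range(1000))
)
```

At every sampled point the two constraints accept or reject together.<br />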
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.<br />
<br />
9. S. Boyd, S.J. Kim, L. Vandenberghe, A. Hassibi, ''A Tutorial on Geometric Programming''.<br />
<br />
10. S. Boyd, S.J. Kim, D. Patil, M. Horowitz, ''Digital Circuit Optimization via Geometric Programming''.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''w/d''. The volume of the box, ''hwd'' is to be optimized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems - a review of the basics of formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed to a convex problem for quick solving times. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine that the solution to the exponentially transformed convex is feasible to the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are identical in both programs. Thus, infeasibility in this case means the original constraints are too "tight" or do not allow for a feasible solution anyway, regardless of the exponential transformation. <br />
<br />
One common method to determine feasibility between the transformed problem and the original geometric program is to find a point that is, albeit, still infeasible to the original problem, but not far from feasibility. One way to do this might be to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>g_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as s nears a value of 1, the original problem nears feasibility. For example, if the optimal s = 1.1, then the optimal x is, theoretically, only 10% infeasible for the original problem. Thus, the goal of method is to find a solution such that s = 1, and x is feasible to the original geometric program.<br />
<br />
==Applications==<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management. See the tutorial under "External Links and References" for an excellent resource recapping the theory of this article on geometric programming as well as a plethora of other applications and examples. <br />
<br />
Examples under the External Links and References section include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning for configuration of potentially furniture, process units, etc. on a floor<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
An illustrative example involving digital circuit optimization via geometric programming is given next.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.<br />
<br />
9. S. Boyd, S. J. Kim, L. Vandenberghe, and A. Hassibi, A Tutorial on Geometric Programming<br />
<br />
10. S. Boyd, S. J. Kim, D. Patil, and M. Horowitz Digital Circuit Optimization via Geometric Programming</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-09T02:34:52Z<p>DGarcia90: /* Digital Circuit Optimization via Geometric Programming */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial is defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
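As a quick sanity check of the definition, the following minimal Python sketch (the `posynomial` helper is illustrative, not part of the original text) evaluates a posynomial given as a list of (coefficient, exponent-tuple) terms; with positive coefficients and positive arguments the value is always positive:

```python
# Evaluate a posynomial given as a list of (coefficient, exponent-tuple) terms.
def posynomial(terms, x):
    total = 0.0
    for c, exponents in terms:
        term = c
        for xi, a in zip(x, exponents):
            term *= xi ** a  # x_i^{a_ik}; each x_i must be positive
        total += term
    return total

# f(x1, x2) = 14*x1^3.2*x2^4 + x1^2*x2^7, the two-variable example above
f = [(14.0, (3.2, 4.0)), (1.0, (2.0, 7.0))]
print(posynomial(f, (1.0, 1.0)))  # 14 + 1 = 15.0
```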
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP): the form of both the objective function and the constraints is constrained. This represents a potential payoff - while it is more difficult to formulate a problem as a GP because of these restrictions, if a problem can be formulated as a GP, highly efficient global solution methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution can be found at all. Of course, GPs have their own set of solution challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this causes problems when attempting to find a globally optimal solution of a posynomial program, since only for convex problems is a local optimum ''guaranteed'' to be globally optimal.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, to name a few. Clearly, the solution of such problems is important to the chemical engineer, and being able to solve them globally equips the engineer with a powerful tool for a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GPs. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
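To see that the two formulations agree, the following sketch checks the original and standard-form constraints at a sample point (all numeric values here are hypothetical, chosen only for illustration):

```python
# Check that the standard-form GP constraints are equivalent to the original
# box constraints at a sample point (illustrative values only).
A_wall, A_floor = 100.0, 50.0
alpha, beta, gamma, delta = 0.5, 2.0, 0.5, 2.0
h, w, d = 3.0, 4.0, 5.0

original = [
    2 * (h * w + h * d) <= A_wall,
    w * d <= A_floor,
    alpha <= h / w <= beta,
    gamma <= d / w <= delta,
]
standard = [
    (2 / A_wall) * h * w + (2 / A_wall) * h * d <= 1,
    (1 / A_floor) * w * d <= 1,
    alpha * h ** -1 * w <= 1,
    (1 / beta) * h * w ** -1 <= 1,
    gamma * w * d ** -1 <= 1,
    (1 / delta) * w ** -1 * d <= 1,
]
print(all(original) == all(standard))  # the two forms agree at this point
```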
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of the day aimed to find only locally optimal solutions, employing techniques such as successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions spurred the development of other methods for posynomial programs in the 1990s. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems - a review of the basics of formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed into convex form so that efficient solution methods can be applied. This can be achieved with the exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i > 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
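The substitution <math> t_i=e^{z_i} </math> leaves the value of each posynomial unchanged; the following minimal numeric check illustrates this (the helper names are illustrative, not from the original paper):

```python
import math

# Posynomial G(t) = sum_k c_k * prod_i t_i^{a_ik}, variables strictly positive.
def G_t(terms, t):
    return sum(c * math.prod(ti ** a for ti, a in zip(t, exps)) for c, exps in terms)

# Transformed form G(z) = sum_k c_k * exp(sum_i a_ik * z_i).
def G_z(terms, z):
    return sum(c * math.exp(sum(a * zi for zi, a in zip(z, exps))) for c, exps in terms)

terms = [(14.0, (3.2, 4.0)), (1.0, (2.0, 7.0))]  # example posynomial from the text
t = (0.7, 1.9)
z = tuple(math.log(ti) for ti in t)
print(abs(G_t(terms, t) - G_z(terms, z)) < 1e-9)  # identical under t_i = e^{z_i}
```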
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables to ensure that their lower bounds are positive; that is, for each <math> t_i </math>, take:<br />
<br />
<br />
<math> t_i'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the reformulated problem consists of differences of convex functions, and a globally optimal solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper; the reader is directed there for a more rigorous and comprehensive treatment of the theory, as this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
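A sketch of this pre-scaling step follows (the function name and epsilon value are illustrative, assuming a simple elementwise shift of each variable and its bound):

```python
# Shift each variable so its lower bound becomes strictly positive (by an
# epsilon margin), so that z = ln(t') is well defined; a sketch of the
# pre-scaling idea described above.
def prescale(t_values, t_lower, eps=1e-6):
    shifted = []
    for t, tL in zip(t_values, t_lower):
        shift = max(0.0, -tL + eps)  # only shift when the bound is too low
        shifted.append(t + shift)
    return shifted

# A variable with lower bound -2 is shifted by 2 + eps; one with bound 1 is not.
print(prescale([0.0, 5.0], [-2.0, 1.0], eps=0.5))  # [2.5, 5.0]
```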
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to verify that the solution to the exponentially transformed convex problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are equivalent in both programs. Infeasibility in this case means the original constraints are too "tight" to allow a feasible solution, regardless of the exponential transformation. <br />
<br />
One common method to assess feasibility is to find a point that is still infeasible for the original problem, but not far from feasibility. One way to do this is to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as s nears a value of 1, the original problem nears feasibility. For example, if the optimal s = 1.1, then the optimal x is, roughly speaking, only 10% infeasible for the original problem. The goal of the method is to find a solution such that s = 1, in which case x is feasible for the original geometric program.<br />
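The optimal s can be read as a scalar infeasibility measure: at a fixed point x it is simply the largest constraint value. A minimal sketch, with toy constraint functions that are purely illustrative:

```python
# Infeasibility measure at a fixed point x: the largest value of f_i(x) over
# the inequality constraints f_i(x) <= 1 (assuming the monomial equalities
# already hold). A value <= 1 means x is feasible.
def infeasibility(constraint_fns, x):
    return max(f(x) for f in constraint_fns)

# Two toy posynomial constraints f_i(x) <= 1 (illustrative only)
fns = [lambda x: 0.5 * x[0] * x[1],   # f1
       lambda x: 2.0 * x[0] ** -1]    # f2
s = infeasibility(fns, (2.0, 1.1))
print(round(s, 2))  # 1.1, i.e. this x is about 10% infeasible
```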
<br />
==Applications==<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo published a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms and can be transformed with the exponential transformation method. Thus, chemical systems involving reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following mass-action rate law for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
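Under the substitution <math> C_A=e^{z_A}, C_B=e^{z_B} </math>, the logarithm of this rate law becomes linear in the transformed variables; a quick numeric check (the constants are arbitrary illustrative values):

```python
import math

# The rate law r = k * CA^0.5 * CB^0.44 becomes linear after the substitution
# CA = e^{zA}, CB = e^{zB}:  ln r = ln k + 0.5*zA + 0.44*zB.
k, CA, CB = 3.0, 4.0, 9.0  # illustrative values
rate = k * CA ** 0.5 * CB ** 0.44
zA, zB = math.log(CA), math.log(CB)
log_rate = math.log(k) + 0.5 * zA + 0.44 * zB
print(abs(math.log(rate) - log_rate) < 1e-9)  # the two expressions agree
```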
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they optimized included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint from their analysis involves the voltage constraints on a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (drain current and transistor dimensions) results in a nonconvex term that can be handled with the exponential transformation. In the same vein, many other problems in electrical engineering can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications arise in finance, supply chain management, and project management. See the tutorial by Boyd et al. (listed in the References) for an excellent resource recapping the theory of geometric programming presented in this article, along with a plethora of other applications and examples. <br />
<br />
Examples in that tutorial include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning (arranging furniture, process units, etc. on a floor)<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
Worked examples of the exponential transformation itself are given next.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints is illustrated in this section. The resulting reformulations can be tested in GAMS with an appropriate solver to verify that a globally optimal solution is indeed obtained.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 > 0 </math><br />
:::<math> t_2 > 0 </math><br />
<br />
<br />
After the exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
<br />
with <math> z_1 </math> and <math> z_2 </math> unrestricted, since <math> t_i=e^{z_i} > 0 </math> holds automatically.<br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
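Because the logarithm is monotone, the transformed linear constraint accepts exactly the same points as the original monomial constraint; a small numeric check over a few sample points (helper names are illustrative):

```python
import math

# Example 2: taking logs of the monomial constraint t1/t2 <= t1^0.7
# gives the linear constraint z1 - z2 <= 0.7*z1; both describe the same set.
def original_holds(t1, t2):
    return t1 / t2 <= t1 ** 0.7

def transformed_holds(t1, t2):
    z1, z2 = math.log(t1), math.log(t2)
    return z1 - z2 <= 0.7 * z1

for t1, t2 in [(2.0, 3.0), (5.0, 0.5), (0.3, 0.2)]:
    print(original_holds(t1, t2) == transformed_holds(t1, t2))  # True each time
```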
<br />
There are a multitude of cases where this reformulation can be applied, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
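For concreteness, a posynomial can be evaluated directly from its coefficient and exponent data. The short sketch below is illustrative only (the function name and data layout are ours, not from the references):<br />

```python
import math

def posynomial(coeffs, exponents, x):
    """Evaluate f(x) = sum_k c_k * prod_i x_i^{a_ik} for positive x."""
    return sum(c * math.prod(xi ** a for xi, a in zip(x, exps))
               for c, exps in zip(coeffs, exponents))

# f(x1, x2) = 14*x1^3.2*x2^4 + x1^2*x2^7, evaluated at x = (1, 2)
val = posynomial([14.0, 1.0], [[3.2, 4.0], [2.0, 7.0]], [1.0, 2.0])
```

Here <math> f(1,2)=14 \cdot 2^4 + 2^7 = 352 </math>, matching the first example above.<br />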
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution can be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this causes problems when attempting to find a globally optimal solution of a posynomial program, since only convex problems ''guarantee'' that a local optimum is also globally optimal.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems is important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
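As a sanity check on the box model (not on the GP solution method itself), a crude grid search under assumed illustrative limits finds a feasible, near-optimal box:<br />

```python
A_wall, A_floor = 200.0, 50.0   # assumed illustrative area limits
alpha, beta = 0.5, 2.0          # assumed bounds on h/w
gamma, delta = 0.5, 2.0         # assumed bounds on d/w

def feasible(h, w, d):
    """Check all four box constraints at a candidate (h, w, d)."""
    return (2 * (h * w + h * d) <= A_wall and w * d <= A_floor
            and alpha <= h / w <= beta and gamma <= d / w <= delta)

grid = [0.25 * i for i in range(1, 41)]  # 0.25 .. 10.0
vol, h, w, d = max((hh * ww * dd, hh, ww, dd)
                   for hh in grid for ww in grid for dd in grid
                   if feasible(hh, ww, dd))
```

With these limits the search settles near a cube of side 7, volume 343, consistent with intuition from the geometric-arithmetic mean inequality.<br />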
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods of the time aimed to find only locally optimal solutions, and included successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems - a review of the basics of formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials can be transformed into convex problems, which can then be solved quickly and globally. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric programming (GGP) problem as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables to ensure that their lower bounds are positive; that is, each <math> t_i </math> is shifted as follows:<br />
<br />
<br />
<math> t_i'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \quad \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive treatment of the underlying theory; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
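The substitution <math> t_i=e^{z_i} </math> merely reparameterizes the problem; a small numerical sketch (with invented coefficient and exponent data) confirms that the two forms of <math> G_j </math> agree:<br />

```python
import math

def G_t(c, a, t):
    # G(t) = sum_k c_k * prod_i t_i^{alpha_ik}, the original posynomial form
    return sum(ck * math.prod(ti ** aik for ti, aik in zip(t, ak))
               for ck, ak in zip(c, a))

def G_z(c, a, z):
    # the same posynomial after the substitution t_i = exp(z_i)
    return sum(ck * math.exp(sum(aik * zi for zi, aik in zip(z, ak)))
               for ck, ak in zip(c, a))

c = [3.0, 0.5]                       # invented positive coefficients
a = [[1.2, -0.7], [2.0, 0.3]]        # invented real exponents
t = [2.0, 0.5]                       # a positive test point
z = [math.log(ti) for ti in t]       # its image under the transformation
```

Each term of <math> G_z </math> is the exponential of an affine function of <math> z </math>, which is the source of the convexity discussed above.<br />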
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine whether the solution to the exponentially transformed convex problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are identical in both programs. Thus, infeasibility in this case means the original constraints are too "tight" to admit a feasible solution, regardless of the exponential transformation. <br />
<br />
One common method to determine feasibility between the transformed problem and the original geometric program is to find a point that, while still infeasible for the original problem, is not far from feasibility. One way to do this might be to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as s nears a value of 1, the original problem nears feasibility. For example, if the optimal s = 1.1, then the optimal x is, theoretically, only 10% infeasible for the original problem. The goal of the method, then, is to find a solution such that s = 1, in which case x is feasible for the original geometric program.<br />
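A toy version of this auxiliary problem can be illustrated with a crude search over candidate points; the two constraint functions below are invented for illustration:<br />

```python
# two toy posynomial constraints f_i(x) <= 1 on x > 0 (invented for illustration)
def f1(x1, x2): return 0.5 * x1 * x2
def f2(x1, x2): return 2.0 / (x1 * x2)

def slack(x1, x2):
    # smallest s with f_i(x) <= s and s >= 1 at this candidate point
    return max(1.0, f1(x1, x2), f2(x1, x2))

# crude search: the original problem is feasible iff the optimal s equals 1
candidates = [(a / 10, b / 10) for a in range(1, 51) for b in range(1, 51)]
s_star = min(slack(a, b) for a, b in candidates)
```

Here the two constraints force <math> x_1x_2=2 </math> exactly, so the search returns s = 1 at points such as (1, 2), confirming the original toy problem is (just barely) feasible.<br />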
<br />
==Applications==<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo formulated a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following mass-action rate law for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
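Under the exponential transformation <math> C_A=e^{z_A}, C_B=e^{z_B} </math>, the logarithm of this rate law becomes linear (and hence convex) in the new variables. A brief sketch with an assumed rate constant:<br />

```python
import math

k = 2.0  # assumed illustrative rate constant

def rate(CA, CB):
    # the nonconvex monomial rate law: k * CA^0.5 * CB^0.44
    return k * CA ** 0.5 * CB ** 0.44

def log_rate(zA, zB):
    # linear in z after substituting CA = exp(zA), CB = exp(zB)
    return math.log(k) + 0.5 * zA + 0.44 * zB

CA, CB = 3.0, 4.0
lhs = math.log(rate(CA, CB))
rhs = log_rate(math.log(CA), math.log(CB))
```

Because monomials become affine under the transformation, rate laws of this form fit the GP framework directly.<br />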
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. One nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, channel length) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems in electrical engineering can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be found in the fields of finance, supply chain management, and project management. See the tutorial under "External Links and References" for an excellent resource recapping the theory of geometric programming covered in this article, as well as a plethora of other applications and examples. <br />
<br />
Examples under the External Links and References section include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning for the configuration of furniture, process units, etc. on a floor<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
An illustrative example involving digital circuit optimization via geometric programming is given next.<br />
<br />
===Digital Circuit Optimization via Geometric Programming===<br />
<br />
Boyd et al. published an article in 2005 employing geometric programming methods to optimize the design of a digital circuit. Specifically, they optimized the gate sizing of a circuit (see Figure 1).<br />
<br />
[[File:Circuit.JPG|200px|thumb|left|Figure 1 from Boyd et al (2005) concerning the to-be-optimized circuit.]]<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
<br />
Note that the positivity constraints are satisfied automatically, since the exponential function is strictly positive.<br />
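Since the transformed objective is a sum of exponentials of affine functions, it is convex; the defining inequality can be spot-checked numerically (an illustrative sketch, not a proof):<br />

```python
import math, random

def f(z1, z2):
    # transformed objective from Example 1: a sum of exponentials of affine terms
    return 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

random.seed(0)
convex_ok = True
for _ in range(500):
    a = (random.uniform(-2, 2), random.uniform(-2, 2))
    b = (random.uniform(-2, 2), random.uniform(-2, 2))
    lam = random.random()
    mid = tuple(lam * ai + (1 - lam) * bi for ai, bi in zip(a, b))
    # convexity: f on the chord never exceeds the interpolated values
    lhs = f(*mid)
    rhs = lam * f(*a) + (1 - lam) * f(*b)
    convex_ok = convex_ok and lhs <= rhs * (1 + 1e-9)
```

A random spot-check of this kind can only refute convexity, never establish it; here it simply corroborates the theory above.<br />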
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
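A quick numerical sketch confirms that the original and transformed constraints of Example 2 agree for positive variables:<br />

```python
import math, random

def original_ok(t1, t2):
    # original nonconvex constraint: t1/t2 <= t1^0.7
    return t1 / t2 <= t1 ** 0.7

def transformed_ok(z1, z2):
    # linear constraint after t_i = exp(z_i): z1 - z2 <= 0.7*z1
    return z1 - z2 <= 0.7 * z1

random.seed(1)
agree = all(
    original_ok(t1, t2) == transformed_ok(math.log(t1), math.log(t2))
    for t1, t2 in ((random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
                   for _ in range(1000))
)
```

Since the logarithm is monotone, taking logs of both sides of the original constraint preserves the inequality, which is why the two tests coincide.<br />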
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-06T20:55:55Z<p>DGarcia90: /* Digital Circuit Optimization via Geometric Programming */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial is defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP): the GP form constrains the structure of both the objective function and the constraints. This restriction carries a potential payoff. While it is more difficult to cast a problem as a GP, if a problem can be so formulated, highly efficient global solution methods can be employed; in the NLP case, one may have to settle for a local solution, if a solution can be found at all. Of course, GPs have their own solution challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. This complicates the search for a globally optimal solution of a posynomial program, since only for convex problems is a local optimum ''guaranteed'' to be global.<br />
<br />
Posynomial or geometric programming has been applied to problems in varied fields, such as circuit design, engineering design, project management, and inventory management, to name a few. The solution of such problems is important to the chemical engineer, and the ability to solve them globally equips the engineer with a powerful tool for a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GPs. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower bounds on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
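The rearrangement can be spot-checked numerically. In the Python sketch below, the area limits and ratio bounds are arbitrary assumed values; each standard-form constraint agrees with its original counterpart at any positive point:<br />

```python
h, w, d = 2.0, 3.0, 1.5                            # arbitrary positive test point
A_wall, A_floor = 100.0, 50.0                      # assumed area limits
alpha, beta, gamma, delta = 0.5, 2.0, 0.4, 2.5     # assumed ratio bounds

# each original constraint and its standard GP form agree (as booleans)
assert (2*(h*w + h*d) <= A_wall) == ((2/A_wall)*h*w + (2/A_wall)*h*d <= 1)
assert (w*d <= A_floor)          == ((1/A_floor)*w*d <= 1)
assert (alpha <= h/w)            == (alpha * h**-1 * w <= 1)
assert (h/w <= beta)             == ((1/beta) * h * w**-1 <= 1)
assert (gamma <= d/w)            == (gamma * w * d**-1 <= 1)
assert (d/w <= delta)            == ((1/delta) * w**-1 * d <= 1)
print("standard-form constraints match the originals at the test point")
```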
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of that era aimed to find only locally optimal solutions, employing techniques such as successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods that use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions spurred the development of further methods for posynomial programs in the 1990s. These included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxations of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems - a review of the basics of formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with nonconvex posynomial terms can be transformed into an equivalent convex problem, which can then be solved quickly and globally. This is achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric programming (GGP) problem as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i^L \le t_i \le t_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the vector of positive variables; <math> G_j^+, G_j^-, j=0,...,M </math> are posynomial functions of <math>t</math>; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and <math> c_{jk} </math> are strictly positive coefficients. The index sets <math> K_j^+, K_j^- </math> collect the positively and negatively signed monomials forming <math> G_j^+, G_j^- </math>, respectively; that is, the formulation is constructed by grouping together monomials of identical sign.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^{\sum_{i=1}^N \alpha_{ijk}z_i} , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^{\sum_{i=1}^N \alpha_{ijk}z_i} , j=0,...,M </math><br />
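Each transformed posynomial is a sum of exponentials of affine functions of <math>z</math>, and is therefore convex. This can be spot-checked numerically via midpoint convexity; the coefficients and exponents below are arbitrary assumed values for the sketch:<br />

```python
import math
import random

c = [2.0, 0.7, 1.3]                          # assumed positive coefficients c_jk
A = [[1.5, -0.4], [0.0, 2.0], [-1.0, 0.3]]   # assumed exponents alpha_ijk

def G(z):
    """G(z) = sum_k c[k] * exp(sum_i A[k][i] * z[i]) -- a transformed posynomial."""
    return sum(ck * math.exp(sum(a * zi for a, zi in zip(ak, z)))
               for ck, ak in zip(c, A))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-2, 2), random.uniform(-2, 2)]
    v = [random.uniform(-2, 2), random.uniform(-2, 2)]
    mid = [(a + b) / 2 for a, b in zip(u, v)]
    # midpoint convexity: G((u+v)/2) <= (G(u)+G(v))/2
    assert G(mid) <= 0.5 * G(u) + 0.5 * G(v) + 1e-9
print("midpoint convexity held on 1000 random pairs")
```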
<br />
<br />
Note that this reformulation expresses the problem as the difference of two convex functions; in this sense it has "convexified" the original nonconvex terms. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for the reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables so that their lower bounds become positive: each <math>t_i</math> is shifted according to<br />
<br />
<br />
<math> t_i' = t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \quad \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper; the reader is directed there for a more rigorous and comprehensive treatment of the theory, as this article is only meant to introduce the topic.<br />
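The pre-scaling shift can be sketched in a few lines of Python (the function name and default tolerance are this sketch's own choices):<br />

```python
def preshift(t, t_lower, eps=1e-3):
    """Shift a variable so its lower bound becomes at least eps > 0,
    mirroring t_i' = t_i + max(0, -t_i^L + eps)."""
    return t + max(0.0, -t_lower + eps)

# a nonpositive lower bound of -2.0 is shifted up to (approximately) eps,
# so the logarithm z = ln(t') is well defined over the whole shifted range;
# a variable whose lower bound is already positive is left untouched
assert abs(preshift(-2.0, -2.0) - 1e-3) < 1e-12
assert preshift(5.0, 1.0) == 5.0
```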
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine whether the solution to the exponentially transformed convex problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are equivalent in both programs. Infeasibility in this case means the original constraints are too "tight" to allow a feasible solution, regardless of the exponential transformation. <br />
<br />
One common method of probing feasibility is to find a point that, while still infeasible for the original problem, is not far from feasibility. One way to do this is to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>g_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as ''s'' approaches 1, the original problem approaches feasibility. For example, if the optimal ''s'' = 1.1, then the optimal ''x'' violates the worst constraint by only 10%. The goal of the method is to find a solution with ''s'' = 1, at which point ''x'' is feasible for the original geometric program.<br />
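The role of ''s'' can be illustrated with a small Python sketch (the helper name is this sketch's own):<br />

```python
def infeasibility_margin(constraint_values):
    """Smallest s >= 1 with f_i(x) <= s for all i, given the values f_i(x)
    of the posynomial constraints at a candidate point x."""
    return max(1.0, max(constraint_values))

# one constraint is violated by 10%, so the point is "10% infeasible"
assert infeasibility_margin([0.8, 1.1, 0.95]) == 1.1
# all constraints satisfied: s = 1, and the point is feasible
assert infeasibility_margin([0.8, 0.95]) == 1.0
```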
<br />
==Applications==<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo published a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be handled with the exponential transformation method. Chemical systems involving reactors, kinetics, and mass balance equations can thus be treated with exponential transformations: consider the following power-law rate expression for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This expression is nonconvex as it stands, but is a good candidate for exponential transformation. <br />
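Under the substitution <math> C_A=e^{z_A}, C_B=e^{z_B} </math>, the rate becomes <math> k e^{0.5z_A+0.44z_B} </math>, a convex function of <math> (z_A, z_B) </math>. A quick numerical check (the rate constant and concentrations below are arbitrary assumed values):<br />

```python
import math

k = 2.0                 # assumed rate constant
CA, CB = 0.5, 1.5       # assumed concentrations

zA, zB = math.log(CA), math.log(CB)
rate_original = k * CA**0.5 * CB**0.44          # power-law rate in (CA, CB)
rate_transformed = k * math.exp(0.5*zA + 0.44*zB)  # same rate in (zA, zB)
assert math.isclose(rate_original, rate_transformed)
```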
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections of CMOS operational amplifiers. Key trade-offs they sought to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint from their analysis is a voltage constraint on one transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the design variables (bias current and transistor dimensions) yields a nonconvex term that can be handled with the exponential transformation. Many other problems in electrical engineering can benefit from the transformation in the same way.<br />
<br />
<br />
Additional applications arise in finance, supply chain management, and project management. See the tutorial under "External Links and References" for an excellent resource recapping the geometric programming theory of this article, along with a wealth of other applications and examples. <br />
<br />
Examples under the External Links and References section include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning (the configuration of furniture, process units, etc. on a floor)<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
An illustrative example involving digital circuit optimization via geometric programming is given next.<br />
<br />
===Digital Circuit Optimization via Geometric Programming===<br />
<br />
Boyd et al. published an article in 2005 employing geometric programming methods to optimize the design of a digital circuit. Specifically, they optimized the gate sizing of a circuit (see Figure 1).<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After the exponential transformation <math> t_i=e^{z_i} </math>, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
<br />
Note that the transformed positivity constraints are satisfied automatically, since <math> e^{z_i} > 0 </math> for any real <math> z_i </math>.<br />
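The original and transformed objectives agree at corresponding points <math> t_i=e^{z_i} </math>, which can be checked numerically:<br />

```python
import math

def f_original(t1, t2):
    return 14*t1**3.2*t2**4 + t1**2*t2**7

def f_transformed(z1, z2):
    return 14*math.exp(3.2*z1 + 4*z2) + math.exp(2*z1 + 7*z2)

# at any positive (t1, t2), f(t) equals f(z) with z_i = ln(t_i)
for t1, t2 in [(0.5, 2.0), (1.0, 1.0), (3.0, 0.25)]:
    assert math.isclose(f_original(t1, t2),
                        f_transformed(math.log(t1), math.log(t2)))
print("objectives agree at all test points")
```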
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. This complicates the search for a globally optimal solution of a posynomial program, since only for convex problems is a local optimum ''guaranteed'' to be global.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems is important to the chemical engineer, and being able to solve them globally equips the engineer with a powerful tool for a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated as GPs. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>; the floor area ''wd'' is limited to some <math>A_{floor}</math>; and there are upper and lower bounds on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
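As a quick numerical check, the rearranged standard-form constraints can be compared with the original box constraints at a sample point. The bound values and box dimensions below are illustrative assumptions, not from the source.<br />

```python
# Illustrative bound values and box dimensions (assumed for this check).
A_wall, A_floor = 100.0, 40.0
alpha, beta, gamma, delta = 0.5, 2.0, 0.5, 2.0
h, w, d = 3.0, 5.0, 4.0

# Original constraints versus their standard-form GP rearrangements.
original = [
    2*(h*w + h*d) <= A_wall,
    w*d <= A_floor,
    alpha <= h/w <= beta,
    gamma <= d/w <= delta,
]
standard = [
    (2/A_wall)*h*w + (2/A_wall)*h*d <= 1,
    (1/A_floor)*w*d <= 1,
    alpha * h**-1 * w <= 1,      # equivalent to alpha <= h/w
    (1/beta) * h * w**-1 <= 1,   # equivalent to h/w <= beta
    gamma * w * d**-1 <= 1,      # equivalent to gamma <= d/w
    (1/delta) * w**-1 * d <= 1,  # equivalent to d/w <= delta
]
print(all(original), all(standard))  # True True
```

Each standard-form inequality is the original one divided through by its right-hand side, which is why the two lists agree at any positive point.<br />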
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of the day aimed to find only locally optimal solutions, employing techniques such as successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods that use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions spurred the development of other methods for posynomial programs in the 1990s. These included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997 and, under certain conditions, provides a way to globally optimize posynomial programming problems. A review of the basic formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials can be transformed into convex problems, enabling efficient global solution. This can be achieved with the exponential transformation described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric program (GGP) as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i^L \le t_i \le t_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the vector of positive variables; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in <math>t</math>; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and <math> c_{jk} </math> are strictly positive coefficients. The sets <math> K_j^+, K_j^- </math> index the positively and negatively signed monomials that form the posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials of identical sign.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation expresses each function as the difference of two convex functions; in this sense it has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables so that their lower bounds are positive; that is, each <math> t_i </math> is shifted according to<br />
<br />
<br />
<math> t_i' = t_i + \max \left ( 0, -t_i^L + \epsilon \right ), \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the transformed problem is well defined, and a globally optimal solution can be obtained. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper; the reader is directed there for a more rigorous and comprehensive treatment of the theory, as this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
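The pre-scaling step above can be sketched numerically: shift any variable whose lower bound is nonpositive so that the shifted bound is strictly positive, after which the log-space bound exists. The function name and the values of <math>t_i^L</math> and <math>\epsilon</math> below are illustrative assumptions.<br />

```python
import math

# Sketch of the pre-scaling step: shift a variable whose lower bound is
# nonpositive so that the shifted bound is strictly positive.
def prescale_shift(t_lower, eps=1e-3):
    """Shift to add to t_i so the shifted lower bound is at least eps."""
    return max(0.0, -t_lower + eps)

t_lower = -2.0                     # would make z = ln(t) undefined
shift = prescale_shift(t_lower)
shifted_lower = t_lower + shift    # equals eps: strictly positive
z_lower = math.log(shifted_lower)  # log-space lower bound is well-defined
print(shift, shifted_lower)

# A lower bound that is already positive needs no shift:
print(prescale_shift(0.5))  # 0.0
```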
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine that the solution to the exponentially transformed convex problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are equivalent in both programs. Infeasibility in this case means the original constraints are too "tight" to admit a feasible solution, regardless of the exponential transformation. <br />
<br />
One common method to gauge feasibility is to find a point that, while still infeasible for the original problem, is not far from feasibility. One way to do this is to set up the following geometric program with an added slack variable ''s'':<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as ''s'' approaches 1, the original problem nears feasibility. For example, if the optimal ''s'' = 1.1, then the optimal ''x'' is, roughly speaking, only 10% infeasible for the original problem. The goal of the method is to find a solution with ''s'' = 1, so that ''x'' is feasible for the original geometric program.<br />
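For a fixed candidate point ''x'', the smallest admissible slack is simply the larger of 1 and the worst constraint value, which gives a direct measure of how infeasible ''x'' is. The toy constraint posynomials below are illustrative assumptions, not taken from the source.<br />

```python
# For a candidate x, the smallest admissible slack is
# s(x) = max(1, max_i f_i(x)); x is feasible for the original GP
# exactly when s(x) = 1.
def min_slack(x, constraints):
    return max(1.0, max(f(x) for f in constraints))

constraints = [
    lambda x: 0.5 * x[0] * x[1],  # toy posynomial constraint f1(x) <= 1
    lambda x: x[0]**2 / x[1],     # toy posynomial constraint f2(x) <= 1
]

print(min_slack((1.0, 1.0), constraints))  # 1.0  -> feasible point
print(min_slack((1.5, 1.0), constraints))  # 2.25 -> infeasible point
```

Minimizing this slack over both ''s'' and ''x'', as in the geometric program above, then searches for a point that drives the worst constraint value down to 1.<br />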
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo formulated a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms and can be handled with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balances can be treated with exponential transformations. Consider the following mass-action rate law for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
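Since the rate expression is a single monomial, taking logarithms makes it linear (hence convex) in the log variables. A quick numerical check, with illustrative values of ''k'' and the concentrations:<br />

```python
import math

# The rate law rate = k * C_A^0.5 * C_B^0.44 is a monomial, so in log
# variables it is linear: ln(rate) = ln k + 0.5 ln C_A + 0.44 ln C_B.
k, C_A, C_B = 2.0, 4.0, 9.0  # illustrative values
rate = k * C_A**0.5 * C_B**0.44
log_rate = math.log(k) + 0.5*math.log(C_A) + 0.44*math.log(C_B)
print(abs(math.log(rate) - log_rate))  # ~0 (floating-point noise)
```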
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they sought to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints on a sample transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{(\mu_pC_{ox}/2)W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management. See the tutorial under "External Links and References" for an excellent resource recapping the theory of this article on geometric programming as well as a plethora of other applications and examples. <br />
<br />
Examples under the External Links and References section include applications concerning:<br />
<br />
- Power control in communications systems<br />
<br />
- Optimal doping profile in semiconductor device engineering<br />
<br />
- Floor planning for the configuration of furniture, process units, etc. on a floor<br />
<br />
- Digital circuit gate sizing<br />
<br />
- The optimal design of a mechanical truss system<br />
<br />
- Wire segment sizing within an integrated circuit<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints is illustrated in this section. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that a globally optimal solution is indeed found.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
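The substitution <math>t_i=e^{z_i}</math> can be verified numerically: the transformed objective evaluated at <math>z_i=\ln t_i</math> should agree with the original objective at any positive point. The sample point below is an arbitrary illustrative choice.<br />

```python
import math

# Numerical check of Example 1: with t_i = e^{z_i}, the transformed
# objective agrees with the original at any positive point.
def f_orig(t1, t2):
    return 14 * t1**3.2 * t2**4 + t1**2 * t2**7

def f_trans(z1, z2):
    return 14 * math.exp(3.2*z1 + 4*z2) + math.exp(2*z1 + 7*z2)

t1, t2 = 1.3, 0.7  # arbitrary positive sample point
z1, z2 = math.log(t1), math.log(t2)
print(abs(f_orig(t1, t2) - f_trans(z1, z2)))  # ~0 (floating-point noise)
```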
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''w/d''. The volume of the box, ''hwd'' is to be optimized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems - a review of the basics of formulation and theory is presented in the next section.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed to a convex problem for quick solving times. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine that the solution to the exponentially transformed convex is feasible to the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are identical in both programs. Thus, infeasibility in this case means the original constraints are too "tight" or do not allow for a feasible solution anyway, regardless of the exponential transformation. <br />
<br />
One common method to determine feasibility between the transformed problem and the original geometric program is to find a point that is, albeit, still infeasible to the original problem, but not far from feasibility. One way to do this might be to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>g_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as s nears a value of 1, the original problem nears feasibility. For example, if the optimal s = 1.1, then the optimal x is, theoretically, only 10% infeasible for the original problem. Thus, the goal of method is to find a solution such that s = 1, and x is feasible to the original geometric program.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-06T17:38:29Z<p>DGarcia90: /* Feasibility Analysis */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''w/d''. The volume of the box, ''hwd'' is to be optimized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed into convex problems so that they can be solved efficiently. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i^L \le t_i \le t_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the vector of positive variables; <math> G_j^+, G_j^-, j=0,...,M </math> are posynomial functions in ''t''; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and <math> c_{jk} </math> are strictly positive coefficients. The index sets <math> K_j^+, K_j^- </math> collect the positively and negatively signed monomials forming the posynomials <math> G_j^+, G_j^- </math>, respectively; that is, the formulation is constructed by grouping together monomials of identical sign.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}\exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}\exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation expresses each function as the difference of two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables to ensure that their lower bounds are positive; each variable <math> t_i </math> is shifted as:<br />
<br />
<br />
<math> t_i' = t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \quad \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the transformed objective and constraints are differences of convex functions, and the branch-and-bound algorithm of Maranas and Floudas, which builds convex underestimators on this structure, can locate a global solution. (When every <math> K_j^- </math> is empty, the transformed problem is itself convex and can be solved directly.) A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive treatment of the underlying theory; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
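As a quick numerical illustration of this "convexification", the sketch below (illustrative, not from the article) checks that the Hessian of a transformed posynomial is positive semidefinite at random points, using the two-term posynomial <math> 14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> from the Background section:<br />

```python
import numpy as np

# Transformed posynomial G(z) = 14 e^{3.2 z1 + 4 z2} + e^{2 z1 + 7 z2},
# i.e. the Background example 14 x1^3.2 x2^4 + x1^2 x2^7 with x = e^z.
c = np.array([14.0, 1.0])            # positive coefficients c_k
A = np.array([[3.2, 4.0],            # exponent vectors a_k (one row per term)
              [2.0, 7.0]])

def G(z):
    return float(c @ np.exp(A @ z))

def hessian(z):
    # For sum_k c_k e^{a_k . z}: Hessian = sum_k c_k e^{a_k . z} a_k a_k^T,
    # a nonnegative combination of rank-one PSD matrices.
    w = c * np.exp(A @ z)
    return (A.T * w) @ A

# Spot-check positive semidefiniteness (hence convexity) at random points.
rng = np.random.default_rng(0)
for _ in range(200):
    z = rng.normal(size=2)
    assert np.linalg.eigvalsh(hessian(z)).min() >= -1e-8
print("Hessian PSD at all sampled points; G(0, 0) =", G(np.zeros(2)))
```

The Hessian formula makes the convexity structural rather than accidental: each term contributes a rank-one positive semidefinite matrix scaled by a positive weight.<br />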
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine whether the solution to the exponentially transformed convex problem is feasible for the original geometric program. <br />
<br />
It is immediately apparent that if the transformed problem is infeasible, then the original problem must also be infeasible, as the constraints are equivalent in both programs. Infeasibility in this case means the original constraints are too "tight" to admit a feasible solution, regardless of the exponential transformation. <br />
<br />
One common method of assessing feasibility is to find a point that, while possibly still infeasible for the original problem, is as close to feasibility as possible. One way to do this is to set up the following geometric program:<br />
<br />
:<math> \min ~s </math><br />
:<math>s.t. ~~f_i(x) \le s, i = 1,...,m</math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
:::<math>s \ge 1</math><br />
<br />
<br />
Thus, as ''s'' nears a value of 1, the original problem nears feasibility. For example, if the optimal ''s'' = 1.1, then the optimal ''x'' is, roughly speaking, only 10% infeasible for the original problem. The goal of the method is to find a solution with ''s'' = 1, so that ''x'' is feasible for the original geometric program.<br />
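The feasibility program above can itself be exponentially transformed and handed to a convex solver. The following minimal sketch uses two hypothetical posynomial constraints (chosen for illustration so that they conflict) and measures how far from feasibility they are:<br />

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical constraints (for illustration): x1*x2 <= 1 and 4/(x1*x2) <= 1.
# They conflict, so the phase-1 program  min s, f_i(x) <= s, s >= 1  has s > 1.
# In log variables v = (z1, z2, log s) the problem is convex (here even linear).
def objective(v):
    return v[2]                       # minimize log s

cons = [
    # log(x1 x2) <= log s
    {"type": "ineq", "fun": lambda v: v[2] - (v[0] + v[1])},
    # log(4 / (x1 x2)) <= log s
    {"type": "ineq", "fun": lambda v: v[2] - (np.log(4.0) - v[0] - v[1])},
    # s >= 1  ->  log s >= 0
    {"type": "ineq", "fun": lambda v: v[2]},
]

res = minimize(objective, x0=np.array([0.0, 0.0, 2.0]),
               method="SLSQP", constraints=cons)
s = float(np.exp(res.x[2]))
print(f"closest-to-feasible factor s = {s:.3f}")
```

For this assumed pair, the two constraints can each be met only to within a factor of 2, so the optimal ''s'' is 2 and the original pair is certified infeasible; an optimal ''s'' of 1 would certify feasibility.<br />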
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo formulated a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms and can be transformed with the exponential transformation method. Thus, chemical systems involving reactors, kinetics, and mass balances can be solved with exponential transformations. Consider the following mass-action rate law for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
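For instance, applying the transformation <math> C_A=e^{z_A}, C_B=e^{z_B} </math> to this rate law gives<br />
<br />
<math> rate=ke^{0.5z_A+0.44z_B}, </math><br />
<br />
which, for <math> k>0 </math>, is a convex function of <math> (z_A,z_B) </math>, being a positive multiple of the exponential of a linear function.<br />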
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal sizing, layout, and connection of CMOS operational amplifiers. Key trade-offs they optimized included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint from their analysis involves the voltage limits on a sample transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (bias current, transistor dimensions) results in a nonconvex term that can be handled with the exponential transformation. Many other problems in electrical engineering can likewise benefit from it.<br />
<br />
<br />
Additional applications arise in finance, supply chain management, and project management.<br />
<br />
==Examples==<br />
<br />
Convexifications of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS with appropriate solvers to verify that a globally optimal solution is indeed found.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
<br />
Note that the transformed positivity constraints are satisfied automatically, since <math> e^{z} > 0 </math> for all ''z''.<br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
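A quick way to sanity-check such reformulations is to verify numerically that the original and transformed expressions agree under <math> t_i=e^{z_i} </math>. The sketch below (illustrative) does this for Example 2:<br />

```python
import math
import random

# Spot-check Example 2: objective t1^{3/5} t2^2 vs e^{0.6 z1 + 2 z2}, and
# constraint t1/t2 <= t1^{0.7} vs its log form z1 - z2 <= 0.7 z1.
random.seed(1)
n_checked = 0
for _ in range(100):
    t1, t2 = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    z1, z2 = math.log(t1), math.log(t2)
    # Objective values must match up to floating-point roundoff.
    assert math.isclose(t1**0.6 * t2**2, math.exp(0.6 * z1 + 2 * z2),
                        rel_tol=1e-9)
    # Constraint satisfaction must agree away from the boundary.
    lhs, rhs = z1 - z2, 0.7 * z1
    if abs(lhs - rhs) > 1e-9:
        assert (t1 / t2 <= t1**0.7) == (lhs <= rhs)
    n_checked += 1
print("verified", n_checked, "random points")
```

The objective match is exact up to roundoff because the substitution is an identity on positive values; the constraint comparison relies on the monotonicity of the logarithm.<br />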
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''w/d''. The volume of the box, ''hwd'' is to be optimized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed to a convex problem for quick solving times. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Feasibility Analysis===<br />
<br />
Of course, it is important to determine that the solution to the exponentially transformed convex is feasible to the original geometric program. <br />
<br />
It is immediately apparent<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-06T17:22:56Z<p>DGarcia90: /* Formulation */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Example===<br />
<br />
A variety of problems can be algebraically reformulated into GP's. As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>, the floor area ''wd'' is limited to some <math>A_{floor}</math>, and there are upper and lower constraints on the ratios ''h/w'' and ''w/d''. The volume of the box, ''hwd'' is to be optimized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations allow for the rearranging of the constraints and objective into a standard GP:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
Geometric programs with posynomials should be transformed to a convex problem for quick solving times. This can be achieved with an exponential transformation, as described and formulated in this section. <br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key performance measures they sought to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. One nonconvex posynomial constraint arising in their analysis involves the voltage limits on a sample transistor in the network:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the design variables (bias current and transistor dimensions) results in a nonconvex term that can be handled with the exponential transformation. Many other problems in electrical engineering can benefit from it in the same way.<br />
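Because the left-hand side above is a single monomial, the exponential transformation turns this constraint into a linear inequality in the log-variables. The sketch below checks this equivalence numerically, using made-up stand-in values for every symbol:<br />

```python
import math

# All numbers below are illustrative, not real device parameters.
I7, L7, W7 = 1e-4, 2e-6, 1e-5
c = 5e-5                    # stands in for mu_p * C_ox / 2
V_rhs = 0.9                 # stands in for V_dd - V_out,max

# Original (nonlinear) form of the constraint
lhs = math.sqrt(I7 * L7 / (c * W7))
original_ok = lhs <= V_rhs

# After t = e^z, the monomial constraint becomes linear in z:
#   0.5 * (z_I + z_L - ln c - z_W) <= ln(V_rhs)
z_I, z_L, z_W = math.log(I7), math.log(L7), math.log(W7)
transformed_ok = 0.5 * (z_I + z_L - math.log(c) - z_W) <= math.log(V_rhs)

print(original_ok, transformed_ok)
```

The equivalence holds because the logarithm is monotone and both sides of the original inequality are positive.<br />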
<br />
<br />
Additional applications arise in finance, supply chain management, and project management.<br />
<br />
==Examples==<br />
<br />
Convexifications of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS with appropriate solvers to verify that a globally optimal solution is indeed obtained.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
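Note that <math> e^{z} </math> is automatically positive, so the transformed bound constraints hold trivially. The sketch below (plain Python, arbitrary sample points) confirms that the transformed objective agrees with the original one under <math> t_i=e^{z_i} </math>:<br />

```python
import math

def f_original(t1, t2):
    return 14 * t1 ** 3.2 * t2 ** 4 + t1 ** 2 * t2 ** 7

def f_transformed(z1, z2):
    return 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

# Arbitrary strictly positive sample points
samples = [(0.5, 1.2), (2.0, 0.3), (1.0, 1.0)]
agree = all(
    math.isclose(f_original(t1, t2),
                 f_transformed(math.log(t1), math.log(t2)),
                 rel_tol=1e-9)
    for t1, t2 in samples
)
print(agree)  # True
```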
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case the reformulated constraint is linear and, thus, convex.<br />
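The sketch below verifies Example 2 numerically at a few arbitrary positive sample points: the transformed objective matches the original, and the linear constraint accepts and rejects exactly the same points as the original nonlinear one:<br />

```python
import math

# Original posynomial objective and constraint from Example 2
def f_original(t1, t2):
    return t1 ** 0.6 * t2 ** 2

def constraint_original(t1, t2):
    return t1 / t2 <= t1 ** 0.7

# Transformed versions under t1 = e^{z1}, t2 = e^{z2}
def f_transformed(z1, z2):
    return math.exp(0.6 * z1 + 2 * z2)

def constraint_transformed(z1, z2):
    return z1 - z2 <= 0.7 * z1    # linear, hence convex

ok = True
for t1, t2 in [(0.5, 2.0), (3.0, 0.7), (1.5, 1.5)]:
    z1, z2 = math.log(t1), math.log(t2)
    ok &= math.isclose(f_original(t1, t2), f_transformed(z1, z2),
                       rel_tol=1e-9)
    ok &= constraint_original(t1, t2) == constraint_transformed(z1, z2)
print(ok)  # True
```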
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967), is a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP): the GP form constrains the shape of both the objective function and the constraints. This restrictiveness comes with a potential payoff: while it is more difficult to cast a problem as a GP, once that is done, highly efficient global solution methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution can be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this complicates the search for a globally optimal solution of a posynomial program, since only for convex problems is a local optimum ''guaranteed'' to be globally optimal.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems is important to the chemical engineer, and being able to solve them globally equips the engineer with a powerful tool for a myriad of problems.<br />
<br />
===Example===<br />
<br />
As an example, consider the optimization of a box with height ''h'', width ''w'', and depth ''d''. As constraints, the total area of the walls, ''2(hw+hd)'', is limited to some <math>A_{wall}</math>; the floor area ''wd'' is limited to some <math>A_{floor}</math>; and there are upper and lower bounds on the ratios ''h/w'' and ''d/w''. The volume of the box, ''hwd'', is to be maximized:<br />
<br />
:<math> \max ~hwd</math><br />
:<math> s.t. ~~2(hw+hd) \le A_{wall}</math><br />
:::<math> ~wd \le A_{floor} </math><br />
:::<math> ~\alpha \le h/w \le \beta </math><br />
:::<math> ~\gamma \le d/w \le \delta </math><br />
<br />
While this problem is not in the standard GP form shown above, minor algebraic manipulations rearrange the objective and constraints into a standard GP: maximizing ''hwd'' is equivalent to minimizing its reciprocal, and each constraint is divided through by its bound:<br />
<br />
:<math>\min ~h^{-1}w^{-1}d^{-1}</math><br />
:<math> s.t. ~~(2/A_{wall})hw+(2/A_{wall})hd \le 1</math><br />
:::<math> (1/A_{floor})wd \le 1</math><br />
:::<math> \alpha h^{-1}w \le 1</math><br />
:::<math> (1/\beta )hw^{-1} \le 1</math><br />
:::<math> \gamma wd^{-1} \le 1 </math><br />
:::<math> (1/\delta )w^{-1}d \le 1</math><br />
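As a numeric sanity check (our own, with arbitrarily chosen bound values, not from the article), the rearranged standard-form constraints agree with the original box constraints at random positive points:<br />

```python
import random

# Assumed illustrative values for the bounds; any positive choices would do.
A_wall, A_floor = 100.0, 50.0
alpha, beta, gamma, delta = 0.5, 2.0, 0.5, 2.0

random.seed(0)
for _ in range(1000):
    h, w, d = (random.uniform(0.1, 10.0) for _ in range(3))
    original = (2 * (h * w + h * d) <= A_wall
                and w * d <= A_floor
                and alpha <= h / w <= beta
                and gamma <= d / w <= delta)
    standard = ((2 / A_wall) * h * w + (2 / A_wall) * h * d <= 1
                and (1 / A_floor) * w * d <= 1
                and alpha * h ** -1 * w <= 1
                and (1 / beta) * h * w ** -1 <= 1
                and gamma * w * d ** -1 <= 1
                and (1 / delta) * w ** -1 * d <= 1)
    assert original == standard
print("original and standard-form constraints agree")
```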
<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960s and 1970s. Methods of the day aimed to find only locally optimal solutions, and included successive approximation of posynomials (called "condensation"), so-called "pseudo-duality" methods that use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire for globally optimal solutions spurred the development of other methods for posynomial programs in the 1990s. These included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric programming (GGP) problem as follows:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in <math> t </math>; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. The sets <math> K_j^+, K_j^- </math> index the positively and negatively signed monomials forming the posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}\exp\left( \sum_{i=1}^N \alpha_{ijk}z_i \right) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables so that their lower bounds are positive; that is, for each <math> t_i </math>, use the shifted variable<br />
<br />
<br />
<math> t_i'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
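The shift constant in this pre-scaling can be sketched as follows (the function name and default tolerance are our own choices, not Maranas and Floudas'):<br />

```python
# Illustrative sketch of the pre-scaling step: compute the constant added to
# t_i so that the shifted variable has a strictly positive lower bound.
def prescale_shift(t_lower, eps=1e-6):
    """Return max(0, -t_i^L + eps), the shift applied to t_i."""
    return max(0.0, -t_lower + eps)

print(prescale_shift(-3.0))  # lower bound -3 -> shift just above 3
print(prescale_shift(2.0))   # lower bound already positive -> no shift
```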
<br />
<br />
With this pre-scaling in place, the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper, to which the reader is directed for a more rigorous and comprehensive treatment of the theory; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo formulated a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be handled with exponential transformations: consider the following power-law rate expression for a generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This expression is nonconvex as it stands, but it is a good candidate for exponential transformation. <br />
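Under the substitution <math> C_A=e^{z_A}, C_B=e^{z_B} </math>, this rate becomes a single exponential of a linear function of the new variables, hence convex. A quick numeric check (with rate constant and concentrations chosen arbitrarily by us) confirms the two forms agree:<br />

```python
import math

# Assumed illustrative values: rate constant k and concentrations CA, CB.
k, CA, CB = 2.0, 4.0, 9.0

# Original (nonconvex) power-law form: rate = k * CA^0.5 * CB^0.44
rate_t = k * CA ** 0.5 * CB ** 0.44

# Transformed form: rate = k * exp(0.5*zA + 0.44*zB), linear exponent in z.
zA, zB = math.log(CA), math.log(CB)
rate_z = k * math.exp(0.5 * zA + 0.44 * zB)

assert abs(rate_t - rate_z) < 1e-9
print("transformed rate matches the original")
```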
<br />
<br />
Posynomial expressions also arise in electrical engineering. In 2001, Hershenson et al. formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they sought to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints on a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (the bias current <math> I_7 </math> and the transistor dimensions <math> W_7 </math> and <math> L_7 </math>) results in a nonconvex term that can be handled with the exponential transformation. In the same vein, many other problems in electrical engineering can also benefit from it.<br />
<br />
<br />
Additional applications arise in the fields of finance, supply chain management, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints is presented in this section through illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14\exp\left( 3.2z_1+4z_2 \right) + \exp\left( 2z_1+7z_2 \right) </math><br />
:<math>s.t. ~~\exp\left( z_1 \right) \ge 0 </math><br />
:::<math> \exp\left( z_2 \right) \ge 0 </math><br />
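The two objective forms can be checked numerically at any positive point (the test point below is our own choice):<br />

```python
import math

# Verify that Example 1's transformed objective matches the original
# under the substitution z_i = ln(t_i), at an arbitrary positive point.
t1, t2 = 1.5, 0.8

f_t = 14 * t1 ** 3.2 * t2 ** 4 + t1 ** 2 * t2 ** 7  # original posynomial
z1, z2 = math.log(t1), math.log(t2)
f_z = 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

assert abs(f_t - f_z) < 1e-8
print("objectives agree at the test point")
```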
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=\exp\left( 0.6z_1+2z_2 \right) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
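Because the logarithm is monotone, the original and log-transformed constraints of Example 2 hold at exactly the same positive points; a spot-check (with a test point of our own choosing) illustrates this:<br />

```python
import math

# Check that Example 2's constraint t1/t2 <= t1^0.7 is equivalent to its
# log-transformed, linear form z1 - z2 <= 0.7*z1 at a positive test point.
t1, t2 = 2.0, 3.0
z1, z2 = math.log(t1), math.log(t2)

assert (t1 / t2 <= t1 ** 0.7) == (z1 - z2 <= 0.7 * z1)
print("constraint forms are equivalent at the test point")
```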
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program (named after the geometric-arithmetic mean inequality) is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-06T15:40:43Z<p>DGarcia90: /* Background */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials, and the variables are all positive.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
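The equivalence of the original constraint and its log-transformed, linear counterpart can likewise be spot-checked numerically (an illustrative sketch; points too close to the constraint boundary are skipped to avoid floating-point rounding flipping the comparison):<br />
<br />
```python
import math
import random

random.seed(1)
for _ in range(200):
    t1 = math.exp(random.uniform(-3, 3))
    t2 = math.exp(random.uniform(-3, 3))
    z1, z2 = math.log(t1), math.log(t2)
    if abs(0.3 * z1 - z2) < 1e-9:
        continue  # skip near-boundary points
    orig_ok = t1 / t2 <= t1**0.7          # original constraint
    trans_ok = z1 - z2 <= 0.7 * z1        # log-transformed (linear) constraint
    assert orig_ok == trans_ok
```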
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials.<br />
<br />
Note that this formulation is more restrictive than a typical nonlinear program (NLP). Specifically, the GP formulation is more constrained in the form of the objective function and the constraints. This represents a potential payoff - while it is more difficult to formulate a problem as a GP due to these constraints, if a problem can be formulated as a GP, highly efficient, global solving methods can be employed. In the NLP case, one may have to settle for a local solution, if a solution ca be found at all. Of course, GP's have their own set of solving challenges.<br />
<br />
For example, it is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-06T14:02:24Z<p>DGarcia90: /* Background */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
The process of formulating a problem as a geometric program is called ''GP modeling'', and has been gaining traction as a means to solve a variety of problems. A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials.<br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-05T03:28:36Z<p>DGarcia90: /* References */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials.<br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications arise in finance, supply chain management, and project management.<br />
<br />
==Examples==<br />
<br />
Convexified forms of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS with appropriate solvers to verify that a globally optimal solution is indeed found.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
:<math>s.t. ~~e^{z_1} \ge 0 </math><br />
:::<math> e^{z_2} \ge 0 </math><br />
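The convexity of the transformed objective can be spot-checked numerically; a minimal sketch (the sample point is arbitrary):

```python
import numpy as np

def f_t(t1, t2):
    # Original nonconvex posynomial objective from Example 1.
    return 14 * t1**3.2 * t2**4 + t1**2 * t2**7

def f_z(z1, z2):
    # Transformed objective: each monomial becomes exp(affine), hence convex.
    return 14 * np.exp(3.2 * z1 + 4 * z2) + np.exp(2 * z1 + 7 * z2)

def hessian(z1, z2):
    # Analytic Hessian: sum over monomials of c * exp(a.z) * a a^T,
    # a sum of positive-semidefinite rank-one matrices.
    H = np.zeros((2, 2))
    for c, a in [(14.0, np.array([3.2, 4.0])), (1.0, np.array([2.0, 7.0]))]:
        H += c * np.exp(a @ np.array([z1, z2])) * np.outer(a, a)
    return H

# The substitution t_i = exp(z_i) leaves objective values unchanged ...
z = np.array([0.1, -0.3])
assert np.isclose(f_t(*np.exp(z)), f_z(*z))
# ... and every eigenvalue of the Hessian in z is nonnegative (convexity).
assert np.all(np.linalg.eigvalsh(hessian(*z)) >= 0)
```

Note also that the transformed constraints <math>e^{z_1} \ge 0</math> and <math>e^{z_2} \ge 0</math> hold automatically, since the exponential function is strictly positive.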
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
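The transformed constraint is obtained by substituting <math>t_i=e^{z_i}</math> and taking logarithms of both sides, which preserves the inequality because the logarithm is monotonically increasing:<br />
<br />
:<math> \frac{t_1}{t_2} \le t_1^{0.7} ~\Leftrightarrow~ e^{z_1-z_2} \le e^{0.7z_1} ~\Leftrightarrow~ z_1-z_2 \le 0.7z_1 </math><br />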
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. R.J. Duffin, E.L. Peterson, C. Zener, ''Geometric Programming'', John Wiley and Sons, 1967.<br />
<br />
2. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
3. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
4. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
5. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
6. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
7. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
8. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
A geometric program may be represented as follows:<br />
<br />
:<math> \min ~f_0(x)</math><br />
:<math>s.t. ~~ f_i(x) \le 1, i = 1,...,m </math><br />
:::<math>h_i(x)=1, i=1,...,p</math><br />
<br />
Where <math> f_0,...,f_m</math> are posynomials and <math> h_1,...,h_p</math> are monomials.<br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-05T03:23:03Z<p>DGarcia90: /* Examples */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
:<math> \min ~f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
:<math>s.t. ~~ t_1 \ge 0 </math><br />
:::<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
:<math> \min ~f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
:<math>s.t. ~~e^ \left ( z_1 \right ) \ge 0 </math><br />
:::<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
:<math> \min ~f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
:<math>s.t. ~~ \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
:<math> \min ~f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
:<math>s.t. ~~ z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-05T03:20:39Z<p>DGarcia90: /* Formulation */</p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:::<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
:<math> \min ~G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
:<math> s.t. ~~G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
:::<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
s.t. <math> t_1 > 0 </math><br />
<math> t_2 > 0 </math><br />
<br />
<br />
After the exponential transformation, this problem becomes:<br />
<math> \min f(z_1, z_2)=14e^{3.2z_1+4z_2} + e^{2z_1+7z_2} </math><br />
with <math> z_1, z_2 </math> unrestricted, since <math> e^{z_i} > 0 </math> always holds and the positivity constraints are therefore satisfied automatically.<br />
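The transformed objective of Example 1 is a sum of exponentials of linear functions and is therefore convex. A quick numerical sketch of midpoint convexity (random sample points, not a proof):

```python
import math
import random

def f(z1, z2):
    # Transformed Example 1 objective: a sum of exponentials of linear
    # functions of (z1, z2), which is convex
    return 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

random.seed(0)
ok = True
for _ in range(1000):
    a = (random.uniform(-2, 2), random.uniform(-2, 2))
    b = (random.uniform(-2, 2), random.uniform(-2, 2))
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    # Midpoint convexity: f((a+b)/2) <= (f(a)+f(b))/2, up to rounding
    ok = ok and f(*mid) <= 0.5 * f(*a) + 0.5 * f(*b) + 1e-9 * (f(*a) + f(*b))
print(ok)
```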
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^{0.6z_1+2z_2} </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
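The constraint reformulation in Example 2 amounts to taking logarithms of both sides, which preserves the inequality because the logarithm is strictly increasing. A brief numerical sketch of the underlying log identities (sample values are arbitrary):

```python
import math
import random

random.seed(1)
for _ in range(100):
    t1 = math.exp(random.uniform(-3, 3))
    t2 = math.exp(random.uniform(-3, 3))
    z1, z2 = math.log(t1), math.log(t2)
    # log(t1/t2) = z1 - z2 and log(t1^0.7) = 0.7*z1, so the posynomial
    # constraint t1/t2 <= t1^0.7 is equivalent to z1 - z2 <= 0.7*z1
    assert abs(math.log(t1 / t2) - (z1 - z2)) < 1e-9
    assert abs(math.log(t1 ** 0.7) - 0.7 * z1) < 1e-9
print("log identities hold")
```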
<br />
This reformulation can be applied to a multitude of problems, provided the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min ~G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. ~~G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
:<math> ~~~~t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
<math> s.t. G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-05T03:11:48Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
:<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
:<math> s.t. G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
<math> s.t. G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-06-05T03:08:01Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
<math> s.t. G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
<math> s.t. G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math> <br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
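A minimal sketch verifying the claim: taking logarithms of <math> \frac{t_1}{t_2} \le t_1^{0.7} </math> under <math> t_i=e^{z_i} </math> yields the linear constraint <math> z_1-z_2 \le 0.7z_1 </math>, and both feasibility tests agree at sampled points:<br />

```python
import math
import random

def feasible_original(t1, t2):
    # constraint in the original positive variables
    return t1 / t2 <= t1**0.7

def feasible_transformed(z1, z2):
    # same constraint after taking logs: linear, hence convex
    return z1 - z2 <= 0.7 * z1

for _ in range(1000):
    z1, z2 = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    if abs((z1 - z2) - 0.7 * z1) < 1e-9:
        continue  # skip points on the boundary, where rounding could flip one test
    assert feasible_original(math.exp(z1), math.exp(z2)) == feasible_transformed(z1, z2)
print("original and transformed constraints agree on sampled points")
```
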
<br />
This reformulation can be applied in a multitude of cases, provided the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development and use will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances, provides a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math><br />
<br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
<br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp. 1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using posynomial geometric programming'', International Journal of Production Research, 20 (5) pp. 657-667, 1982.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
The square root of the variables (current, inductance) results in a nonconvex term that can be transformed with the exponential transformation. In the same vein, many other problems exist in electrical engineering that can also benefit from the exponential transformation.<br />
<br />
<br />
Additional applications can be in the field of finance, supply chain, and project management.<br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math><br />
<br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
<br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S.J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C.D. Maranas, C.A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.<br />
<br />
5. R. Dembo, ''A set of geometric programming test problems and their solutions'', Mathematical Programming, 10 (1) pp. 192-213, 1976.<br />
<br />
6. M.D.M. Hershenson, S.P. Boyd, ''Optimal Design of a CMOS Op-Amp via Geometric Programming'', IEEE Transactions of Computer-Aided Design of Integrated Circuits and Systems, 20 (1) pp.1-21, 2001.<br />
<br />
7. B.M. Worrall, M.A. Hall, ''The analysis of an inventory control model using psynomial geometric programming'', International Journal of Production Research 20 (5) pp. 657-667, 1982.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T22:17:07Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between two convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Possible Applications===<br />
<br />
Exponential transformation to convexify an objective function or constraints can be used on any geometric program that meets the criteria discussed above. As noted above, geometric programs can arise in a number of different fields and applications. <br />
<br />
Ron Dembo a geometric optimization problem concerning optimal reactor design. His system has eight nonconvex terms, and can be transformed with the exponential transformation method. Thus, chemical systems such as reactors, kinetics, and mass balance equations can be solved with exponential transformations: consider the following law of mass-action for any generic reaction:<br />
<br />
<math> rate=kC_A^{0.5}C_B^{0.44} </math><br />
<br />
This equation is nonconvex as it stands, but makes a good candidate for exponential transformation. <br />
<br />
<br />
Posynomial expressions also develop in the field of electrical engineering. In 2001, Hershenson et al formulated a geometric optimization problem for the optimal construction, layout, and connections within CMOS operational amplifiers. Key trade-offs they looked to optimize included power dissipation, unity-gain bandwidth, and open-loop gain. An example of a nonconvex posynomial constraint that arose in their analysis involves the voltage constraints upon a sample transistor in the network, for example:<br />
<br />
<math> \sqrt{\frac{I_7L_7}{\mu_pC_{ox}/2W_7} } \le V_{dd}-V_{out,max} </math> <br />
<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math><br />
<br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
<br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T19:55:33Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between to convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, it is evident that the reformulated problem is a convex programming problem, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper. The reader is directed to the paper for a more rigorous and comprehensive interrogation into the theory behind the problem; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Examples==<br />
<br />
Convexification of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that there is indeed a globally optimal solution.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math><br />
<br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<br />
<math> \min f(z_1, z_2)=14e^ \left (3.2z_1+4z_2 \right ) + e^ \left ( 2z_1+7z_2 \right ) </math><br />
<br />
s.t. <math> e^ \left ( z_1 \right ) \ge 0 </math><br />
<math> e^ \left ( z_2 \right ) \ge 0 </math><br />
<br />
<br />
<br />
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)=e^ \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
<br />
There are a multitude of cases where this reformulation can be applied. As long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T19:54:43Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i > 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
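As a concrete illustration of this grouping, the sketch below (with hypothetical data, not an instance from the original paper) evaluates a signed signomial <math> G(t)=G^+(t)-G^-(t) </math> from its positive and negative monomial terms:<br />

```python
import math

def signomial(pos_terms, neg_terms, t):
    """G(t) = G+(t) - G-(t), each a posynomial sum of monomials
    c * prod_i t_i**a_i. Terms are (c, [a_1, ..., a_N]) pairs, c > 0."""
    def posy(terms):
        return sum(c * math.prod(ti ** a for ti, a in zip(t, exps))
                   for c, exps in terms)
    return posy(pos_terms) - posy(neg_terms)

# Hypothetical instance: G(t) = 2*t1*t2^2 - 3*t1^0.5, evaluated at t = (4, 1)
value = signomial([(2.0, [1.0, 2.0])], [(3.0, [0.5, 0.0])], [4.0, 1.0])
print(value)  # 2*4*1 - 3*2 = 2.0
```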
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)-G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk} \exp \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
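As a numeric sanity check (using hypothetical coefficient and exponent data), the transformed posynomial agrees with the original whenever <math> t_i = e^{z_i} </math>:<br />

```python
import math

def posy(c, alpha, t):
    # Original form: sum_k c_k * prod_i t_i**a_ik
    return sum(ck * math.prod(ti ** a for ti, a in zip(t, ak))
               for ck, ak in zip(c, alpha))

def posy_exp(c, alpha, z):
    # Transformed form: sum_k c_k * exp(sum_i a_ik * z_i)
    return sum(ck * math.exp(sum(a * zi for a, zi in zip(ak, z)))
               for ck, ak in zip(c, alpha))

c = [14.0, 1.0]
alpha = [[3.2, 4.0], [2.0, 7.0]]
t = [1.3, 0.7]
z = [math.log(ti) for ti in t]  # the exponential transformation
assert abs(posy(c, alpha, t) - posy_exp(c, alpha, z)) < 1e-9
```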
<br />
<br />
Note that this reformulation expresses each <math> G_j </math> as the difference of two convex functions, since a sum of exponentials of linear functions of <math> z </math> is convex. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, since <math> z_i^L= \ln t_i^L </math>, the lower bound <math> t_i^L </math> must be strictly positive for this reformulation to exist. Maranas and Floudas circumvent this issue by pre-scaling the original variables to ensure that their lower bounds are positive. Thus, for each <math> t_i </math>, set:<br />
<br />
<br />
<math> t_i' = t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the reformulated problem can be treated with convex programming techniques, and a global solution can be found. A full algorithm and theoretical proofs can be found in the 1997 Maranas and Floudas paper, to which the reader is directed for a more rigorous and comprehensive investigation of the underlying theory; this wiki article is simply meant to introduce the curious reader to the topic at large.<br />
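The pre-scaling rule above can be sketched as a small helper that computes, for each variable, the shift needed to push its lower bound above zero (the function name and <math>\epsilon</math> value are illustrative choices, not from the paper):<br />

```python
def prescale_shifts(t_lower, eps=1e-3):
    """Shift s_i = max(0, -t_i^L + eps) for each variable, so that the
    scaled variable t_i' = t_i + s_i has lower bound t_i^L + s_i >= eps > 0,
    making z_i^L = ln(t_i^L + s_i) well defined."""
    return [max(0.0, -tL + eps) for tL in t_lower]

# Variables with lower bounds -2, 0.5, and 0: only the bounds at or below
# zero need shifting; the already-positive bound is left alone.
print(prescale_shifts([-2.0, 0.5, 0.0]))  # [2.001, 0.0, 0.001]
```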
<br />
==Examples==<br />
<br />
Convexifications of a few objective functions and constraints are provided in this section as illustrative examples. The resulting reformulations can be tested in GAMS using appropriate solvers to verify that a globally optimal solution is indeed found.<br />
<br />
'''Example 1'''<br />
<br />
Original problem: <br />
<math> \min f(t_1,t_2)=14t_1^{3.2}t_2^{4}+t_1^2t_2^7 </math><br />
<br />
s.t. <math> t_1 \ge 0 </math><br />
<math> t_2 \ge 0 </math><br />
<br />
<br />
After exponential transformation, this problem becomes:<br />
<br />
<math> \min f(z_1, z_2)=14 \exp \left (3.2z_1+4z_2 \right ) + \exp \left ( 2z_1+7z_2 \right ) </math><br />
<br />
s.t. <math> e^{z_1} \ge 0 </math><br />
<math> e^{z_2} \ge 0 </math>, both of which hold automatically, since the exponential function is strictly positive.<br />
<br />
<br />
<br />
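Example 1 can be checked numerically: at any positive point, the original objective and its exponential transformation agree when <math> z_i = \ln t_i </math> (the sample point below is arbitrary):<br />

```python
import math

def f_orig(t1, t2):
    # Original posynomial objective of Example 1
    return 14 * t1**3.2 * t2**4 + t1**2 * t2**7

def f_trans(z1, z2):
    # Exponentially transformed objective
    return 14 * math.exp(3.2 * z1 + 4 * z2) + math.exp(2 * z1 + 7 * z2)

t1, t2 = 1.7, 0.9
assert abs(f_orig(t1, t2) - f_trans(math.log(t1), math.log(t2))) < 1e-8
```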
'''Example 2'''<br />
<br />
Original problem:<br />
<math> \min f(t_1, t_2)=t_1^{3/5}t_2^2 </math><br />
<br />
s.t. <math> \frac{t_1}{t_2} \le t_1^{0.7} </math><br />
<br />
<br />
becomes:<br />
<br />
<math> \min f(z_1, z_2)= \exp \left ( 0.6z_1+2z_2 \right ) </math><br />
<br />
s.t. <math> z_1-z_2 \le 0.7z_1 </math><br />
<br />
<br />
Note that in this case, the reformulated constraint is linear and, thus, convex.<br />
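A quick check (at arbitrary sample points) confirms that the log-transformed linear constraint of Example 2 is satisfied exactly when the original constraint is:<br />

```python
import math

def orig_feasible(t1, t2):
    # Original constraint: t1/t2 <= t1^0.7
    return t1 / t2 <= t1 ** 0.7

def trans_feasible(z1, z2):
    # Transformed (linear) constraint: z1 - z2 <= 0.7*z1
    return z1 - z2 <= 0.7 * z1

# One feasible and one infeasible point; feasibility matches in both spaces.
for t1, t2 in [(2.0, 3.0), (4.0, 0.5)]:
    z1, z2 = math.log(t1), math.log(t2)
    assert orig_feasible(t1, t2) == trans_feasible(z1, z2)
```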
<br />
This reformulation can be applied in a multitude of cases, as long as the coefficients of the original posynomial are strictly positive.<br />
<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
One can simply apply the transformation <math> t_i=e^{z_i}, i=1,...,N </math> to obtain the transformed optimization problem:<br />
<br />
<br />
<br />
<math> \min G_0(z)=G_0^+(z)-G_0^-(z) </math><br />
<br />
<br />
subject to <math> G_j(z)=G_j^+(z)=G_j^-(z) \le 0, j=1, ...,M </math><br />
<br />
<br />
<math> z_i^L \le z_i \le z_i^U, i=1,...,N </math><br />
<br />
<br />
Where:<br />
<br />
<br />
<math> G_j^+(z)=\sum_{k \in K_j^+} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
<math> G_j^-(z)=\sum_{k \in K_j^-} c_{jk}e^ \left ( \sum_{i=1}^N \alpha_{ijk}z_i \right ) , j=0,...,M </math><br />
<br />
<br />
Note that this reformulation results in the difference between to convex functions. Thus, the reformulation has, in theory, "convexified" the original nonconvex problem. However, an additional constraint of <math> z_i^L \le z_i \le z_i^U </math> is added. This is because <math> z_i^L= \ln t_i^L </math>, so it is necessary that the lower bound <math> t_i^L </math> be strictly positive for this reformulation to exist. Maranas and Floudas help circumvent this issue by essentially pre-scaling the original variables to ensure that their lower bounds will be positive. Thus, make sure that for each t_i:<br />
<br />
<br />
<math> t_i^'=t_i+\max \left ( 0, -t_i^L+\epsilon \right ) , \epsilon > 0. </math> <br />
<br />
<br />
With this pre-scaling in place, the transformed problem is well defined over the box <math> z_i^L \le z_i \le z_i^U </math>. In the purely posynomial case (where the negative parts <math> G_j^- </math> are absent), the transformed problem is a convex programming problem and a global solution can be found directly; in the general signomial case it is a difference of convex functions, and global optimality is obtained with the branch-and-bound scheme of Maranas and Floudas.<br />
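As an illustrative sanity check (not taken from Maranas and Floudas; all names are hypothetical), the following Python sketch verifies numerically that the substitution <math> t_i=e^{z_i} </math> preserves posynomial values, and that the pre-scaling shift yields strictly positive lower bounds:<br />
<br />
```python
import math

def posynomial(t, coeffs, exponents):
    # Direct evaluation: sum_k c_k * prod_i t_i^alpha_ik
    return sum(c * math.prod(ti ** a for ti, a in zip(t, alpha))
               for c, alpha in zip(coeffs, exponents))

def transformed(z, coeffs, exponents):
    # Exponential form: sum_k c_k * exp(sum_i alpha_ik * z_i)
    return sum(c * math.exp(sum(a * zi for zi, a in zip(z, alpha)))
               for c, alpha in zip(coeffs, exponents))

coeffs = [14.0, 1.0]
exponents = [[3.2, 4.0], [2.0, 7.0]]
t = [0.7, 1.9]
z = [math.log(ti) for ti in t]  # z_i = ln(t_i), i.e. t_i = e^{z_i}

# The two forms agree at corresponding points
assert math.isclose(posynomial(t, coeffs, exponents),
                    transformed(z, coeffs, exponents))

# Pre-scaling: t_i' = t_i + max(0, -t_i^L + eps) makes lower bounds positive
eps = 1e-6
t_lower = [-2.0, 0.5]
shifted_lower = [tl + max(0.0, -tl + eps) for tl in t_lower]
assert all(sl > 0 for sl in shifted_lower)
```
<br />
Only bounds that are nonpositive are shifted; a variable whose lower bound is already positive is left unchanged, so the logarithms <math> z_i^L= \ln t_i^L </math> exist for every variable.<br />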
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T19:05:10Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvex, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T19:00:23Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) </math> is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T19:00:03Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
==Formulation==<br />
<br />
A posynomial program may be written as a generalized geometric problem (GGP) as so:<br />
<br />
<br />
<math> \min G_0(t)=G_0^+(t)-G_0^-(t) </math><br />
<br />
subject to:<br />
<br />
<math> G_j(t)=G_j^+(t)-G_j^-(t) \le 0, j=1,...,M </math><br />
<br />
<math> t_i \ge 0, i=1,...,N </math><br />
<br />
Where:<br />
<br />
<math> G_j^+(t)=\sum_{k \in K_j^+} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math> <br />
<br />
<math> G_j^-(t)=\sum_{k \in K_j^-} c_{jk} \prod_{i=1}^N t_i^{\alpha_{ijk}}, j=0,...,M </math><br />
<br />
where <math> t=(t_1,...,t_N) is the positive variable vector; <math> G_j^+, G_j^-, j=0,...,M </math> are positive posynomial functions in t; <math> \alpha_{ijk} </math> are arbitrary real constant exponents; and, finally, <math> c_{jk} </math> are strictly positive coefficients. Sets <math> K_j^+, K_j^- </math> keep track of how many positively or negatively signed monomials form posynomials <math> G_j^+, G_j^- </math>, respectively. This formulation is constructed by grouping together monomials with identical signs.<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T18:44:26Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997, and, under certain circumstances discussed in the limitation section, provided a method to globally optimize certain posynomial programming problems.<br />
<br />
=== <br />
<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T18:40:44Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods.<br />
<br />
The exponential transformation method was introduced by Costas Maranas and Christodoulos Floudas in 1997. <br />
<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<br />
==References==<br />
1. T.F. Edgar, D.M. Himmelblau, L.S. Lasdon, ''Optimization of Chemical Processes'', McGraw-Hill, 2001.<br />
<br />
2. J. Nocedal, S. J. Wright, ''Numerical Optimization'', Springer, 2006.<br />
<br />
3. J.F. Tsai, M.H. Lin, ''An Efficient Global Approach for Posynomial Geometric Programming Problems'', INFORMS Journal on Computing, 23 (3) pp. 483-492, 2011.<br />
<br />
4. C. D. Maranas, C. A. Floudas, ''Global Optimization in Generalized Geometric Programming'', Computers & Chemical Engineering, 21 (4) pp. 351-369, 1997.</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T18:36:44Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
Many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods. <br />
<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions</div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T18:36:21Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a globally optimal solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
To that end, many researchers attempted to solve such problems starting in the 1960's and 1970's. Methods used in the day aimed to find only locally optimal solutions, and employed methods such as successive approximation of posynomials (called "condensation"), so-called "psuedo-duality" methods which use a weaker form of duality, and adapted nonlinear programming methods. While locally optimal solutions are certainly better than no solution at all, the desire to find a globally optimal solution was strong enough to spur the development of other methods for posynomial programs in the 1990's. Such methods included global optimization algorithms based on exponential variable transformations of the original posynomial program, convex relaxation of the original problem, and branch-and-bound-type methods. <br />
<br />
<br />
===Historical Use===<br />
Give some examples here.<br />
==Limitations==<br />
Discuss limitations here<br />
==Conclusions==<br />
Content under conclusions<br />
<math>S_{\text{new}} = S_{\text{old}} - \frac{ \left( 5-T \right) ^2} {2}</math></div>DGarcia90https://optimization.mccormick.northwestern.edu/index.php/Exponential_transformationExponential transformation2014-05-25T18:32:57Z<p>DGarcia90: </p>
<hr />
<div>Author: Daniel Garcia (ChBE 345)<br />
<br />
Stewards: Dajun Yue and Prof. Fengqi You<br />
<br />
Date presented: May 25, 2014<br />
<br />
This article concerns the exponential transformation method for globally solving posynomial (or general geometric/signomial) optimization problems with nonconvex objective functions or constraints. A discussion of the method's development, use, and limitations will be presented.<br />
<br />
==Background==<br />
Before discussing methods to solve posynomial optimization problems, a brief review of posynomials is of use. A posynomial, as defined by Duffin, Peterson, and Zener (1967) as a function of the form<br />
<br />
<math> f(x_1,x_2,...,x_n)=\sum_{k=1}^K c_kx_1^{a_{1k}}...x_n^{a_{nk}} </math><br />
<br />
where the variables <math> x_i </math> and the coefficients <math> c_k </math> are positive, real numbers, and all of the exponents <math> a_{ik} </math> are real numbers. For example,<br />
<br />
<math> f(x_1,x_2)=14x_1^{3.2}x_2^{4}+x_1^2x_2^7 </math> <br />
<br />
is a posynomial in two variables, and<br />
<br />
<math> f(x_1)=1500x_1^{3/5} </math><br />
<br />
is a posynomial with one variable. <br />
<br />
It is not difficult to imagine a posynomial that is nonconvext, such as the examples above. Unfortunately, this can cause some problems when attempting to find a globally optimal solution of a posynomial program, as it is known that only convex problems ''guarantee'' a global solution.<br />
<br />
Posynomial or geometric programming has been applied to solve problems in varied fields, such as signal circuit design, engineering design, project management, and inventory management, just to name a few. Clearly, the solution of such problems are important to the chemical engineer, and being able to globally solve such problems will equip the engineer with a powerful tool to solve a myriad of problems.<br />
<br />
===Development===<br />
To that end, many researchers attempted to solve such problems beginning in the 1960s and 1970s. The methods of the day sought only locally optimal solutions, employing techniques such as successive monomial approximation of posynomials (called "condensation"). <br />
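The condensation idea can be sketched as follows. This is a minimal illustration added here, with a made-up posynomial <math>f(x)=x^2+3/x</math>: a posynomial <math>f(x)=\sum_i u_i(x)</math> is approximated near a point <math>x_0</math> by the monomial <math>m(x)=\prod_i \left(u_i(x)/w_i\right)^{w_i}</math> with weights <math>w_i = u_i(x_0)/f(x_0)</math>; by the arithmetic-geometric mean inequality, <math>m(x) \le f(x)</math> everywhere, with equality at <math>x_0</math>.

```python
import math

# Hypothetical example posynomial for illustration: f(x) = x^2 + 3/x
terms = [lambda x: x**2, lambda x: 3.0 / x]

def f(x):
    return sum(u(x) for u in terms)

def condense(x0):
    """Condense f into a monomial underestimator that is tight at x0."""
    w = [u(x0) / f(x0) for u in terms]          # AM-GM weights, sum to 1
    def monomial(x):
        return math.prod((u(x) / wi) ** wi for u, wi in zip(terms, w))
    return monomial

m = condense(x0=1.0)
assert abs(m(1.0) - f(1.0)) < 1e-9   # tight at the expansion point
assert m(2.0) <= f(2.0)              # global underestimator by AM-GM
```

Solving a sequence of such monomial (hence convex, after a log transform) approximations and re-condensing at each new iterate yields a locally optimal solution, which is why these early methods could not certify global optimality.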
</div>