Note from Udo Sglavo on mathematical optimization: When data scientists look at the essence of analytics and wonder about their daily endeavor, it often comes down to supporting better decisions. Peter F. Drucker, the founder of modern management, stated: "Whenever you see a successful business, someone once made a courageous decision."

In his recent blog post, "What is optimization? And why it matters for your decisions," Rob Pratt, Senior Manager in Scientific Computing R&D, wrote: "If the differences in outcomes are significant and the options are numerous, especially if multiple decisions are interdependent, you have a good opportunity to apply analytics." The following post provides additional background on mathematical optimization and its availability in SAS software.

How do you define an optimization problem?

Mathematical optimization is one of the most valuable disciplines in analytics, with applications in every industry. It is used to rigorously search for the best way to use resources to maximize or minimize some metric while respecting business rules that must be satisfied.  The four main ingredients to define an optimization problem are input parameters, decision variables, objectives, and constraints:

  1. Input parameters are fixed constants over which the decision maker has no control.  These can include raw data or outputs from some other analytic methods such as statistical analysis, data mining, or forecasting.
  2. Decision variables represent the quantities that are under the decision maker's control.  Examples include production levels, route selections, resource allocations, schedule elements, and yes/no choices.
  3. Objectives are the metrics that you want to optimize, and an objective function depends on the values of the decision variables.  Common objectives are to maximize profit, minimize costs, minimize distance traveled, minimize unused materials, and minimize model fitting errors.
  4. Constraints enforce relationships that the decision variables must satisfy.  Some examples of constraints are:
    • Adhere to factory capacities.
    • Satisfy customer demands.
    • Use only the available personnel.
    • Choose from among available routes.
    • Spend within budgets.

The goal of optimization is to find an optimal solution, that is, an assignment of values to decision variables that satisfies all constraints and attains the best possible value for the objective function.  An optimization algorithm searches for an optimal solution. An optimization solver is a collection of algorithms for a specific problem type (characterized by the types of functions that are used to define the objective and constraints and the values the decision variables can take on).
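As an illustration of these four ingredients, here is a generic sketch in Python with SciPy (not SAS code, and with made-up production data): a small linear program that maximizes profit subject to two hypothetical capacity constraints.

```python
from scipy.optimize import linprog

# Input parameters (fixed data the decision maker cannot control):
profit = [3.0, 5.0]        # profit per unit of products A and B (made up)
labor = [1.0, 2.0]         # labor hours per unit
material = [3.0, 2.0]      # material units per unit
labor_cap, material_cap = 14.0, 18.0

# Decision variables: production levels x[0], x[1] >= 0 (declared via bounds).
# Objective: maximize profit; linprog minimizes, so negate the coefficients.
# Constraints: resource usage must not exceed capacities.
res = linprog(
    c=[-p for p in profit],
    A_ub=[labor, material],
    b_ub=[labor_cap, material_cap],
    bounds=[(0, None), (0, None)],
    method="highs",
)

print(res.x, -res.fun)  # optimal plan x = [2, 6] with profit 36
```

The solver returns the assignment of values to the decision variables (here, produce 2 units of A and 6 of B) that satisfies all constraints and attains the best objective value.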

Optimization plays a key role in providing optimal or near-optimal solutions for many real-world business problems. Notable applications include airline crew scheduling, inventory optimization, revenue management, and price optimization. Recently, optimization has also become a core component of other modern analytics areas: hyperparameter optimization, meta-learning, and optimization algorithms for deep learning are among the most active research areas.  The integration of optimization with machine learning, statistics, forecasting, and econometrics has grown ever tighter as new theories and algorithms advance.

How is mathematical optimization used in SAS software?

SAS® Optimization in SAS® Viya® provides access to a wide array of optimization solvers, with each solver specialized to suit a problem type.  For example, the linear programming (LP) solver handles problems where all decision variables are continuous and both the objective and constraints are linear functions of these variables.  If some of the variables must take integer values, the mixed integer linear programming (MILP) solver is used instead.  Other supported solvers include quadratic programming (QP), nonlinear programming (NLP), constraint programming, black-box optimization, and network optimization.
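To see why integer requirements call for a different solver, the hedged sketch below (SciPy again standing in for the SAS LP and MILP solvers, on a made-up problem) solves the same small model twice: once with continuous variables and once with integrality enforced. The integer restriction changes both the optimal plan and the attainable objective.

```python
import numpy as np
from scipy.optimize import LinearConstraint, linprog, milp

# Maximize x + y subject to 2x + 3y <= 12 and 3x + 2y <= 12, x, y >= 0.
c = np.array([-1.0, -1.0])  # negate: these solvers minimize
cons = LinearConstraint([[2, 3], [3, 2]], ub=[12, 12])

# LP: continuous variables give the fractional optimum x = y = 2.4 (objective 4.8).
lp = linprog(c, A_ub=[[2, 3], [3, 2]], b_ub=[12, 12],
             bounds=[(0, None)] * 2, method="highs")

# MILP: requiring both variables to be integers drops the optimum to 4;
# milp's default bounds already constrain the variables to be non-negative.
mip = milp(c, constraints=cons, integrality=np.ones(2))

print(-lp.fun, -mip.fun)  # 4.8 vs. 4.0
```

Simply rounding the LP answer would not reveal this gap, which is why MILP problems get their own solver and algorithms.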

The OPTMODEL procedure and the corresponding runOptmodel action provide an algebraic modeling language that enables you to build and solve optimization problems, with access to all the solvers listed above.  A common use case involves the following steps:

  1. Declare input parameters.
  2. Read the input data.
  3. Declare decision variables, objectives, and constraints.
  4. Call an optimization solver.
  5. Report the optimal solution.

For more complicated problems, you might need to write a customized solution algorithm that calls a solver in a loop or calls multiple solvers, with the output of one solver providing input for another solver.
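As a generic analogy for such a customized algorithm (a Python/SciPy sketch with made-up data, not OPTMODEL code), the loop below calls a solver repeatedly, with each solve's optimal value deciding the input of the next:

```python
from scipy.optimize import linprog

# Hypothetical scenario: repeatedly tighten a shared resource budget and
# re-solve until the optimal profit falls to an acceptable level.
budget = 18.0
history = []
while True:
    res = linprog(c=[-3, -5], A_ub=[[1, 2], [3, 2]], b_ub=[14, budget],
                  bounds=[(0, None)] * 2, method="highs")
    history.append((budget, -res.fun))
    if -res.fun <= 30 or budget <= 0:
        break
    budget -= 2  # the output of one solve feeds the input of the next

print(history)  # profit shrinks from 36 at budget 18 to 30 at budget 12
```

Real customized algorithms (column generation, cut loops, warm-started re-solves) follow this same pattern of solver calls whose outputs drive the next iteration.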

OPTMODEL includes a programming language, with access to almost all DATA step functions, that enables you to write such customized algorithms.  The examples mentioned in the earlier post all use this functionality.

SAS Optimization includes several features that employ threaded and distributed computation.  OPTMODEL provides a COFOR loop to solve independent problems concurrently, and the runOptmodel action supports BY-group processing for the common use case of building and solving the same problem multiple times with different input data.
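The COFOR idea of solving independent problems concurrently can be sketched in plain Python (an analogy using a thread pool and SciPy, not OPTMODEL itself), with one small solve per hypothetical BY-group-like scenario:

```python
from concurrent.futures import ThreadPoolExecutor

from scipy.optimize import linprog

def solve_scenario(capacity):
    """Solve one independent LP: maximize 3x + 5y under a scenario-specific capacity."""
    res = linprog(c=[-3, -5], A_ub=[[1, 2], [3, 2]], b_ub=[capacity, 18],
                  bounds=[(0, None)] * 2, method="highs")
    return capacity, -res.fun

# Made-up scenarios; because the solves share no data, they can run concurrently.
capacities = [10, 12, 14, 16]
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(solve_scenario, capacities))

print(results)
```

The essential property is the same as in COFOR and BY-group processing: the subproblems are independent, so they can be dispatched to separate threads or workers without coordination.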

For the NLP solver, the multistart feature increases the likelihood of finding a globally optimal solution for highly nonconvex problems that have many local optima.  For the MILP solver, the default branch-and-cut algorithm threads the dynamic tree search; in distributed mode, it processes tree nodes on different workers and communicates new global lower and upper bounds back to the controller.
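The multistart strategy itself is easy to sketch generically (Python/SciPy again, not the NLP solver): run a local solver from several starting points on a nonconvex function and keep the best local optimum found.

```python
from scipy.optimize import minimize

def f(x):
    # A nonconvex function with two local minima; the global one is near x = -1.3.
    return x[0]**4 - 3 * x[0]**2 + x[0]

# One local solve per starting point; a single start at x = 2 would get
# trapped in the inferior local minimum near x = 1.1.
starts = [-2.0, 0.0, 2.0]
local_opts = [minimize(f, [x0]) for x0 in starts]
best = min(local_opts, key=lambda r: r.fun)

print(best.x, best.fun)  # the best local optimum across all starts
```

The NLP solver's multistart feature automates this idea, with smarter sampling and parallel local solves, but the underlying principle is the same.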

The LP and MILP solvers both include a decomposition algorithm that exploits block-angular structure; for MILP problems that consist of loosely coupled subproblems, this algorithm often yields dramatic performance improvements over branch-and-cut.  The network solver contains both threaded and distributed implementations for selected algorithms, and it also supports generic BY-group processing.

Besides the direct use of the optimization solvers through SAS Optimization procedures and actions, several SAS solutions, including SAS® Pack Optimization, SAS® Promotion Optimization, and SAS® Marketing Optimization, use our solvers under the hood.  The optimization engine in SAS Marketing Optimization provides a specialized algorithm that uses the LP and MILP solvers to solve problems with millions of decision variables and constraints.  Customer success stories include Akbank and Scotiabank.

Many procedures in other products also use various optimization solvers.  For example, the PSMATCH procedure uses the network solver to solve the underlying linear assignment and minimum-cost network flow problems. The CAUSALGRAPH procedure also uses the network solver for cycle enumeration, connected components, and vertex separation.  The HPFRECONCILE procedure uses the QP solver to solve a least-squares problem, and many procedures call the NLP solver. SAS Visual Data Mining and Machine Learning uses the black-box solver to tune hyperparameters for a number of machine learning models, including decision tree, gradient boosting, support vector machine, neural network, and logistic regression.
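The black-box tuning pattern can be sketched with a generic derivative-free method (SciPy's differential evolution standing in for the SAS black-box solver, over a made-up error surface): treat validation error as a function of the hyperparameters and search it using only function evaluations.

```python
from scipy.optimize import differential_evolution

def validation_error(params):
    """A made-up, smooth stand-in for a model's validation error as a
    function of two hyperparameters (say, learning rate and regularization)."""
    lr, reg = params
    return (lr - 0.1)**2 + (reg - 0.01)**2 + 0.5

# Derivative-free search over the hyperparameter bounds: no gradients or
# algebraic form required, only function evaluations -- the defining trait
# of black-box optimization.
result = differential_evolution(validation_error,
                                bounds=[(0.001, 1.0), (0.0, 0.1)],
                                seed=1)
print(result.x, result.fun)
```

In practice each evaluation would train and validate a model, which is why efficient black-box solvers matter: every function call is expensive.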

For Python users, the sasoptpy package provides a modeling interface to the LP, MILP, QP, NLP, and black-box solvers in SAS Optimization.  You can use native Python structures such as dictionaries, tuples, and lists to define an optimization problem and then call one of the solvers.


I hope this blog post has helped you learn about some applications of mathematical optimization and how it is used in SAS software.  For more information, here are some useful links:

LEARN MORE | SAS Optimization

This is the ninth post in our series about statistics and analytics bringing peace of mind during the pandemic.


About Author

Senior Manager

Rob Pratt has worked at SAS since 2000 and is a Senior Manager in the Scientific Computing department in the Analytics R&D division. He manages a team of developers responsible for the optimization modeling language and solvers for linear, mixed integer linear, quadratic, and conic optimization. He earned a B.S. in Mathematics (with a second major in English) from the University of Dayton and both an M.S. in Mathematics and a Ph.D. in Operations Research from The University of North Carolina at Chapel Hill.
