The DO Loop
Statistical programming in SAS with an emphasis on SAS/IML programs

![Getting started with the iml action in SAS Viya: Architecture of an MPP session in SAS Viya. The client calls an action, which can use multiple nodes and threads.](https://blogs.sas.com/content/iml/files/2020/06/imlcasMPP1-702x336.png)
A previous article provides an introduction and overview of the iml action, which is available in SAS Viya 3.5. The article compares the iml action to PROC IML and states that most PROC IML programs can be modified to run in the iml action. This article takes a closer look …
![An introduction to the iml action in SAS Viya: Architecture of an MPP session in SAS Viya. The client calls an action, which can use multiple nodes and threads.](https://blogs.sas.com/content/iml/files/2020/06/imlcasMPP1-702x336.png)
This article introduces the iml action, which is available in SAS Viya 3.5. The iml action supports most of the same syntax and functionality as the SAS/IML matrix language, which is implemented in PROC IML. With minimal changes, most programs that run in PROC IML also run in the iml action. …
![Interactions with spline effects in regression models](https://blogs.sas.com/content/iml/files/2020/06/splineinteract1-640x336.png)
A SAS customer asked how to specify interaction effects between a classification variable and a spline effect in a SAS regression procedure. There are at least two ways to do this. If the SAS procedure supports the EFFECT statement, you can build the interaction term in the MODEL statement. …
![How to estimate the difference between percentiles](https://blogs.sas.com/content/iml/files/2020/06/DiffPctl3-480x336.png)
I recently read an article that describes ways to compute confidence intervals for the difference in a percentile between two groups. In Eaton, Moore, and MacKenzie (2019), the authors describe a problem in hydrology. The data are the sizes of pebbles (grains) in rivers at two different sites. The authors …
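One common way to get such an interval (not necessarily the method used by Eaton, Moore, and MacKenzie) is the percentile bootstrap: resample each group, recompute the percentile difference many times, and take empirical quantiles of the differences. The blog's examples are in SAS/IML, so the following Python sketch, with made-up pebble-size data, is only an illustration; the function names and data are hypothetical:

```python
import random

def percentile(data, p):
    """p-th percentile (0-100) with linear interpolation."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def bootstrap_diff_pctl_ci(x, y, p=50, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for percentile_p(x) - percentile_p(y)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        xb = [rng.choice(x) for _ in x]   # resample group 1 with replacement
        yb = [rng.choice(y) for _ in y]   # resample group 2 with replacement
        diffs.append(percentile(xb, p) - percentile(yb, p))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical pebble sizes (mm) at two sites
site1 = [10, 12, 15, 18, 20, 22, 25, 30]
site2 = [8, 9, 11, 13, 14, 16, 17, 19]
lo, hi = bootstrap_diff_pctl_ci(site1, site2, p=50)
```

The interval `(lo, hi)` brackets the observed difference in medians; other resampling schemes or analytic intervals may be preferable for small samples.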
![The Kullback–Leibler divergence between continuous probability distributions](https://blogs.sas.com/content/iml/files/2020/06/KLDivCont3-640x336.png)
In a previous article, I discussed the definition of the Kullback-Leibler (K-L) divergence between two discrete probability distributions. For completeness, this article shows how to compute the Kullback-Leibler divergence between two continuous distributions. When f and g are discrete distributions, the K-L divergence is the sum of f(x)*log(f(x)/g(x)) over all x in the sample space. …
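For continuous densities, the sum becomes the integral of f(x)*log(f(x)/g(x)). The article's computations are in SAS/IML; as an illustration only, here is a Python sketch that evaluates the discrete sum directly and approximates the continuous integral by a midpoint rule. The exponential example has the closed form KL(Exp(1) || Exp(1/2)) = log 2 - 1/2:

```python
import math

def kl_discrete(f, g):
    """Sum of f(x)*log(f(x)/g(x)) over the support (terms with f(x)=0 contribute 0)."""
    return sum(fi * math.log(fi / gi) for fi, gi in zip(f, g) if fi > 0)

def kl_continuous(f, g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f(x)*log(f(x)/g(x)) on [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        fx = f(x)
        if fx > 0:
            total += fx * math.log(fx / g(x)) * h
    return total

# f ~ Exp(rate=1), g ~ Exp(rate=1/2); truncate the integral at 40 (tail is ~e^{-40})
f = lambda x: math.exp(-x)
g = lambda x: 0.5 * math.exp(-0.5 * x)
kl = kl_continuous(f, g, 0.0, 40.0)   # ≈ log(2) - 0.5 ≈ 0.1931
```

A quadrature routine (e.g., the QUAD subroutine in SAS/IML) would be the more accurate choice in practice; the midpoint rule just keeps the sketch self-contained.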
![Minimizing the Kullback–Leibler divergence](https://blogs.sas.com/content/iml/files/2020/05/KLDiv7-640x336.png)
The Kullback–Leibler divergence is a measure of dissimilarity between two probability distributions. An application in machine learning is to measure how distributions in a parametric family differ from a data distribution. This article shows that if you minimize the Kullback–Leibler divergence over a set of parameters, you can find a …
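One way to see the idea (a simplified illustration, not the article's SAS/IML program): minimizing KL(p || q_θ) over θ is equivalent to maximizing Σ p(k) log q_θ(k), so for a binomial family Binom(n, θ) the minimizer is the moment/ML estimate θ = mean/n. A hypothetical grid-search sketch in Python:

```python
import math

def binom_pmf(n, theta, k):
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

def kl(p, q):
    """Discrete K-L divergence sum p(k)*log(p(k)/q(k))."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def best_theta(p, n, grid=2001):
    """Grid search for the theta in (0,1) that minimizes KL(p || Binom(n, theta))."""
    best_th, best_kl = None, float("inf")
    for i in range(1, grid):
        th = i / grid
        q = [binom_pmf(n, th, k) for k in range(n + 1)]
        d = kl(p, q)
        if d < best_kl:
            best_th, best_kl = th, d
    return best_th, best_kl

# Hypothetical empirical distribution of counts k = 0..4 (n = 4 trials)
p = [0.02, 0.10, 0.30, 0.40, 0.18]          # empirical mean = 2.62, so theta* = 0.655
theta_hat, kl_min = best_theta(p, n=4)
```

In real work you would use a smooth optimizer (e.g., Newton's method or, in SAS/IML, NLPNRA) rather than a grid, but the grid makes the "minimize the divergence over the parameters" idea concrete.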