With SAS Viya 3.3, a new data transfer mechanism, Multi Node Data Transfer, has been introduced to transfer data between the data source and SAS Cloud Analytic Services (CAS). Learn more about this feature.
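The multi-node transfer is driven by data source options on the caslib, so each CAS worker node reads its own slice of the source table in parallel. A minimal sketch, assuming a Teradata source; the session name, host, credentials, and table names are hypothetical:

```sas
cas mysess;                                  /* start a CAS session */

/* NUMREADNODES=/NUMWRITENODES= enable multi-node transfer: here up to
   10 CAS worker nodes read from (or write back to) Teradata in parallel. */
caslib td datasource=(srctype="teradata",
                      server="td.example.com",
                      username="casuser",
                      password="Secret123",
                      numreadnodes=10,
                      numwritenodes=10);

proc casutil;
   load casdata="sales" incaslib="td" casout="sales";  /* parallel load into CAS */
quit;
```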
When using conventional methods to access and analyze data sets from Teradata tables, SAS brings all the rows from a Teradata table to the SAS Workspace Server. As the number of rows in the table grows over time, so does the network latency of fetching the data from the database.
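One common way to avoid pulling every row across the network is in-database processing, where eligible SAS procedures push their work down to Teradata. A minimal sketch, assuming SAS/ACCESS Interface to Teradata; the server, credentials, and table names are hypothetical:

```sas
/* Hypothetical Teradata connection */
libname td teradata server="td.example.com" user=sasdemo password="Secret123"
        database=sales;

/* With SQLGENERATION=DBMS, in-database-eligible procedures such as MEANS
   generate SQL that Teradata executes, so only summary rows cross the network. */
options sqlgeneration=dbms;

proc means data=td.transactions sum mean;
   class region;
   var amount;
run;
```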
As a SAS Viya user, you may be wondering whether it is possible to execute data append and data update operations concurrently against a global Cloud Analytic Services (CAS) table from two or more CAS sessions. (Learn more about CAS.) How would this impact the report view while an append or update is in progress?
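For context, an append from a single session looks like the sketch below; the caslib and table names are hypothetical, and the APPEND=YES data set option on the CAS engine performs the row append:

```sas
cas mysess;                                  /* start a CAS session */
libname mycas cas caslib="casuser";          /* hypothetical caslib */

/* APPEND=YES adds the incoming rows to the existing CAS table
   instead of replacing it. */
data mycas.sales_global(append=yes);
   set mycas.sales_staging;
run;
```

The interesting case is when two or more sessions run a step like this against the same global table at the same time.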
With the release of SAS® 9.4 M3, you can now access SAS Scalable Performance Data Engine (SPD Engine) data using Hive. SAS provides a custom Hive SerDe for reading SAS SPD Engine data stored on HDFS, enabling users to access SPD Engine tables from other applications.
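Writing the SPD Engine data to HDFS is the SAS side of the picture; once the table exists there, it can be registered in Hive with the SAS-supplied SerDe and queried from Hive-aware applications. A minimal sketch, with a hypothetical HDFS path:

```sas
/* SPD Engine libref on HDFS; HDFSHOST=DEFAULT picks up the cluster
   connection from the Hadoop configuration SAS is set up with. */
libname spdat spde '/user/sasdemo/spde' hdfshost=default;

data spdat.cars;          /* write a table in SPD Engine format on HDFS */
   set sashelp.cars;
run;
```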
The SAS Data Loader directive ‘Copy Data to Hadoop’ enables you to copy data from a DBMS to Hadoop Hive tables. SAS Data Loader for Hadoop can be configured to copy data from any external database that offers JDBC connectivity. Under the covers, SAS Data Loader uses the Apache Sqoop™ and Oozie components installed with the Hadoop cluster to move the data.
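Conceptually, the directive runs a Sqoop import job on the cluster, similar to the hand-written command below; this is an illustration, not the literal command the directive generates, and the JDBC URL, credentials, and table names are hypothetical:

```
sqoop import \
  --connect jdbc:oracle:thin:@dbsrv.example.com:1521/orcl \
  --username dbusr --password dbpwd \
  --table SALES \
  --hive-import --hive-table sales \
  -m 4
```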
SAS 9.4 M3 introduces a new procedure named PROC SQOOP. This procedure enables users to access the Apache Sqoop utility from a SAS session to transfer data between a database and HDFS, letting you submit Sqoop commands from within your SAS application to your Hadoop cluster.
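A minimal sketch of the syntax, with hypothetical hosts, credentials, and paths; the COMMAND= string is an ordinary Sqoop import command, and the Oozie and HDFS options tell SAS where to stage and run the generated workflow:

```sas
proc sqoop
   dbuser='dbusr' dbpwd='dbpwd'                 /* database credentials */
   hadoopuser='hadoopusr' hadooppwd='hadooppwd' /* Hadoop credentials   */
   oozieurl='http://oozie.example.com:11000/oozie'
   namenode='hdfs://nn.example.com:8020'
   jobtracker='jt.example.com:8032'
   wfhdfspath='hdfs://nn.example.com:8020/user/hadoopusr/myworkflow'
   deletewf                                     /* clean up the workflow afterward */
   command='import --connect jdbc:oracle:thin:@dbsrv.example.com:1521/orcl
            --table SALES --target-dir /user/hadoopusr/sales -m 1';
run;
```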