Joyce Norris-Montanari says IT and business need to work together when giving business users self-service data preparation tools.
David Loshin describes some steps you can take to ensure that self-service data preparation improves collaboration.
Data processing, data integration, data quality, data security – these topics all sound like compulsory exercises for IT, nowhere near as “hot” as analytics, data science, the Internet of Things or artificial intelligence. However, companies worldwide have long recognized that data management is at least as important as the
Data preparation, data integration, data quality, data security – all of this sounds like a compulsory exercise for IT and is nowhere near as sexy as hype topics like data science, the Internet of Things or artificial intelligence. But data management is at least as important in a business context – whether for optimizing
Get faster value out of your data by empowering business users to work with data on their own.
Phil Simon weighs in on the value of getting your own hands dirty using self-service data prep.
Get on with your day faster by taking a self-service approach to data preparation.
Helmut Plinke explains why modernizing your data management is essential to supporting your analytics platform.
When developing SAS applications, you can feed database tables into your application by using the libname access engine, either by directly referencing a database table or via SAS or database views that themselves refer to one or more of the database tables. However, such on-the-fly data access may not be
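A minimal sketch of the two access patterns described above (the library name, connection options, and table names here are hypothetical, not taken from the original post):

```sas
/* On-the-fly access: the libname engine points at the database,
   and each query is pushed to the database at run time.
   Connection details below are illustrative placeholders. */
libname salesdb oracle user=myuser password=mypass path=orasrv schema=sales;

proc sql;
  select region, sum(amount) as total_sales
  from salesdb.orders      /* reads the database table directly */
  group by region;
quit;

/* Alternative: materialize a local snapshot once, then let later
   steps work against the SAS copy instead of the live database. */
data work.orders_snapshot;
  set salesdb.orders;
run;
```

The trade-off hinted at in the post is that on-the-fly access always reflects the current database state but can be slow or brittle under load, while a materialized snapshot is fast and stable but can go stale.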
What role data quality and data governance play in data management for analytics is something I recently discussed here with my colleague Gerhard Svolba. But what exactly makes up modern data management, and what role do new technologies like Hadoop and company play in it? And what will the future collaboration
Even if Gartner has declared the hype over: no company can get around big data and the analysis of the corresponding (often unstructured) data volumes. But what challenges do big data and the developments that come with it pose for data management? How can data scientists, IT and business departments work together today? And where do
The rise of self-service analytics, and the idea of the ‘citizen data scientist’, has also brought a number of issues to the fore in organizations. In particular, two common areas of discussion are the twin pillars of data quality and data preparation. There is no doubt that good quality, well-prepared
“IT doesn’t deliver; the business doesn’t know what data it will want today or tomorrow”… Both are right – a dilemma that ends in self-help. The hunger for information persists, and what isn’t delivered gets procured by other means. Among them: the SAP screen, Excel, database(s),
One aspect of high-quality information is consistency. We often think about consistency in terms of consistent values. A large portion of the effort expended on “data quality dimensions” essentially focuses on data value consistency. For example, when we describe accuracy, what we often mean is consistency with a defined source
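The idea of accuracy as consistency with a defined source can be sketched with a simple check against a reference table (the table and column names below are hypothetical, chosen only for illustration):

```sas
/* Flag records whose state code is not consistent with the
   defined source of valid state codes. Table and column names
   are illustrative assumptions. */
proc sql;
  create table work.inconsistent_customers as
  select c.*
  from work.customers as c
  where c.state_code not in
        (select state_code from work.valid_state_codes);
quit;
```

Here “accuracy” is operationalized exactly as the post suggests: a value is treated as accurate only insofar as it agrees with the designated source of truth.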
It's that time of year again, when almost 50 million Americans travel home for Thanksgiving. We'll share a smorgasbord of turkey, stuffing and vegetables and discuss fun political topics, all to celebrate the ironic friendship between colonists and Native Americans. Being part Italian, my family augments the 20-pound turkey with pasta –
.@philsimon says don't treat data self-service as a binary.
Most enterprises employ multiple analytical models in their business intelligence applications and decision-making processes. These analytical models include descriptive analytics that help the organization understand what has happened and what is happening now, predictive analytics that determine the probability of what will happen next, and prescriptive analytics that focus on
.@philsimon on the need to adopt agile methodologies for data prep and analytics.
In Part 1 of this two-part series, I defined data preparation and data wrangling, then raised some questions about requirements gathering in a governed environment (i.e., ODS and/or data warehouse). Now – all of us very-managed people are looking at the horizon, and we see the data lake. How do
Lately I've been binge-watching a lot of police procedural television shows. The standard format for almost every episode is the same. It starts with the commission or discovery of a crime, followed by forensic investigation of the crime scene, analysis of the collected evidence, and interviews or interrogations with potential suspects. It ends
.@philsimon chimes in on new data-gathering methods and what they mean for analytics.
I'm a very fortunate woman. I have the privilege of working with some of the brightest people in the industry. But when it comes to data, everyone takes sides. Do you “govern” the use of all data, or do you let the analysts do what they want with the data to
Critical business applications depend on the enterprise creating and maintaining high-quality data. So, whenever new data is received – especially from a new source – it’s great when that source can provide data without defects or other data quality issues. The recent rise in self-service data preparation options has definitely improved the quality of
Hadoop has driven an enormous amount of data analytics activity lately. And this poses a problem for many practitioners coming from the traditional relational database management system (RDBMS) world. Hadoop is well known for having lots of variety in the structure of data it stores and processes. But it's fair to