Our CEO, Greg Nelson, and our Director of Technical Training, Michelle Buchecker, are presenting a combined EIGHT papers and one pre-conference training session, and leading one Table Talk, at SAS Global Forum in Orlando, April 2-5, 2017. Attending these presentations will enhance your SAS programming, visualization, and healthcare analytics skills.
Of course, Greg and Michelle are happy to talk with you at any time during the conference. Greg is an expert on visualization as well as healthcare analytics. In addition to SAS programming, Michelle is an avid Walt Disney World fan, so you can ask her for advice on that too (hint: the Disney Dining Plan is usually not worth the cost).
Message us on the SAS Global Forum app; we’d love to chat with you!
See you at #SASGF 2017!
Here is the schedule. Each entry lists the date, time, and location; the session title; and the abstract.
Sun, April 2nd, 1:00 PM – 4:30 PM, Dolphin Level 3 – Asia 2
Data Visualization Best Practices: Practical Storytelling Using SAS® (extra-fee pre-conference training)

Data means little without our ability to visually convey it. Whether building a business case to open a new office, acquiring customers, presenting research findings, forecasting, or comparing the relative effectiveness of a program, we are crafting a story that is defined by the graphics we use to tell it. Using practical, real-world examples, students will learn how to think critically about visualizations. Through a series of case studies, students will also get an opportunity to practice the design of data displays in small-group projects. We will review common techniques for creating graphics across the SAS tool family, including SAS/GRAPH®, JMP®, and SAS Visual Analytics. Users do not need expertise in the software packages, but code samples will be used so that students can gain an appreciation of what it takes to create them.
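To give a flavor of the kind of code sample the course discusses (an illustrative ODS Graphics sketch, not taken from the course materials), a simple statistical bar chart takes only a few lines in SAS:

```sas
/* Illustrative only: mean height by age, using the SASHELP.CLASS
   sample data set that ships with SAS                             */
proc sgplot data=sashelp.class;
   vbar age / response=height stat=mean;
   yaxis label="Mean height (in)";
run;
```
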
Mon, April 3rd, 10:30 AM – 11:30 AM, Dolphin Level 5 – Southern Hemisphere IV
A Practical Guide to Healthcare Data: Tips, Traps, and Techniques

Healthcare is weird. Healthcare data is even more so. The digitization of data describing the patient experience is a modern phenomenon, and most healthcare organizations are still in the infancy of that transition. While the business of healthcare is already a century old, most organizations have focused their efforts on the financial aspects of healthcare rather than on stakeholder experience or clinical outcomes. Think of the workflow you might have experienced: scheduling an appointment, doctor visits, lab tests, or prescriptions for interventions such as surgery or physical therapy. The modern healthcare system creates a digital footprint of administrative, process, quality, epidemiological, financial, clinical, and outcome measures, which range in size, cleanliness, and usefulness. Whether you are new to healthcare data or are looking to advance your knowledge of healthcare data and the techniques used to analyze it, this paper serves as a practical guide to understanding and using healthcare data. We explore common methods for structuring and accessing data, discuss common challenges such as aggregating data into episodes of care, describe reverse-engineering real-world events, and talk about dealing with the myriad unstructured data found in nursing notes. Finally, we discuss the ethical uses of healthcare data and the limits of informed consent, which are critically important for those of us in analytics.
Mon, April 3rd, 2:00 PM – 3:00 PM, Dolphin Level 3 – Oceanic 2
Healthcare Analytics: Examining the Analytic Infrastructure Components (panel)

Healthcare analytics is a broad term used in numerous healthcare-related industries to describe the use of data and statistical models to quantify and qualify outcomes. The panel's talking points encompass several aspects of an analytical infrastructure: How does an analytic infrastructure develop in an organization through components such as design, development, and maintenance? What capabilities does an organization need in order to compete, and how do we measure that data, including its strengths and weaknesses? How can analytics support the move from volume to value in the US healthcare system in the context of the Affordable Care Act? The panel delves into these issues as they relate to the healthcare industry.
Mon, April 3rd, 3:00 PM – 4:00 PM, Dolphin Level 3 – Europe 4
Training in the Twenty-First Century (Table Talk)

The philosophy of training has shifted over time, much as the way we work has. It is rare for an individual to be able to concentrate exclusively on a single task anymore; our brains have been rewired to expect constant stimulation. Likewise, training has evolved, and it needs to continue to evolve to address how people actually learn, which is certainly not by sitting and listening to someone talk for an hour! In this Table Talk, let's discuss training strategies that keep learners engaged.
Mon, April 3rd, 4:00 PM – 5:00 PM, Dolphin Level 3 – Asia 3
Developing Your Data Strategy

The ever-growing volume of data challenges us to keep pace in using it to its full advantage. Unfortunately, our response to new data sources, data types, and applications is often reactionary, and there is a misperception that organizations have precious little time to consider a purposeful strategy without disrupting business continuity. "Strategy" is a term that is often misused and ill-defined; it is nothing more than a set of integrated choices that position an initiative for future success. This presentation covers the key elements that define a data strategy: What data should we keep or toss? How should we structure data (warehouse versus data lake versus real-time event streaming)? Where do we store data (cloud, virtualization, federation, Hadoop)? What approach do we use to integrate and cleanse data (ETL versus cognitive/automated profiling)? How do we protect and share data? These topics ensure that the organization gets the most value from its data, and they explore how we prioritize and adapt our strategy to meet unanticipated future needs. As with any strategy, we need a roadmap for execution, so we talk specifically about the tools, technologies, methods, and processes that are useful in designing a data strategy that is both relevant and actionable for your organization.
Tue, April 4th, 10:00 AM – 10:30 AM, Dolphin Level 3 – Oceanic 4
Setting Relative Server Paths in SAS® Enterprise Guide®

Imagine, if you will, a program, a program that loves its data, a program that loves its data to be in the same directory as the program itself. Together, in the same directory. True love. The program loves its data so much, it just refers to it by filename. No need to say what directory the data is in; it is the same directory. Now imagine that program being thrust into the world of the server. The server knows not what directory this program resides in. The server is an uncaring, but powerful, soul. Yet, when the program is executing, and the program refers to the data just by filename, the server bellows "nay, no path, no data." A knight in shining armor emerges, in the form of a SAS® macro, who says "lo, with the help of the SAS® Enterprise Guide® macro variable minions, I can gift you with the location of the program directory and send that with you to yon mighty server." And there was much rejoicing. Yay. This paper shows you a SAS macro that you can include in your SAS Enterprise Guide pre-code to automatically set your present working directory to the same directory where your program is saved on your UNIX or Linux operating system. This is applicable to submitting to any type of server, including a SAS Grid Server. It gives you the flexibility of moving your code and data to different locations without having to worry about modifying the code. It also helps save time by not specifying complete pathnames in your programs. And can't we all use a little more time?
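The paper has the details, but the core idea can be sketched roughly as follows. This is not the paper's actual macro; it assumes SAS Enterprise Guide's automatic _SASPROGRAMFILE macro variable, a saved program, a UNIX-style path, and a recent SAS release that supports the DLGCDIR function (older setups might shell out with an X or SYSTASK command instead):

```sas
/* Rough sketch, not the paper's macro: point the server session's
   working directory at the directory containing the open program.  */
%macro set_pwd;
   %local progpath progdir rc;

   /* _SASPROGRAMFILE is set by SAS Enterprise Guide to the full
      path of the program; DEQUOTE strips any surrounding quotes    */
   %let progpath = %sysfunc(dequote(&_SASPROGRAMFILE));

   /* Drop the filename: keep everything before the last "/"       */
   %let progdir = %substr(&progpath, 1,
        %length(&progpath) - %length(%scan(&progpath, -1, /)) - 1);

   /* Change the working directory of the SAS session (SAS 9.4M6+) */
   %let rc = %sysfunc(dlgcdir(&progdir));
%mend set_pwd;
%set_pwd
```

With the working directory set this way, the program can keep referring to its data by bare filename, wherever the code and data are moved.
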
Tue, April 4th, 12:30 PM – 1:00 PM, Dolphin Level 3 – Asia 5
Protecting Your Programs from Unwanted Text Using Macro Quoting Functions

Face it: your data can occasionally contain characters that wreak havoc on your macro code, characters such as the ampersand in AT&T or the apostrophe in McDonald's. This paper is designed for programmers who already know most of the ins and outs of SAS® macro code. Now let's take your macro skills a step further by adding to your skill set, specifically, %BQUOTE, %STR, %NRSTR, and %SUPERQ. What is up with all these quoting functions? When do you use one over the other? And why would you need %UNQUOTE? The macro language is full of subtleties and nuances, and the quoting functions represent the epitome of all of this. This paper shows you the instances in which you would use the different quoting functions; specifically, we show you the difference between the compile-time and the execution-time functions. In addition to looking at the traditional quoting functions, you learn how to use %QSCAN and %QSYSFUNC, among other functions that apply the regular function and quote the result.
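As a small taste of what the quoting functions do (an illustrative sketch, not from the paper):

```sas
/* %NRSTR masks text at compile time, including & and %, so the
   &T in AT&T is never treated as a macro variable reference      */
%let company = %nrstr(AT&T);

/* %STR masks compile-time tokens; the %' masks an unmatched
   apostrophe that would otherwise start an open quoted string    */
%let store = %str(McDonald%'s);

/* %SUPERQ masks a macro variable's VALUE at execution time,
   preventing any further resolution of & or % in that value      */
%put Company is %superq(company);
%put Store is &store;
```
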
Tue, April 4th, 4:30 PM – 5:00 PM, Dolphin Level 3 – Asia 5
The Ins and Outs of %IF

Have you ever had your macro code not work and you couldn't figure out why? Maybe even something as simple as %if &sysscp=WIN %then libname libref 'c:\temp'; ? This paper is designed for programmers who know %LET and can write basic macro definitions already. Now let's take your macro skills a step further by adding to your skill set. The %IF statement can be a deceptively tricky statement due to how IF statements are processed in a DATA step and how that differs from how %IF statements are processed by the macro processor. Focus areas of this paper are: 1) emphasizing the importance of the macro facility as a code-generation facility; 2) how an IF statement in a DATA step differs from a macro %IF statement, and when to use which; 3) why semicolons can be misinterpreted in an %IF statement.
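The paper walks through the semicolon trap in detail; a rough sketch of the issue and the usual fix (using a hypothetical libref, not the paper's full example):

```sas
%macro setlib;
   /* Trap: the semicolon here terminates the %IF/%THEN macro
      statement itself, so the generated LIBNAME statement is sent
      to SAS WITHOUT its terminating semicolon and runs into
      whatever text follows                                         */
   %if &sysscp = WIN %then libname libref 'c:\temp';

   /* Fix: wrap the generated code in %DO/%END so the semicolon
      stays with the LIBNAME statement it belongs to                */
   %if &sysscp = WIN %then %do;
      libname libref 'c:\temp';
   %end;
%mend setlib;
```
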
Wed, April 5th, 12:30 PM – 1:30 PM, Dolphin Level 3 – Oceanic 8
The Elusive Data Scientist: Real-World Analytic Competencies

You've all seen the job posting that looks more like an advertisement for the ever-elusive unicorn. It begins by outlining the required skills, a mixture of tools, technologies, and masterful "things that you should be able to do." Unfortunately, many such postings begin with restrictions to those with advanced degrees in math, science, statistics, or computer science and experience in your specific industry. Candidates must be able to perform predictive modeling and natural language processing and, for good measure, should apply only if they know artificial intelligence, cognitive computing, and machine learning. The candidate should be proficient in SAS®, R, Python, Hadoop, ETL, real-time, in-cloud, in-memory, and in-database, and must be a master storyteller. I know of no one who fits that description and can still hold a normal conversation with another human. In our work, we have developed a competency model for analytics, which describes nine performance domains encompassing the knowledge, skills, behaviors, and dispositions that today's analytic professional should possess in support of a learning, analytically driven organization. In this paper, we describe the model and provide specific examples of job families and career paths that can be followed based on the domains that best fit your skills and interests. We also share with participants a self-assessment tool so that they can see where they stack up!