Proc Arima Moving Average


Popular topics in this section include the use of PROC REPORT, SAS Styles, Templates, and ODS, as well as a variety of techniques used to deliver SAS results to Microsoft Excel, PowerPoint, and other Office applications. Topics include graphics, data visualization, publishing, and reporting. Popular topics in this section also include the use of SAS/GRAPH, SAS Styles, Templates, and ODS, as well as a variety of techniques used to deliver SAS results to Microsoft Excel and other Office applications.

Data science is considered an extension of statistics, data mining, and predictive analytics. This section focuses on how "the Sexiest Job of the 21st Century" is done in SAS. Areas of interest include text analytics and social media data.

Presenters prepare a digital display that is available to all attendees throughout the conference, rather than giving a lecture-style presentation. The section typically features high-resolution graphics concepts or ideas that lend themselves to independent study by conference attendees. Presentations center on visualizing data, including PROC GPLOT, animated graphics, and other customizations.

Hands-On Workshops provide attendees with "hands-on-the-keyboard" interaction with SAS software during each presentation. Presenters walk attendees through examples of SAS software techniques and features, offering the opportunity to ask questions and to learn by doing. All HOW presentations are given by experienced SAS users who are invited to present.

This section features presentations on data integration, analysis, and reporting, but with industry-specific content. Examples of content-driven topics include: health outcomes and health research methods; data standards and quality control for clinical trial data submissions to the FDA; banking, credit card, insurance, and risk management; and insurance modeling and analysis.

This section helps SAS users understand how to dive into the rich world of resources dedicated to SAS education and training, publishing, social media, consulting, certification, technical support, and opportunities for affiliation and professional growth.

This section allows novice SAS users and others to attend a series of presentations that guide them through the fundamental concepts of the basic SAS DATA step and PROC syntax, followed by two hands-on workshops. All SAS Essentials presentations are given by experienced SAS users who are invited to present.

If you have a program that runs a long time, or that will be run many times, you may want to track how long each part of the program takes to run. This can help you find the slow parts of your program and predict how long a future run will take. This paper presents a tool to help with these problems: the WriteProgramStatus macro provides a way to create a status file that is easily read by humans or machines.
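The WriteProgramStatus macro itself is not reproduced here, but a minimal sketch of the general idea (a hypothetical macro that appends one timestamped line per completed step to a plain-text status file) could look like this:

```sas
/* Hypothetical sketch only; not the paper's WriteProgramStatus implementation */
%macro write_status(step=, file=status.txt);
  data _null_;
    file "&file" mod;                          /* MOD appends to the file */
    put "%sysfunc(datetime(), datetime20.) | &step";
  run;
%mend write_status;

%write_status(step=Import finished)
%write_status(step=Model fit finished)
```

Reading the file after a run (or diffing it against a previous run) gives a crude but effective per-step timing profile.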
IF-THEN-ELSE and Beyond: Techniques for Conditional Execution of SAS Code. Almost every SAS® program includes logic that causes certain code to be executed only when specific conditions are met. This is most often done with IF...THEN...ELSE syntax. In this paper we will explore various ways to construct conditional SAS logic, including some that may offer advantages over the IF statement. Topics will include the SELECT statement, the IFC and IFN functions, the CHOOSE and WHICH families of functions, and the COALESCE function. We'll also make sure we understand the difference between a regular IF and the macro %IF statement.
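A small taste of those alternatives, using only standard DATA step features (data set and variable names are illustrative):

```sas
data graded;
  set scores;                        /* assumes numeric SCORE and RETEST */
  /* SELECT: a tidier alternative to a chain of IF-THEN/ELSE statements */
  select;
    when (score >= 90) grade = 'A';
    when (score >= 80) grade = 'B';
    otherwise          grade = 'F';
  end;
  flag  = ifc(score >= 60, 'PASS', 'FAIL');  /* IFC returns a character value */
  final = coalesce(retest, score);           /* first non-missing argument   */
run;
```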
A Waze App for Base SAS®: Automatically Routing around Locked Data Sets, Bottleneck Processes, and Other Traffic Congestion on the Data Superhighway. The Waze application, purchased by Google in 2013, alerts millions of users to traffic congestion, collisions, construction, and other complexities of the road that can waylay motorists trying to get from A to B. From jackknifed tractor-trailers to jackalope carcasses, roadways can be stymied by congestion or strewn with obstacles that impede traffic flow and efficiency. Waze algorithms automatically reroute users to more efficient routes based on user-reported events as well as historical norms that demonstrate typical road conditions. Extract, transform, load (ETL) infrastructures often represent serialized process flows that can mimic roadways and that can similarly become congested by locked data sets, slow processes, and other factors that introduce inefficiency. The LOCKITDOWN SAS® macro, introduced at WUSS in 2014, detects and prevents data access collisions that occur when two or more SAS processes or users simultaneously attempt to access the same SAS data set. Moreover, the LOCKANDTRACK macro, introduced at WUSS in 2015, provides real-time tracking of, and historical performance metrics for, locked data sets through a unified control table, enabling developers to hone processes to optimize efficiency and data throughput. This text demonstrates the implementation of LOCKSMART and its lock performance metrics to build data-driven, fuzzy-logic algorithms that preemptively reroute program flow around inaccessible data sets. Thus, rather than needlessly waiting for a data set to become available or for a process to complete, the software actually anticipates the wait time based on historical norms, performs other (independent) functions, and returns to the original process once it becomes available.

Using SAS® Emergency Medicine in the 21st Century: Toward Exception Handling Goals, Actions, Outcomes, and Committees. Emergency medicine comprises a continuum of care that often begins with first aid, basic life support (BLS), or advanced life support (ALS). Responders, including firefighters, emergency medical technicians (EMTs), and paramedics, are often the first to triage the sick, injured, and infirm, rapidly assessing the situation, providing curative and palliative care, and transporting patients to medical facilities. Emergency medical services (EMS) treatment protocols and standard operating procedures (SOPs) ensure that, despite the singular nature of each patient as well as possible complications, trained personnel have an array of tools and techniques to deliver varying degrees of care in a standardized, repeatable, and accountable fashion. Just as EMS providers must assess patients to prescribe an effective course of action, software too must identify and assess process deviation or failure, and similarly prescribe its commensurate course of action. Exception handling describes the identification and resolution of adverse, unexpected, or untimely events that can occur during software execution, and it should be implemented in SAS® software that demands reliability and robustness. The objective of exception handling is always to route process control back to the "happy trail" or "happy path", that is, the originally intended process path that delivers full business value. But when insurmountable events do occur, exception handling routines should instruct the process, program, or session to terminate gracefully to avoid damage or other adverse effects. Between the polar outcomes of a fully recovered program and graceful program termination, however, lie several other exception resolution paths that can deliver full or partial business value, sometimes with only a slight delay. To that end, this text demonstrates those paths and discusses several internal and external modalities for communicating exceptions to SAS users, developers, and other stakeholders.

Wouldn't it be nice if your long-running program could tap you on the shoulder and say, "Okay, I'm all done now"? This quick tip will show you how easy it is to have your SAS® program send you (or anyone else) an email during program execution. Once you have the simple basics down, you will come up with all sorts of uses for this great feature and wonder how you ever lived without it.
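The feature behind this tip is the EMAIL access method of the FILENAME statement. A minimal sketch, assuming your site has the relevant email system options (such as EMAILHOST) configured; the address is a placeholder:

```sas
filename notify email to="you@example.com" subject="SAS job finished";

data _null_;
  file notify;   /* writing to the fileref sends the message */
  put "The long-running job completed at %sysfunc(datetime(), datetime20.).";
run;
```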
Finding All the Differences in Two SAS Libraries Using Proc Compare (Bharat Kumar Janapala). In the clinical industry, validating data sets by parallel programming and comparing the derived data sets with PROC COMPARE is routine practice, but constant updates to the raw data make it difficult to track all the differences between two libraries. The program presented here points out all the differences between the libraries in an optimized way with the help of PROC COMPARE and the SAS dictionary tables. First, the program describes the data sets present in the libraries and lists the data sets unique to either library. Second, the program looks up the total number of observations and variables present in each data set in both libraries and lists both the mismatched variables and the data sets with differing observation counts. Third, assuming both libraries are identical, the program PROC COMPAREs the data sets with matching names and captures the differences, which can be monitored by assigning a maximum number of differences per variable for optimization. Finally, the program reads all the differences and provides a consolidated report followed by a per-data-set description.

Let the Environment Variable Help You: Moving Files across Studies and Creating SAS Libraries On the Go. In clinical trials, SAS data sets and programs are stored under different studies under different products on Unix. SAS programmers need to access these locations frequently, to read data for programming or to copy files for reuse in new analyses. Typing the long directory paths is time consuming and irritating. This paper describes an efficient way to store the various directory paths in advance in environment variables. These predefined environment variables can be used for Unix file operations (copying, deleting, and searching for files, etc.). The information carried by these variables can also be passed to SAS to build libraries wherever you go.

Check, Please: An Automated Approach to Log Checking. In the pharmaceutical industry, we find ourselves having to re-run our programs repeatedly for each deliverable. These programs can be run individually in an interactive SAS® session, which allows us to review the logs as we execute the programs. We could also run the individual programs in batch and open each individual log to review for unwanted log messages, such as ERROR, WARNING, uninitialized, have been converted, and so on. Both approaches are fine if there are only a handful of programs to run. But what do you do if you have hundreds of programs that need to be re-run? Do you want to open every log and search for unwanted messages? This manual approach could take hours and is prone to accidental oversights. This paper discusses a macro that searches a specified directory and checks either all of the logs in the directory, only the logs with a specific naming convention, or only the files listed. The macro then produces a report that lists all of the files checked and indicates whether or not issues were found.

Let SAS® Do Your Dirty Work. Making sure that all the information necessary to replicate a deliverable is saved can be a cumbersome task. You want to make sure that all the raw data sets are saved, that all the derived data sets, whether SDTM or ADaM, are saved, and you prefer that the date and time stamps be preserved. Not only do you need the data sets, you also need to keep a copy of all the programs that were used to produce the deliverable, as well as the corresponding logs from when the programs were executed. Any other information needed to produce the required outputs must be saved, too. All of this needs to be done for every deliverable, and it can be easy to overlook a step or some key piece of information. Most people do this process manually, and it can be time consuming, so why not let SAS do the work for you?

Comparing .LST Files with Proc Compare Results (Manvitha Yennam and Srinivas Vanam). The most widely used method for validating programs is double programming, which involves two programmers working on a single program and finally comparing their outputs using a procedure such as PROC COMPARE. The results of PROC COMPARE are usually produced as .LST files. Most companies review these manually, checking every .LST file to make sure the outputs match. This manual process is both time consuming and error prone. The purpose of this paper is to use a SAS macro instead of the manual review process. The macro reads all the .LST files under a given path, creates a summary of the files, and indicates whether each one has an issue or not, as well as the type of issue.
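The macros above are the authors' own, but the underlying pattern (list the log files, then scan each one for suspect phrases) can be sketched with the FILEVAR= option. The directory path and the ls command are placeholders; use dir /b on Windows:

```sas
filename loglist pipe 'ls /myproject/logs/*.log';

data issues;
  infile loglist truncover;
  input logname $256.;                              /* one log file name */
  infile dummy filevar=logname end=done truncover;  /* open that log     */
  do until (done);
    input line $512.;
    if index(line, 'ERROR') or index(line, 'WARNING') or
       find(line, 'uninitialized') or find(line, 'converted')
      then output;                                  /* keep suspect lines */
  end;
run;
```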
Read any publication, from national media to your local news site: educational achievement, particularly in the STEM fields, is a major concern, and billions of dollars are spent to address the problem. How can SAS be applied to analyze the outcome of an intervention and, equally important, to convey the results of that analysis to a non-technical audience? Using real data from evaluations of educational games, this presentation walks through the steps of an evaluation, from needs assessment to measurement validation to pre-post test comparison. The techniques applied include PROC FREQ with options for correlated data, PROC FACTOR for factor analysis, and PROC TTEST and PROC GLM for repeated measures ANOVA. Liberal use is made of ODS Statistical Graphics throughout. Since the analyses use standard SAS/STAT procedures, they can be run on any operating system with SAS, including SAS Studio on an iPad.

Constructing Confidence Intervals for Differences of Binomial Proportions in SAS®. Given two binomial proportions, we wish to construct a confidence interval for their difference. The best-known method is the Wald method (that is, the normal approximation), but it can produce undesirable results in extreme cases (for example, when the proportions are near 0 or 1). Many other methods exist, including asymptotic, approximate, and exact methods. This paper presents 9 different methods for constructing such confidence intervals, 8 of which are available in SAS® 9.3 procedures. The methods are compared, and thoughts are offered on which method to use.
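Several of those intervals are available directly in PROC FREQ. A hedged sketch; which RISKDIFF suboptions exist depends on your SAS/STAT release, and the data set shown is hypothetical cell-count data:

```sas
proc freq data=trial;
  tables arm*response / riskdiff;   /* Wald interval by default      */
  exact riskdiff;                   /* exact interval; can be slow   */
  weight count;                     /* data are in cell-count form   */
run;
```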
An Animated Guide: Incremental Response Modeling in Enterprise Miner. Some people can be expected to buy a product without any marketing contact. If all potential customers are contacted, a company cannot determine the true effect of a marketing manipulation. This talk uses the Incremental Response node in SAS® Enterprise Miner™ to solve a basic marketing problem. Marketers often target, and spend money contacting, all potential customers. This is wasteful, since some of these people would become customers on their own. The node uses a data set to separate customers into groups: 1) those likely to buy anyway, 2) those likely to buy if they are the subject of marketing campaigns, and 3) customers expected to be resistant to marketing efforts.

Employing Latent Analyses in Longitudinal Studies: An Exploration of Independently Developed SAS® Procedures. This paper examines several ways to investigate latent variables in longitudinal surveys using three independently developed SAS® procedures. Three different analyses for latent variable discovery are reviewed and explored: latent class analysis, latent transition analysis, and latent trajectory analysis. The latent analysis procedures explored in this paper (all of which were developed outside the SAS® Institute) are PROC LCA, PROC LTA, and PROC TRAJ. The details behind these procedures, and how to add them to one's procedure library, are explored and then applied to an exploratory case study question. The effect of latent variables on the fit and use of a regression model, compared with a similar model using observed data, may also be briefly reviewed. The data used for this study were obtained from the National Longitudinal Study of Adolescent Health, a study distributed and collected by Add Health. The data were analyzed using SAS 9.4. This paper is intended for moderate to advanced SAS® users, and it is written for an audience with a background in behavioral science and statistics.

MIghty PROC MI to the Rescue. Missing data are a feature of many data sets: participants may withdraw from studies, self-reported measures may go unreported, and technical problems can sometimes interfere with data collection. If we use only complete observations, we are left with larger standard errors, wider confidence intervals, and larger p-values. Missing data methods, such as complete case analysis or imputation, can be used, but the missing data mechanisms and patterns must be understood first. This paper provides an overview of the sources, patterns, and mechanisms of missing data. A complete data set is used to obtain the true regression analysis results. Two data sets with missing values are then created, one with data missing completely at random and one with data missing not at random. The complete-case, single imputation, and multiple imputation missing data methods are applied, using PROC MI and PROC MIANALYZE in SAS® 9.4 for the analysis. Results from the missing data methods are compared to each other and to the true results.
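The standard three-step multiple imputation flow with these procedures looks roughly like this (variable names are illustrative):

```sas
proc mi data=have nimpute=20 out=imp seed=20170101;
  var x1 x2 y;                      /* imputation model */
run;

proc reg data=imp outest=est covout noprint;
  model y = x1 x2;
  by _imputation_;                  /* analyze each completed data set */
run;
quit;

proc mianalyze data=est;            /* pool the 20 sets of estimates */
  modeleffects intercept x1 x2;
run;
```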
John Amrhein and Fei Wang: Motivated by the frequent need for equivalence tests in clinical trials, this paper provides insights into tests for equivalence. We summarize and compare equivalence tests for different study designs, including designs for the one-sample problem, designs for the two-sample problem (paired observations and two unrelated samples), and designs with multiple treatment arms. Power and sample size estimation are discussed. We also provide examples implementing the methods using the FREQ, TTEST, MIXED, and POWER procedures in SAS/STAT® software.

Distance Correlation for Vectors: A SAS® Macro. The Pearson correlation coefficient is well known and widely used. However, it suffers from certain constraints: it is a measure of linear dependence (only), it does not provide a test of statistical independence, and it is restricted to univariate random variables. Since its inception, related and alternative measures have been proposed to overcome these constraints, and several new measures to replace or complement Pearson correlation have appeared in the statistical literature in recent years. Székely et al. (2007) describe one such measure, distance correlation, that overcomes the shortcomings of the Pearson correlation. Distance correlation is defined for two random variables X and Y (which may be vectors) as a weighted distance function applied to the difference between the joint characteristic function for (X, Y) and the product of the individual characteristic functions for X and Y. In practice, it is estimated by computing the individual distance matrices for X and Y, and distance correlation is a measure of similarity for the two matrices. For the bivariate normal case, distance correlation is a function of the Pearson correlation. Distance correlation also supports a related test of statistical independence. Distance correlation has performed well in simulation studies comparing it with other alternatives to the Pearson correlation. Here we present a basic SAS® macro to compute distance correlation for arbitrary real vectors.
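For reference, the population-level definition from Székely et al. (2007) that such a macro estimates:

```latex
% Population distance covariance: a weighted L2 distance between the joint
% characteristic function and the product of the marginal characteristic
% functions of X (in R^p) and Y (in R^q)
\mathcal{V}^2(X,Y)
  = \frac{1}{c_p c_q}\int_{\mathbb{R}^{p+q}}
    \frac{\left| f_{X,Y}(t,s) - f_X(t)\, f_Y(s) \right|^2}
         {|t|_p^{1+p}\, |s|_q^{1+q}}\, dt\, ds

% Distance correlation, defined whenever the denominator is positive
\mathcal{R}(X,Y)
  = \frac{\mathcal{V}(X,Y)}{\sqrt{\mathcal{V}(X,X)\,\mathcal{V}(Y,Y)}}

% Sample estimator: A and B are the double-centered distance matrices
% of the X and Y observations
\widehat{\mathcal{V}}_n^2(X,Y) = \frac{1}{n^2} \sum_{k,l=1}^{n} A_{kl} B_{kl}
```

The sample version replaces the integral with the average of the elementwise product of the two double-centered distance matrices, which is why a matrix-oriented macro (or SAS/IML) implementation is natural.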
Determining the Functionality of Water Pumps in Tanzania Using SAS® EM and VA (India Kiran Chowdaravarpu, Vivek Manikandan Damodaran, and Ram Prasad Poudel). Access to clean, hygienic drinking water is a basic luxury that every human being deserves. In Tanzania, 23 million people lack access to clean water and are forced to walk miles to fetch water for daily needs. The prevailing problem is largely a result of the lack of maintenance and the inefficient operation of existing infrastructure, such as hand pumps. To address the current water crisis and ensure access to potable water, it is necessary to locate the non-functional pumps and the functional pumps in need of repair, so that they can be repaired or replaced. It is highly ineffective and impractical to check the functionality of more than 74,251 waterpoints manually in a country like Tanzania, where resources are very limited. The objective of this study is to build a model to predict which pumps are functional, which need some repairs, and which don't work at all, using data from the Tanzanian Ministry of Water. We also identify the important variables that predict a pump's working condition. The data are managed through the Taarifa waterpoints dashboard. After preprocessing, the final data consist of 39 variables and 74,251 observations. We used the SAS Bridge for ESRI and SAS VA to illustrate the spatial variation of functional waterpoints at the regional level of Tanzania, along with other socioeconomic variables. Among decision tree, neural network, logistic regression, and HP Random Forest models, the HP Random Forest model was the champion. The misclassification rate, sensitivity, and specificity of the model are 24.91%, 62.7%, and 91.7%, respectively. Classifying the water pumps with the champion model will speed up waterpoint maintenance operations, helping to ensure clean and accessible water across Tanzania at low cost and within a short period of time.

Fitting Threshold Models Using the SAS® NLIN and NLMIXED Procedures.

Hierarchical Generalized Linear Models for Behavioral Health Risk-Standardized 30-Day and 90-Day Readmission Rates. The Achievements in Clinical Excellence (ACE) program incentivizes excellence across a behavioral health network of facilities by rewarding those that provide the highest quality of care. Two key benchmarks of outcome effectiveness in the ACE program are the risk-adjusted 30-day and 90-day readmission rates. Risk adjustment was accomplished with hierarchical generalized linear models (HGLM) to account for differences among hospitals in patient demographic and clinical characteristics. One year of administrative admission data (June 30, 2013 through July 1, 2014) for 30-day (N=78,761 patients, 2,233 hospitals) and 90-day (N=74,540 patients, 2,205 hospitals) readmissions served as the data sources. HGLM simultaneously models two levels: 1) the patient level, modeling the log-odds of hospital readmission using age, sex, selected clinical covariates, and a hospital-specific intercept, and 2) the hospital level, with a random hospital intercept that accounts for the within-hospital correlation of the observed data. PROC GLIMMIX was used to implement an HGLM with hospital as the (hierarchical) random effect, separately for substance use disorder (SUD) admissions and mental health (MH) admissions, pooled to obtain a hospital-wide risk-adjusted readmission rate. The HGLM methodology was derived from the Centers for Medicare & Medicaid Services (CMS) documentation for the 2013 all-cause hospital-wide risk-standardized readmission measures SAS package. This methodology was carried out separately on the 30-day and 90-day readmission data. The final metrics were a hospital-wide risk-adjusted 30-day readmission rate percentage and a hospital-wide risk-adjusted 90-day readmission rate percentage. The HGLM models were cross-validated on new production data that overlapped with the development sample. The revised HGLM models were tested in April 2015, and the outcome statistics were extremely similar. In short, testing of the revised models validated the original HGLM models, since the revised models were based on different samples.

Demystifying the CONTRAST and ESTIMATE Statements. Many analysts are mystified about how to use the CONTRAST and ESTIMATE statements in SAS to test a variety of general linear hypotheses (GLHs). GLHs can be used to parsimoniously test key comparisons and complex hypotheses; however, setting up even a simple GLH tends to intimidate some SAS users. Examples from various sources seem to magically arrive at the correct answer. The key is to understand how the procedure parameterizes the model and then to use that parameterization to construct the GLH. CONTRAST and/or ESTIMATE statements are available in many of the modeling procedures in SAS, but not all procedures use the same syntax for these statements. This presentation demystifies the use of the CONTRAST and ESTIMATE statements using examples in PROC GLM, LOGISTIC, MIXED, GLIMMIX, and GENMOD.
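As a flavor of the parameterization point: in PROC GLM, a three-level CLASS variable trt (levels A, B, C) is ordered alphabetically, so contrast coefficients are listed in that order (data and names are illustrative):

```sas
proc glm data=study;
  class trt;                                 /* levels: A, B, C */
  model y = trt / solution;
  estimate 'A minus B'          trt 1 -1   0;
  contrast 'A vs mean of B & C' trt 1 -0.5 -0.5;
run;
quit;
```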
A Brief Introduction to Reliability Engineering and PROC RELIABILITY for Non-Engineers. Reliability engineering is concerned with how often a product or system fails under stated conditions over time. In the modern world, it is important that a product or system hold up for a long time; even with today's well-developed technology, some systems will eventually fail. Mathematical and statistical methods are useful for quantifying and analyzing reliability data. The most important priority of reliability engineering, however, is to apply engineering knowledge to prevent the likelihood of failures. This paper introduces the idea of reliability engineering to non-engineers, along with PROC RELIABILITY, and demonstrates some applications to reliability data.

Simulating Queuing Models in SAS. This paper shows users how to simulate queuing models using a set of SAS macros: MM1, MG1, and MMC. The macros simulate queuing systems in which entities (such as customers, patients, cars, or email messages) arrive, are served at a single station or at several stations in turn, may have to wait in one or more queues for service, and then may leave. After the simulation, SAS provides graphical output as well as a statistical analysis of the desired queuing model.

How Utilizing Propensity Scores Helps Control for Selection Bias. An important strength of observational studies is the ability to estimate a key behavior's or treatment's effect on a specific health outcome. This is a crucial strength, since most health outcomes research studies cannot use experimental designs because of ethical and other constraints. With this in mind, a drawback of observational studies (which experimental studies naturally control for) is that they lack the ability to randomize participants into treatment groups, which can result in the unwanted inclusion of selection bias. One way to adjust for selection bias is through a propensity score analysis. In this paper we explore an example of how to utilize these types of analyses. To demonstrate the technique, we explore whether recent substance abuse has an effect on adolescents' reporting of suicidal thoughts. To conduct this analysis, a selection bias was identified, and adjustment was sought through three common forms of propensity scoring: stratification, matching, and regression adjustment. Each form is conducted separately, reviewed, and assessed for its effectiveness in improving the model. The data for this study were collected through the Youth Risk Behavior Surveillance System, an ongoing nationwide project of the Centers for Disease Control and Prevention. This presentation is designed for any level of statistician, SAS® programmer, or data analyst with an interest in controlling for selection bias.
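Of the three forms, stratification is the most compact to sketch: fit an exposure model, bin the fitted propensity into quintiles, and adjust the outcome model for the bins. All data set and variable names here are hypothetical:

```sas
proc logistic data=yrbs descending;
  model substance_abuse = age grade sex;   /* exposure (propensity) model */
  output out=ps p=pscore;
run;

proc rank data=ps groups=5 out=ps;         /* quintiles of the score */
  var pscore;
  ranks ps_q;
run;

proc logistic data=ps descending;          /* outcome model, stratified */
  class ps_q;
  model suicidal_thoughts = substance_abuse ps_q;
run;
```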
Using SAS to Analyze County Survey Data: A Look at Adverse Childhood Experiences and Their Impact on Long-Term Health. The Adverse Childhood Experiences (ACE) scale measures childhood exposure to abuse and household dysfunction. Research suggests that ACEs are associated with greater risks of engaging in risky behaviors, poor quality of life, and morbidity and mortality later in life. In Santa Clara County, a large, diverse county where 88% of residents have home internet access, we conducted a county-wide behavioral risk factor survey of adults with a unique web-based follow-up. We conducted a random-digit-dial telephone survey (N=4,186) and a follow-up online survey using the CDC BRFSS ACE module. Among those eligible for the web survey, the response rate was 33%. The online ACE module comprised 11 questions forming 8 categories of abuse and household dysfunction. PROC SURVEYFREQ and SURVEYLOGISTIC were used in SAS 9.4 to analyze the survey data and provide estimates for Santa Clara County as a whole. A majority of respondents (74%) reported having experienced 1 or more ACEs. Emotional abuse was the most common (44%), followed by household substance abuse (28%) and household mental illness (25%). The prevalence of emotional abuse, household substance abuse, physical abuse, and household mental illness was higher among individuals with high (3 or more) ACE counts than among those with low (1-2) counts. Indicators of poor perceived health showed a strong association with ACEs. The odds of reporting 1 or more poor mental health days in the past month were higher among individuals with low ACEs (OR=2.86), high ACEs (OR=6.74), and among women (OR=2.27). A web-based survey offers a reliable means of assessing a population on sensitive subjects such as ACEs, at a lower cost than a telephone survey in smaller jurisdictions. The results suggest that ACEs are common among adults in the county and may be underestimated in telephone interviews. PROC SURVEYFREQ and SURVEYLOGISTIC in SAS are powerful tools for analyzing survey data, especially for small-area estimates of county residents' health.

How D-I-D You Do That? Basic Difference-in-Differences Models in SAS. Long used in econometrics research, difference-in-differences (DID) models have recently become more common in health services and epidemiologic research. DID study designs are quasi-experimental, can be used with retrospective observational data, and do not require randomization of the exposure. The design estimates the difference in pre-post changes in an outcome by comparing an exposed group with an unexposed (reference) group. The outcome change in the unexposed group estimates the change expected in the exposed group had it, counterfactually, not been exposed. By subtracting this change from the change in the exposed group (the difference of the differences), the effects of background secular trends are removed. In the basic DID model, each subject serves as his or her own control, eliminating confounding by known and unknown individual factors associated with the outcome of interest. DID thus yields a causal estimate of the change in an outcome associated with the initiation of the exposure of interest while controlling for biases due to secular trends and confounding. A basic repeated-measures generalized linear model provides estimates of the population-average slopes between two time points for the exposed and unexposed groups, and it tests whether the slopes differ by including an interaction term between the time and exposure variables. In this paper, we illustrate the concepts behind the basic DID model and the actual SAS code for running these models. We include a brief discussion of more advanced DID methods and present an example of a real-world analysis using data from a study on the impact of introducing a value-based insurance design (VBID) medication plan at Kaiser Permanente Northern California on change in medication adherence.
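A minimal version of that repeated-measures DID model, with the DID estimate carried by the exposure-by-time interaction (data set and variable names are illustrative, and 0/1 coding is assumed):

```sas
proc genmod data=panel;
  class id exposed(ref='0') time(ref='0');
  model y = exposed time exposed*time / dist=normal link=identity;
  repeated subject=id / type=cs;     /* two measures per subject */
run;
```

The coefficient on exposed*time is the difference-in-differences estimate; the main effects absorb the group difference at baseline and the secular trend.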
Using PROC PHREG to Assess Hazard Ratios in a Longitudinal Environmental Health Study. Air pollution, especially combustion products, can activate metabolic disorders through inflammatory pathways, potentially leading to obesity. The effect of air pollution on BMI growth was shown in a previous study (Jerrett et al., 2014). Recognizing the role of air pollution in the development of obesity in children can help guide possible interventions to reduce obesity formation. The objective of this paper is to analyze the incidence of obesity among children participating in the Children's Health Study (CHS) who were non-obese at baseline, to identify the time interval for the onset of obesity, and to identify the effects of various risk factors, especially air pollutants. PROC PHREG was used to create a model, within a macro, that included community random effects, was stratified by sex, and adjusted for baseline characteristics.

Using PROC LOGISTIC for Conditional Logistic Regression to Evaluate Vehicle Safety Performance. The LOGISTIC procedure has several capabilities beyond standard logistic regression on binary outcome variables. For conditional logit models, PROC LOGISTIC can perform several types of matching: 1:1, 1:M, and even M:N matching. This paper shows an example of using PROC LOGISTIC for conditional logit models to evaluate vehicle safety performance in fatal accidents using the Fatality Analysis Reporting System (FARS) 2004-2011 database. Conditional logistic regression models were fit with an additional stratum parameter to model the relationship between driver fatality and the vehicle's continent of origin.

Identifying Duplicates Made Easy (Elizabeth Angel and Yunin Ludena). Have you ever had trouble removing or finding exactly the type of duplicate you want? SAS offers several different ways to identify, extract, and/or remove duplicates, depending on exactly what you want. We start by demonstrating perhaps the most commonly used method, PROC SORT, the types of duplicates it can identify, and how to remove, flag, or store them. Then we present the other, less commonly used methods, which can give information that PROC SORT cannot, including the DATA step (FIRST./LAST.), PROC SQL, PROC FREQ, and PROC SUMMARY. The programming is demonstrated at a beginner's level.

Don't Forget About Small Data. Beginning in the world of data analytics and eventually flowing into mainstream media, we are seeing a lot about Big Data and how it can influence our work and our lives. Through examples, this paper explores how Small Data, which is everything Big Data is not, can and should influence our programming efforts. The ease with which we can read and manipulate data from different formats into usable tables in SAS® makes using data to manage data very simple, and it supports healthy and efficient practices. This paper explores how using small or summarized data can help to organize and track program development, simplify coding, and optimize code.

Let the CAT Out of the Bag: String Concatenation in SAS® 9. Are you still using TRIM, LEFT, and vertical bar operators to concatenate strings? It's time to modernize and streamline that clumsy code by using the string concatenation functions introduced in SAS® 9. This paper is an overview of the CAT, CATS, CATT, and CATX functions introduced in SAS® 9, and the new CATQ function added in SAS® 9.2. In addition to making your code more compact and readable, this family of functions also offers some new tricks for accomplishing previously cumbersome tasks.
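The flavor of the modernization, old style versus the CAT family, on toy values:

```sas
data _null_;
  first = 'Ada  ';  last = '  Lovelace';  id = 42;
  old_way = trim(left(first)) || ' ' || trim(left(last));  /* pre-SAS 9  */
  new_way = catx(' ', first, last);  /* strips blanks, adds a delimiter  */
  code    = cats('ID-', id);         /* CATS also converts numeric args  */
  put old_way= new_way= code=;
run;
```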
SAS® Abbreviations: A Shortcut for Remembering Complicated Syntax (Yaorui Liu, Department of Preventive Medicine, University of Southern California). One of many difficulties for a SAS® programmer is remembering how to accurately use SAS syntax, especially syntax that includes many parameters. Not mastering the basic syntax parameters by heart will definitely make one's coding inefficient, since one has to check the SAS reference manual constantly to ensure that the syntax is implemented properly. One of the more useful tools in SAS, yet seldom known by novice programmers, is SAS Abbreviations. It allows users to store text strings, such as the syntax of a DATA step function, a SAS procedure, or a complete DATA step, under a user-defined, easy-to-remember abbreviated term. Once the abbreviated term is typed in the Enhanced Editor, SAS automatically brings up the corresponding stored syntax. Knowing how to use SAS Abbreviations will ultimately benefit programmers with varying levels of SAS expertise. In this paper, various examples of utilizing SAS Abbreviations are demonstrated.

Implementation of Good Programming Practices in Clinical SAS. Base SAS® software provides users with many choices for accessing, manipulating, analyzing, and processing data and results. Partly due to the power offered by the SAS software and the size of data sources, many application developers and end users are in need of guidelines for more efficient use. This presentation highlights my personal top ten list of performance tuning techniques for SAS users to apply in their applications. Attendees learn DATA and PROC step language statements and options that can help conserve CPU, I/O, data storage, and memory resources while accomplishing tasks involving processing, sorting, grouping, joining (merging), and summarizing data.
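One representative technique from that family: shrink the data as early as possible so that downstream steps move fewer bytes (names are illustrative):

```sas
/* Read only the needed rows and columns instead of filtering afterwards */
data slim;
  set big(keep=id visit_date amount
          where=(visit_date >= '01JAN2015'd));
run;
```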
Sorting a Bajillion Records: Conquering Scalability in a Big Data World. "Big data" is often distinguished as encompassing high volume, velocity, or variability of data. While big data can signal big business intelligence and big business value, it also can wreak havoc on systems and software ill-prepared for its profundity. Scalability describes the ability of a system or software to adequately meet the needs of additional users, or its ability to utilize additional processors or resources to fulfill those added requirements. Scalability also describes the adequate and efficient response of a system to increased data throughput. Because sorting data is one of the most common as well as most resource-intensive operations in any software language, inefficiencies or failures caused by big data often are first observed during sorting routines. Much SAS® literature has been dedicated to optimizing big data sorts for efficiency, including minimizing execution time and, to a lesser extent, minimizing resource usage (i.e., memory and storage consumption). Less attention has been paid, however, to implementing big data sorting that is reliable and robust even when confronted with resource limitations. To that end, this text introduces the SAFESORT macro, which facilitates a priori exception handling routines (which detect environmental and data set attributes that could cause process failure) and post hoc exception handling routines (which detect actual failed sorting routines). If exception handling is triggered, SAFESORT automatically reroutes program flow from the default sort routine to a less resource-intensive routine, thus sacrificing execution speed for reliability. However, because SAFESORT does not exhaust system resources the way default SAS sorting routines do, in some cases it performs more than 200 times faster than the default SAS sorting methods. Macro modularity moreover allows developers to select their favorite sorting routine and, for data-driven disciples, to build fuzzy logic routines that dynamically select a sort algorithm based on environmental and data set attributes.

SAS Integration with NoSQL Databases. We are living in a world of abundant data, so-called "big data". The term "big data" is closely associated with data of any structure: unstructured, structured, and semi-structured. Data are called "unstructured" or "semi-structured" when they do not fit neatly into a traditional row-column relational database. A NoSQL (Not only SQL, or non-relational SQL) database is a type of database that can handle data of any structure; for example, a NoSQL database can store XML (Extensible Markup Language), JSON (JavaScript Object Notation), or RDF (Resource Description Framework) files. If an enterprise is able to extract such data from NoSQL databases and transfer it to the SAS environment for analysis, it can produce tremendous value, especially from a big data solutions standpoint. This paper shows how data of any structure is stored in NoSQL databases and ways to transfer it to the SAS environment for analysis. First, the paper introduces the NoSQL database. Second, it shows how the SAS system connects to NoSQL databases using REST (Representational State Transfer) APIs (Application Programming Interfaces); for example, SAS programmers can use PROC HTTP to extract XML or JSON files through a REST API from the NoSQL database. Finally, it shows how SAS programmers can convert XML and JSON files into SAS data sets for analysis; for example, by creating XMLMap files and using the XMLV2 LIBNAME engine to convert the extracted XML files to SAS data sets.
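A hedged sketch of the REST-plus-JSON leg of that pipeline; the endpoint is a placeholder, and the JSON LIBNAME engine shown requires SAS 9.4M4 or later:

```sas
filename resp temp;

proc http
  url="https://example.org/api/records"   /* hypothetical REST endpoint */
  method="GET"
  out=resp;
run;

libname js json fileref=resp;             /* map the JSON payload to tables */
proc print data=js.root(obs=5);
run;
```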
DS2 versus DATA Step: Efficiency Considerations. There is recognition that in large, complex systems, the advantages of the object-oriented concepts available in DS2 (modularity, code reuse, and ease of debugging) can provide increased efficiency. Object-oriented programming also allows multiple teams of developers to work on the same project easily. DS2 was designed for data manipulation and data modeling applications that can achieve increased efficiency by running code in threads, splitting the data across multiple processors and disks. Of course, performance also depends on hardware architecture and the amount of effort you put into tuning your architecture and code. Join our panel for a discussion of architecture, tuning, and data size considerations in determining whether DS2 is the more efficient alternative.

Using Shared Accounts in Kerberized Hadoop Clusters with SAS®: How Can I Do That? Using shared accounts to access third-party data servers is a common architecture in SAS® environments. SAS software can support seamless user access to shared accounts in databases such as Oracle via group definitions and outbound authentication domains in Metadata. However, the configurations necessary to leverage shared accounts in Hadoop clusters with Kerberos authentication are more complicated. Not only must Kerberos tickets be generated and maintained simply to access the Hadoop environment, but those tickets must allow access as the shared account instead of as the individual users' accounts. Methods for implementing this arrangement in SAS environments can be non-intuitive. This paper starts by outlining several general architectures of shared accounts in Kerberized Hadoop environments. It then presents possible methods of managing such shared account access in SAS environments, including specific implementation details, code samples, and security implications. Finally, troubleshooting methods are presented for when issues arise. Example code and configurations for this paper were developed on a SAS 9.4 system running on Red Hat Enterprise Linux 6.

What Just Happened? A Visual Tool for Highlighting Differences between Two Data Sets. Base SAS includes a great utility for comparing two data sets: PROC COMPARE. The output, though, can be hard to read, because the differences between values are listed separately for each variable, and it is hard to see the differences across all variables for the same observation. This talk presents a macro that compares two SAS data sets and displays the differences in Excel. PROC COMPARE's OUT= option creates an output data set with all the differences. That data set is then processed with PROC REPORT using ODS EXCEL and colour highlighting to show the differences in an Excel file, making them easy to see.

Tips and Tricks for Producing Time-Series Cohort Data. Developers working on a production process need to think carefully about ways to avoid future changes that require change control, so it is always important to make the code dynamic rather than hardcoding items into it. Even if you are a seasoned programmer, the hardcoded items might not always be apparent. This paper assists in identifying the harder-to-reach hardcoded items and addresses ways to effectively use control tables within the SAS® software tools to deal with sticky areas of coding such as formats, parameters, grouping/hierarchies, and standardization. The paper presents examples of several ways to use control tables and demonstrates why this usage prevents the need for coding changes. Practical applications are used to illustrate these examples.

The Power of the Function Compiler: PROC FCMP. PROC FCMP, the user-defined function procedure, allows SAS users of all levels to get creative with SAS and expand their scope of functionality. PROC FCMP is the superhero of SAS functions in its vast capability to create and store uniquely defined functions that can later be used in DATA steps. This paper outlines the basics, as well as tips and tricks, for getting the most out of this procedure.
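A tiny end-to-end FCMP example, storing one user-defined function and calling it from a DATA step:

```sas
proc fcmp outlib=work.funcs.demo;
  function bmi(weight_kg, height_m);       /* user-defined function */
    return (weight_kg / height_m**2);
  endsub;
run;

options cmplib=work.funcs;   /* tell the DATA step where to find it */

data _null_;
  b = bmi(70, 1.75);
  put b= 5.1;
run;
```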
Creating Viable SAS® Data Sets from Survey Monkey® Transport Files. Survey Monkey is an application that provides a means for creating online surveys. Unfortunately, the transport (Excel) file from this application requires a complete overhaul in order to do any serious data analysis. Besides having a peculiar structure and containing extraneous data points, the column headers become very problematic when the file is imported into SAS; in fact, the initial SAS data set is virtually unusable. This paper explains a systematic approach for creating a viable SAS data set for serious analysis.

Document and Enhance Your SAS® Code, Data Sets, and Catalogs with SAS Functions, Macros, and SAS Metadata (Roberta Glass and Louise Hadden). Discover how to document your SAS® programs, data sets, and catalogs with a few lines of code that include SAS functions, macro code, and SAS metadata. Do you start every project with the best of intentions to document all of your work, and then fall short of that aspiration when deadlines loom? Learn how your programs can automatically update your processing log. If you have ever wondered who ran a program that overwrote your data, SAS has the answer! And if you don't want to be tracing back through a year's worth of code to produce a codebook for your client at the end of a contract, SAS has the answer!

Don't Get Blindsided by PROC COMPARE. For a statistical programmer in the pharmaceutical industry, each work day is new. A project you have been working on for a few months can change at a moment's notice, and you need to implement the changes quickly and accurately. To ensure that the desired changes are made quickly, and most especially accurately, when the task entails a find-and-replace operation across all the SAS programs in a directory (or multiple directories), a macro called "Replacer" can come to the rescue. The process flow: first, it reads all the SAS programs in a directory one by one and converts every SAS program to a SAS data set using grepline. It then reads the data sets one by one, replacing the existing string with the desired string using IF-THEN conditional logic. Finally, it outputs each updated SAS data set as a new SAS program at the specified location. The macro has multiple parameters you can set (the input directory, the output directory, and the from and to strings), which gives the programmer more control over the process. A quick example of its practical use: when making the transition from a Windows to a UNIX server, we needed to make sure we changed the path of our init.sas and changed all backslashes (\) to forward slashes (/). Assume we have 100 programs and decide to do this manually. It is a cumbersome task, and given time constraints, accuracy is not guaranteed; the programmer may end up spending a couple of hours completing the necessary changes before re-running all the programs to make sure the appropriate changes have taken place. Replacer can accomplish the same task in less than 2 minutes.

Ditch the Data Memo: Using Macro Variables and Outer Union Corresponding in PROC SQL to Create Data Set Summary Tables. Data set documentation is essential to good programming practice and to sharing data set information with colleagues who are not SAS programmers. However, most SAS programmers dislike writing memos that must be updated every time a data set is manipulated. Utilizing two tools, macro variables and the OUTER UNION CORRESPONDING set operator in PROC SQL, we can write concise code that exports a single summary table containing important data set information, serving in lieu of data memos. These summary tables can contain the following data set information and much more: 1) the change in the number of records in a data set due to dropping records, collapsing across IDs, or removing duplicate records, 2) summary statistics of key variables, and 3) trends across time. This presentation requires some basic understanding of macros and SQL queries.
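The OUTER UNION CORRESPONDING piece stacks per-step summary rows into one table by matching column names. A minimal sketch with hypothetical table names:

```sas
proc sql;
  create table data_memo as
  select 'Raw records' as step length=40, count(*) as n
    from raw
  outer union corresponding
  select 'After de-duplication' as step length=40, count(*) as n
    from deduped;
quit;
```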
File Management Using Pipes and X Commands in SAS®. SAS for Windows can be an extremely powerful piece of software, not only for analyzing data, but also for organizing and maintaining output and permanent data sets. By employing pipes and operating system ('X') commands within a SAS session, you can easily and effectively manage files of all types stored on your local network.

Handling Longitudinal Data from Multiple Sources: Experience with Analyzing Kidney Disease Patients (Elani Streja and Melissa Soohoo). Analyses in health studies using multiple data sources often come with a myriad of complex issues, such as missing data, merging multiple data sources, and date matching. Combining multiple data sources is not straightforward, as there is often discordant or missing information, such as dates of birth, dates of death, and even demographic information such as sex, race, ethnicity, and pre-existing comorbidities. It therefore becomes essential to document the data source from which the variable information was retrieved. Analysts often rely on one resource as the dominant variable to use in analyses and ignore information from other sources. Sometimes, even the database thought to be the "gold standard" is in fact discordant with other data sources. In order to increase sensitivity and information capture, we created a source variable that reflects the combination of sources on which the data were concordant and from which they were derived. In our example, we show how to resolve information on date of birth, date of death, date of transplant, sex, and race combined from 3 data sources with information on kidney disease patients. These 3 sources are: the United States Renal Data System, the Scientific Registry of Transplant Recipients, and data from a large dialysis organization. This paper focuses on approaches to handling multiple large databases in preparation for analyses. In addition, we show how to summarize and prepare longitudinal lab measurements (from multiple sources) for use in analyses.

An Array of Fun: Macro Variable Arrays. Like all skilled tradespeople, SAS® programmers have many tools at their disposal, and part of their expertise lies in knowing when to use each tool. In this paper, we use a simple example to compare several common approaches to generating a requested report: the TABULATE, TRANSPOSE, REPORT, and SQL procedures. We investigate the advantages and disadvantages of each method and consider when applying it might make sense. A variety of factors are examined, including the simplicity, reusability, and extensibility of the code, in addition to the opportunities each method provides for customizing and styling the output. The intended audience is beginning to intermediate SAS programmers.
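For a taste of that comparison, here is the same toy summary (counts by SEX in SASHELP.CLASS) in two of the four approaches:

```sas
proc tabulate data=sashelp.class;
  class sex;
  table sex, n;
run;

proc sql;
  select sex, count(*) as n
  from sashelp.class
  group by sex;
quit;
```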
Something Old, Something New: Flexible Reporting with DATA Step-Based Tools. The report looks simple enough: a bar chart and a table, like something created with the GCHART and REPORT procedures. But there are some twists to the reporting requirements that make those procedures not quite flexible enough. The solution was to mix "old" and "new" DATA step-based techniques to solve the problem: Annotate data sets are used to create the bar chart, and the Report Writing Interface (RWI) is used to create the table. Without a whole lot of additional code, an extreme amount of flexibility is gained. The goal of this paper is to use a specific example to illustrate a couple of generic principles of programming (at least in SAS®): 1. The tools you choose are not always the most obvious ones. So often, out of habit or comfort level, we get zeroed in on specific tools for reporting tasks. Have you ever heard anyone say, "I use TABULATE for everything" or "Isn't PROC REPORT wonderful, it can do anything"? While these tools are great (I've written papers on their use), it's very easy to get into a rut, squeezing out results that might have been produced more easily, flexibly, or effectively with something else. 2. It's often easier to make your data fit your reporting than to make your reporting fit your data. It always takes data to create a report, and it's very common to let the data drive the report development. We struggle and fight to get the reporting procedures to work with our data, and there are numerous examples of complicated REPORT or TABULATE code that works around the structure of the data. However, the data manipulation tools in SAS (DATA step, SQL, procedure output) can often be used to preprocess the data to make the report code significantly simpler and easier to maintain and modify.

Proc Document, the Powerful Utility for ODS Output. The DOCUMENT procedure is a little-known procedure that can save you vast amounts of time and effort when managing the output of your SAS® programming efforts. This procedure is deeply associated with the mechanism by which SAS controls output in the Output Delivery System (ODS). Have you ever wished you didn't have to modify and rerun the report-generating program every time there was some tweak in the desired report? PROC DOCUMENT enables you to store one version of the report as an ODS document object and then call it out in many different output forms, such as PDF, HTML, listing, RTF, and so on, without rerunning the code. Have you ever wished you could extract those pages of the output that apply to certain BY variables, such as State, StudentName, or CarModel? With PROC DOCUMENT, you have WHERE capabilities to extract them. Do you want to customize the table of contents that assorted SAS procedures produce when you make frames for the table of contents with HTML, or use the facilities available for PDF? PROC DOCUMENT enables you to get to the inner workings of ODS and manipulate them. This paper addresses PROC DOCUMENT from the viewpoint of end results, rather than providing a complete technical review of how to do the task at hand. The emphasis is on the benefits of using the procedure, not on detailed mechanics. A number of practical applications are presented for everyday, real-life challenges that arise in manipulating output in HTML, PDF, and RTF formats.

A SAS Macro for Quick Descriptive Statistics. Arguably the most required table in publications is the description-of-the-sample table, fondly referred to among statisticians as "Table 1". This table displays means and standard errors, medians and IQRs, and counts and percentages for the variables in the sample, often stratified by some variable of interest (e.g., disease status, recruitment site, sex, etc.). While this table is extremely useful, constructing it can be time consuming and, frankly, rather boring. I will present two SAS macros that facilitate the creation of Table 1. The first is a "quick and dirty" macro that outputs the results for Table 1 in nearly every situation. The second is a "pretty" macro that outputs a well-formatted Table 1 for a specific situation.
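The author's macros are not reproduced here, but a stripped-down flavor of the "quick and dirty" variant could be as simple as wrapping PROC FREQ and PROC MEANS (hypothetical macro and variable lists):

```sas
%macro table1(data=, catvars=, contvars=, by=);
  proc freq data=&data;
    tables (&catvars) * &by;               /* counts and percentages */
  run;
  proc means data=&data mean std median q1 q3 maxdec=1;
    class &by;
    var &contvars;                         /* means, SDs, medians, IQRs */
  run;
%mend table1;

%table1(data=study, catvars=sex race, contvars=age bmi, by=group)
```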
We can name these colors using techniques which include color wheels, RGB (Red, Green, Blue) HEX codes, and HLS (Hue, Lightness, Saturation) HEX codes. But sometimes I just want to use a color by name. When I want purple, I want to be able to ask for purple, not CX703070 or H03C5066. But am I limiting myself to just one purple? What about light purple or pinkish purple? Do those colors have names, or must I use the codes? It turns out that they do have names. Names that we can use. Names that we can select, names that we can order, names that we can use to build our graphs and reports. This paper will show you how to gather color names and manipulate them so that you can take advantage of your favorite purple, be it 'purple', 'grayish purple', 'vivid purple', or 'pale purplish blue'. Much of the control will be obtained through the use of user-defined formats. Learn how to build these formats based on a data set containing a list of these colors. Tweaking your tables: Suppressing superfluous subtotals in PROC TABULATE PROC TABULATE is a great tool for generating cross-tab style reports. It's very flexible but has a few annoying limitations. One is suppressing superfluous subtotals. The ALL keyword creates a total or subtotal for the categories in one dimension. However, if there is only one category in the dimension, the subtotal is still shown, which really just repeats the detail line again. This can look a bit strange. This talk demonstrates a method to suppress those superfluous totals by saving the output from PROC TABULATE using the OUT= option. That data set is then reprocessed to remove the undesirable totals using the _TYPE_ variable, which identifies the total rows. PROC TABULATE is then run again against the reprocessed data set to create the final table (see the sketch below). Indenting with Style Within the pharmaceutical industry, many SAS programmers rely heavily on Proc Report. While it is used extensively for summary tables and listings, it is more typical that all processing is done prior to the final report procedure rather than using some of its internal functionality. In many of the typical summary tables, some indenting is required. This may be required to combine information into a single column in order to gain more printable space (as is the case with many treatment group columns). It may also be to simply make the output more aesthetically pleasing. A standard approach is to pad a character string with spaces to give the appearance of indenting. This requires pre-processing of the data as well as the use of the ASIS=ON style attribute in the column style. While this may be sufficient in many cases, it fails for longer text strings that require wrapping within a cell. Alternative approaches that conditionally utilize the INDENT and LEFTMARGIN options of a column style are presented. This Quick-tip presentation will describe such options for indenting. Example outputs will be provided to demonstrate the pros and cons of each. The use of Proc Report and ODS is required in this application using SAS 9.4 in a Windows environment. SAS® Office Analytics: An Application In Practice Data Monitoring and Reporting Using Stored Process Mansi Singh, Kamal Chugh, Chaitanya Chowdagam and Smitha Krishnamurthy Time becomes a big factor when it comes to ad-hoc reporting and real-time monitoring of data while the project work is in full swing. There are always numerous urgent requests from various cross-functional groups regarding the study progress.
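The subtotal-suppression method for PROC TABULATE described above might be sketched as follows (the variables and the _TYPE_ value to drop are illustrative):

  /* Save the table, drop the unwanted total rows, and re-run TABULATE */
  proc tabulate data=sashelp.class out=tab1;
    class sex age;
    var height;
    table sex*age all, height*(n mean);
  run;

  data tab1_trim;
    set tab1;
    if _type_ ne '00';   /* '00' marks the ALL (grand-total) rows here */
  run;

The final step, not shown, is to run PROC TABULATE again against TAB1_TRIM.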
Typically a programmer has to work on these requests along with the study work, which can become stressful. To address this growing need for real-time monitoring of data and to tailor the requirements to create portable reports, SAS® has introduced a powerful tool called SAS Office Analytics. SAS Office Analytics with the Microsoft® Add-In provides excellent real-time data monitoring and report-generating capabilities with which a SAS programmer can take ad-hoc requests and data monitoring to the next level. Using this powerful tool, a programmer can build interactive customized reports as well as give access to study data, and anyone with knowledge of Microsoft Office can then view, customize, and/or comment on these reports within Microsoft Office with the power of SAS running in the background. This paper will be a step-by-step guide demonstrating how to create these customized reports in SAS and access study data using the Microsoft Office Add-In feature. Getting it done with PROC TABULATE From state-of-the-art research to routine analytics, the Jupyter Notebook offers an unprecedented reporting medium. Historically, tables, graphics, and other output had to be created separately and integrated into a report piece by piece amidst the drafting of the text. The Jupyter Notebook interface allows for the creation of code cells and markdown cells in any kind of arrangement. While the markdown cells admit all the typical sorts of formatting, the code cells can be used to run code within and throughout the document. In this way, report creation happens naturally and in a completely reproducible way. Handing a colleague a Jupyter Notebook file to be re-run or revised is much easier and simpler than passing along at least two files: the code and the text. With the new SAS® kernel for Jupyter, all of this is possible, and more. Clinton vs. Trump 2016: Analyzing and Visualizing Sentiments towards Hillary Clinton and Donald Trump's Policies Sid Grover and Jacky Arora The United States 2016 presidential election has seen unprecedented media coverage, numerous presidential candidates, and acrimonious debate over wide-ranging topics from candidates of both the Republican and the Democratic Party. Twitter is a dominant social medium for people to understand, express, relate, and support the policies proposed by their favorite political leaders. In this paper, we aim to analyze the overall sentiment of the public towards some of the policies proposed by Donald Trump and Hillary Clinton using Twitter feeds. We have started to extract the live streaming data from Twitter. So far, we have extracted about 200,000 Twitter feeds, accessing the live stream API of Twitter using a Java program, mytwitterscraper, an open-source real-time Twitter scraper. We will use SAS® Enterprise Miner and SAS® Sentiment Analysis Studio to describe and assess how people are reacting to each candidate's stand on issues such as immigration, taxes, and so on. We will also track and identify patterns of sentiments shifting across time (from March to June) and geographic regions. Donor Sentiment Analysis of Presidential Primary Candidates Using SAS In this paper, we explore the advantages of using the DS2 procedure over DATA step programming in SAS®. DS2 is a new SAS proprietary programming language appropriate for advanced data manipulation. We explore the use of PROC DS2 to execute queries in databases using FedSQL from within the DS2 program.
Several DS2 language elements accept embedded FedSQL syntax, and the run-time generated queries can exchange data interactively between DS2 and supported databases. This enables SQL preprocessing of input tables, which effectively allows processing data from multiple tables in different databases within the same query, thereby drastically reducing processing times and improving performance. We explore the use of DS2 for creating tables, bulk loading tables, manipulating tables, and querying data in an efficient manner. We explore the advantages of using PROC DS2 over DATA step programming, such as support for different data types, ANSI SQL types, programming structure elements, and the benefits of using new expressions or writing one's own methods or packages available in the DS2 system. We also explore the high-performance version of the DS2 procedure, PROC HPDS2, and show how one can submit DS2 language statements for execution to either a single machine running multiple threads or to a distributed computing environment, including the SAS LASR Analytic Server, thereby massively reducing processing times and improving performance. The DS2 procedure enables users to submit DS2 language statements from a Base SAS session. The procedure enables requests to be processed by the DS2 data access technology, which supports a scalable, threaded, high-performance, and standards-based way to access, manage, and share relational data. In the end, we empirically measure the performance benefits of using PROC DS2 over PROC SQL for processing queries in-database by taking advantage of threaded processing in supported databases such as Oracle. Social Media, Anonymity, and Fraud: HP Forest Node in SAS® Enterprise Miner™ You may encounter people who used SAS® long ago (perhaps in university) or through very limited use in a job. Some of these people with limited knowledge or experience think that the SAS system is "just a statistics package" or "just a GUI", the latter usually a reference to SAS® Enterprise Guide® or, if a dated reference, to (legacy) SAS/AF® or SAS/FSP® applications. The reality is that the modern SAS system is a very large, complex ecosystem, with hundreds of software products and a diversity of tools for programmers and users. This poster provides a set of diagrams and tables that illustrate the complexity of the SAS system from the perspective of a programmer. Diagrams and illustrations provided here include: the different environments that program code can run in; cross-environment interactions and related tools; SAS Grid and parallel processing; SAS running with files in memory (the legacy SASFILE statement and big data/Hadoop); and code that can run in-database. We end with a tabulation of the many programming languages and SQL dialects that are directly or indirectly supported within SAS. Hopefully the content of this poster will inform those who think that SAS is an old, dated statistics package or just a simple GUI. Leadership: More than Just a Position Laws of Programming Leadership As someone studying statistics in the data science era, more and more emphasis is put on striking graphs. Data is no longer displayed with a black-and-white boxplot. Using the SAS® macro facility and the Statistical Graphics procedures, you can animate graphs to turn an outdated two-variable graph into a graph in motion that shows not only a relation between factors but also a change over time.
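A minimal PROC DS2 sketch of the kind of data program discussed in the DS2 abstract above (the computed variable and threshold are illustrative; no database connection is shown):

  proc ds2;
    data work.tall / overwrite=yes;
      dcl double bmi;                /* declared ANSI-style data type */
      method run();
        set sashelp.class;
        bmi = (weight / (height * height)) * 703;  /* BMI from lb and in */
        if bmi > 20 then output;
      end;
    enddata;
    run;
  quit;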
An even simpler approach for bubble graphs is to use a function in JMP to create colorful moving plots, which would typically require many lines of code, with just a few clicks of the mouse. Sentiment Analysis of Opinions about Self-driving cars Swapneel Deshpande and Nachiket Kawitkar Self-driving cars are no longer a futuristic dream. In the recent past, Google launched a prototype of the self-driving car, while Apple is also developing its own self-driving car. Companies like Tesla have just introduced an Autopilot feature in the newer versions of their electric cars, which has created quite a buzz in the car market. This technology is said to enable aging or disabled people to drive around without being dependent on anyone, while also potentially affecting the accident rate due to human error. But many people are still skeptical about the idea of self-driving cars, and that's our area of interest. In this project, we plan to do sentiment analysis on thoughts voiced by people on the Internet about self-driving cars. We obtained the data from CrowdFlower's Data for Everyone library, which contains reviews about self-driving cars. Our dataset contains 7,156 observations and 9 variables. We plan to do a descriptive analysis of the reviews to identify key topics and then use supervised sentiment analysis. We also plan to track and report how the topics and the sentiments change over time. An Analysis of the Repetitiveness of Lyrics in Predicting a Song's Popularity In the interest of understanding whether there is a correlation between the repetitiveness of a song's lyrics and its popularity, the top ten songs from the year-end Billboard Hot 100 Songs chart from 2002 to 2015 were collected. These songs then had their lyrics assessed to determine the count of the top ten words used. These word counts were then used to predict the number of weeks the song was on the chart. The prediction model was analyzed to determine the quality of the model and whether word count is a significant predictor of a song's popularity (a sketch of such a model appears below). To investigate whether song lyrics are becoming more simplistic over time, several tests were completed to see whether the average word counts have been changing over the years. All analysis was completed in SAS® using various PROCs. Regression Analysis of the Levels of Chlorine in the Public Water Supply in Orange County, FL This conference provides a range of events that can benefit any and all SAS users. However, sometimes the extensive schedule can be overwhelming at first glance. With so many things to do and people to see, I have compiled the advice I was given as a novice WUSS, along with lessons I've learned since. This presentation will provide a catalog of tips for making the most of anyone's conference experience. From volunteering to the elementary advice of sitting at a table where you do not know anyone's name, listeners will be excited to take on all that WUSS offers. Patients with Morbid Obesity and Congestive Heart Failure Have Longer Operative Time and Room Time in Total Hip Arthroplasty More and more patients undergoing total hip arthroplasty have obesity, and previous studies have shown a positive correlation between obesity and increased operative time in total hip arthroplasty. But those studies shared the limitation of small sample sizes.
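For the lyric-repetitiveness study above, the prediction step could be as simple as an ordinary regression; this sketch assumes a hypothetical dataset SONGS with variables WEEKS_ON_CHART and TOP10_WORD_COUNT, which are not from the paper:

  proc reg data=songs;
    model weeks_on_chart = top10_word_count;  /* word count as predictor */
  run;
  quit;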
Decreasing operative time and room time is essential to meeting the increased demand for total hip arthroplasty, and factors that influence these metrics should be quantified to allow for targeted reductions in time and adjusted reimbursement models. This study uses a multivariate approach to identify which factors increase operative time and room time in total hip arthroplasty. For the purposes of this study, the American College of Surgeons National Surgical Quality Improvement Program database was used to identify a cohort of over thirty thousand patients having total hip arthroplasty between 2006 and 2012. Patient demographics, comorbidities including body mass index, and anesthesia type were used to create generalized linear models identifying independent predictors of increased operative time and room time. The results showed that morbid obesity (body mass index > 40) independently increased operative time by 13 minutes and room time by 18 minutes. Congestive heart failure led to the greatest increase in overall room time, resulting in a 20-minute increase. Anesthesia method further influenced room time, with general anesthesia resulting in an increased room time of 18 minutes compared with spinal or regional anesthesia. Obesity is the major driver of increased operative time in total hip arthroplasty. Congestive heart failure, general anesthesia, and morbid obesity each lead to substantial increases in overall room time, with congestive heart failure leading to the greatest increase. All analyses were conducted in SAS (version 9.4; SAS Institute, Cary, NC). Using SAS: Monte Carlo Simulations of Manufactured Goods - Should-Cost Models Should-cost modeling, or "cleansheeting", of manufactured goods or services is a valuable tool for any procurement group. It provides category managers a foundation to negotiate, test, and drive value added/value engineering ideas. However, an entire negotiation can be derailed by a supplier arguing that certain assumptions or inputs are not reflective of what they are currently seeing in their plant. The most straightforward resolution to this issue is a Monte Carlo simulation of the cleansheet. This enables the manager to prevent derailing supplier tangents by providing them with information on how each input affects the model as a whole and the resulting costs. In this ePoster, we will demonstrate a method for employing a Monte Carlo simulation on manufactured goods (a minimal sketch follows this passage). This simulation will cover all of the direct costs associated with production (labor, machine, material) as well as the indirect costs (i.e., overhead). Using SAS, this simulation model will encompass 60 variables from nine discrete manufacturing processes and will be set to automatically output the information most relevant to the category manager. Making Prompts Work for You: Using SAS Enterprise Guide Prompts with Categorization of Output Edward Lan and Kai-Jen Cheng In statistical and epidemiology units of public health departments, SAS code is often reused across a variety of different projects for data cleaning and generation of output datasets from the databases. Each SAS user will copy and paste common SAS code into their own programs and use it to generate datasets for analysis. To simplify this process, SAS Enterprise Guide (EG) prompts can be used to eliminate the need for the user to edit the SAS code or copy and paste.
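A minimal sketch of the Monte Carlo cleansheet idea referenced above; all distributions and parameters are illustrative assumptions, not values from the ePoster:

  data cost_sim;
    call streaminit(27);
    do rep = 1 to 10000;
      labor    = rand('normal', 14.0, 1.5);   /* $ per unit, assumed   */
      material = rand('normal', 32.0, 4.0);
      overhead = 0.18 * (labor + material);   /* indirect cost loading */
      total    = labor + material + overhead;
      output;
    end;
  run;

  proc univariate data=cost_sim noprint;
    var total;
    output out=pct pctlpts=5 50 95 pctlpre=p; /* cost range to negotiate from */
  run;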
Instead, the user will be able to enter the desired directory, date ranges, and desired variables to be included in the dataset. In the event of large datasets, however, it is beneficial for these variables to be grouped into categories instead of having the user individually choose the desired variables or lumping all the variables into the final dataset. Using the SAS EG prompt for static lists, where the SAS user selects multiple values, variable categories can be created for selection, so that groups of variables are selected into the dataset. In this paper for novice and intermediate SAS users, we will discuss how macros and SAS EG prompts, using EG 7.1, can be used to automate the process of generating an output dataset where the user selects a folder directory, date ranges, and categories of variables to be included in the final dataset. Additionally, the paper will explain how to overcome issues with integrating the categorization prompt with generating the output dataset. Application of Data Mining Techniques for Determining Factors Associated with Overweight and Obesity Among California Adults This paper describes the application of supervised data mining methods using SAS Enterprise Miner 12.3 on data from the 2013-2014 California Health Interview Survey (CHIS), in order to better understand obesity and the indicators that may predict it. CHIS is the largest health survey ever conducted in any state; it samples California households through random-digit dialing (RDD). Enterprise Miner was used to apply logistic regression, decision tree, and neural network models to predict a binary variable, Overweight/Obese Status, which indicates whether an individual has a Body Mass Index (BMI) greater than 25. These models were compared to assess which categories of information, such as demographic factors or insurance status, and individual factors like race, best predict whether an individual is overweight/obese or not. The Orange Lifestyle If you are like many SAS users, you have worked with the classical "old" SAS graphics procedures for some time and are very comfortable with the code syntax, workflow, approach, etc., that make for reasonably simple creation of presentation graphics. Then all of a sudden, a job requires the capabilities of the procedures in SAS ODS Graphics. At first glance you may be thinking, "OK, a few more procedures to learn and a little syntax to learn." Then you realize that moving yourself into this arena is no small task. This presentation will overview the options and approaches that you might take to get up to speed fast. Included will be decision trees to be followed in deciding upon a course of action. This paper contains many examples of very simple ways to get very simple things accomplished. Over 20 different graphs are developed using only a few lines of code each, using data from the SASHELP data sets. The usage of the SGPLOT, SGPANEL, and SGSCATTER procedures is shown. In addition, the paper addresses those situations in which the user must alternatively use a combination of the TEMPLATE and SGRENDER procedures to accomplish the task at hand. Most importantly, the use of the ODS Graphics Designer as a teaching tool and a generator of sample graphs and code is covered. A single slide in the presentation overviewing the ODS Designer shows everything needed to generate a very complex graph. The emphasis in this paper is the simplicity of the learning process.
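In the spirit of the ODS Graphics abstract above, one of the "few lines of code" examples might look like this (any SASHELP data set works):

  proc sgplot data=sashelp.class;
    scatter x=height y=weight / group=sex;  /* grouped scatter in three lines */
  run;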
Users will be able to take the included code and run it immediately on their personal machines to achieve an instant sense of gratification. The paper also addresses the "ODS Sandwich" for creating output and the use of Proc Document to manipulate it. Exploring Multidimensional Data with Parallel Coordinate Plots Throughout the many phases of an analysis, it may be more intuitive to review data statistics and modeling results as visual graphics rather than numerical tables. This is especially true when an objective of the analysis is to build a sense of the underlying structures within the data rather than describe the data statistics or model results with numerical precision. Although scatterplots provide a means of evaluating relationships, their two-dimensional nature may be limiting when exploring data across multiple dimensions simultaneously. One tool for exploring multivariate data is the parallel coordinate plot. I will present a method of producing parallel coordinate plots using PROC SGPLOT and will provide examples of when parallel coordinate plots may be very informative. In particular, I will discuss their application to an analysis of longitudinal observational data and results from unsupervised classification techniques. Making SAS the Easy Way Out: Harnessing the Power of PROC TEMPLATE to Create Reproducible, Complex Graphs With high-pressure deadlines and mercurial collaborators, creating graphs in the most familiar way seems like the best option. Using post-processing programs like Photoshop or Microsoft PowerPoint to modify graphs is quicker and easier for the novice SAS user or for one's collaborators to do on their own. However, reproducibility is a huge issue in the scientific community. Any changes made outside statistical software need to be repeated when collaborator preferences change, the data change, the journal requires additional elements, or for a host of other reasons, and the likelihood of making errors increases along with the time spent making the figure. Learning PROC TEMPLATE allows one to seamlessly create complex, automatically generated figures and eliminates the need for post-processing. This paper will demonstrate how to do complex graph manipulation procedures in SAS 9.3 or later to solve common problems, including lattice panel plots for different variables, split plots and broken axes, weighted panel plots, using select observations in each panel, waterfall plots, and graph annotation. The examples presented are healthcare-based, but the methods are applicable to finance, business, and education. Attendees should have a basic understanding of the macro language, graphing in SAS using SGPLOT, and ODS Graphics. Customizing plots to your heart's content using PROC GPLOT and the annotate facility This paper introduces tips and techniques that can speed up the validation of two datasets. It begins with a brief introduction to PROC COMPARE, then proceeds to introduce some techniques, not requiring automation, that can help speed up the validation process. These techniques are most useful when one validates a pair of datasets for the first time. For the automation part, QCData is used to compare two datasets, and QCDir is used to compare datasets in the production directory against their corresponding datasets in the QC directory. Also introduced is &SYSINFO, a powerful and extremely useful automatic macro variable that holds a value summarizing the result of a comparison.
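A sketch of the validation check just described, pairing PROC COMPARE with &SYSINFO (the PROD and QC libraries are hypothetical; a returned value of 0 means no differences were found):

  proc compare base=prod.ae compare=qc.ae;
  run;

  %put NOTE: PROC COMPARE returned SYSINFO=&sysinfo;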
Combining Reports into a Single File Deliverable In the daily operations of a Biostatistics and Statistical Programming department, we are often tasked with generating reports in the form of tables, listings, and figures (TLFs). A common setting in the pharmaceutical industry is to develop SAS® code in which individual programs generate one or more TLFs in some standard formatted output, such as RTF or PDF, with a common look and feel. As trends move towards electronic review and distribution, there is an increasing demand for producing a single file as the final deliverable rather than sending each output individually. Various techniques have been presented over the years, but they typically require post-processing of individual RTF or PDF files, require a knowledge base beyond SAS, and may require additional software licenses. The use of item stores as an alternative has been presented more recently. Using item stores, SAS stores the data and instructions used for the creation of each report. Individual item stores are restructured and replayed at a later time within an ODS sandwich to obtain a single file deliverable. This single file is well structured, with either a hyperlinked Table of Contents in RTF or a properly bookmarked PDF. All hyperlinks and bookmarks are defined in a meaningful way, enabling the end user to easily navigate through the document. This Hands-on Workshop will introduce the user to creating, replaying, and restructuring item stores to obtain a single file containing a set of tables, listings, and figures. The use of ODS is required in this application using SAS 9.4 in a Windows environment. Getting your Hands on Contrast and Estimate Statements Many SAS users are familiar with modeling with and without random effects through PROC GLM, PROC MIXED, PROC GLIMMIX, and PROC GENMOD. The parameter estimates are great for giving overall effects, but analysts need the CONTRAST and ESTIMATE statements to dig deeper into the model and answer questions such as: "What is the predicted value of my outcome for a given combination of variables?" "What is the estimated difference between groups at a given time point?" or "What is the estimated difference between slopes for two of three groups?" This HOW will provide a step-by-step introduction so that SAS users get more comfortable programming ESTIMATE and CONTRAST statements and finding answers to these types of questions. The hands-on workshop will focus on statements that can be applied to either fixed-effects models or mixed models. Advanced Programming Techniques with PROC SQL Kirk Paul Lafler The SQL procedure contains a number of powerful and elegant language features for SQL users. This hands-on workshop (HOW) emphasizes highly valuable and widely usable advanced programming techniques that will help users of Base SAS harness the power of the SQL procedure. Topics include using PROC SQL to identify FIRST.row, LAST.row, and Between.rows in BY-group processing; constructing and searching the contents of a value-list macro variable for a specific value; data validation operations using various integrity constraints; data summary operations to process down rows and across columns; and using the MSGLEVEL= system option and METHOD SQL option to capture vital processing information and the algorithm selected and used by the optimizer when processing a query.
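One of the PROC SQL workshop topics above, building and searching a value-list macro variable, can be sketched as follows:

  proc sql noprint;
    select distinct name
      into :namelist separated by ' '     /* value-list macro variable */
      from sashelp.class;
  quit;

  %put &=namelist;
  %put Position of Alice in the list: %sysfunc(findw(&namelist, Alice));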
How to analyze correlated and longitudinal data The United States Food and Drug Administration (FDA) requires an annotated Case Report Form (aCRF) to be submitted as part of the electronic data submission for every clinical trial. The aCRF is a PDF document that maps the captured data in a clinical trial to their corresponding variable names in the Study Data Tabulation Model (SDTM) datasets. The SDTM Metadata Submission Guidelines recommend that the aCRF be bookmarked in a specific way. A one-to-one relationship between the bookmarks and aCRF forms is not typical; one form may have two or more bookmarks. Therefore, the number of bookmarks can easily reach thousands in any study. Generating the bookmarks manually is a tedious, time-consuming job. This paper presents an approach to automate the entire bookmark generation process by using SAS® 9.2 and later releases, Ghostscript (a PDF editing tool), and the linkages between forms and their corresponding visits. This approach could potentially save tremendous amounts of time, and the eyesight of programmers, while reducing the potential for human error. Did the Protocol Change Work? Interrupted Time Series Evaluation for Health Care Organizations Carol Conell and Alexander Flint Background: Analysts are increasingly asked to evaluate the impact of policy and protocol changes in healthcare, as well as in education and other industries. Often the request occurs after the change is implemented, and the objective is to provide an estimate of the effect as quickly as possible. This paper demonstrates how we used time series models to estimate the impact of a specific protocol change using data from the electronic health record (EHR). Although the approach is well established in econometrics, it remains much less common in healthcare; the paper is designed to make this technique available to intermediate-level SAS programmers. Methods: The paper introduces the time series framework, terminology, and advantages to users with no previous experience using time series. It illustrates how SAS/ETS can be used to fit an interrupted time series model to evaluate the impact of a one-time protocol change based on a real-world example from Kaiser Northern California. Macros are provided for creating a time series database, fitting basic ARMA models using PROC ARIMA, and comparing models (a minimal PROC ARIMA sketch follows this passage). Once the simple time-series structure is identified for this example, heterogeneity in the effect of the intervention is examined using data from subsets of patients defined by the severity of their presentation. This shows how the aggregated approach can allow exploration of effect heterogeneity. Conclusions: Aggregating data and applying time series methods provides a simple way to evaluate the impact of protocol changes and similar interventions. When the timing of these interventions is well defined, this approach avoids the need to collect substantial data on individual-level confounders and the problems associated with selection bias. If the effect is immediate, the approach requires only a moderate number of time points. Finding Strategies for Credit Union Growth without Mergers or Acquisitions In this era of mergers and acquisitions, community banks and credit unions often believe that bigger is better, that they can't survive if they stay small. Using 20 years of industry data, we disprove that notion for credit unions, showing that even small ones can grow slowly but strongly on their own, without merging with larger ones.
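Returning to the interrupted time series paper above, a minimal PROC ARIMA sketch might look like this; the dataset MONTHLY_RATES, the series EVENT_RATE, and the 0/1 step variable PROTOCOL marking the change are all illustrative:

  proc arima data=monthly_rates;
    identify var=event_rate crosscorr=(protocol);
    estimate p=1 input=(protocol);   /* ARMA noise plus step intervention */
  run;
  quit;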
We first show how we find this strategy in the data. Then we segment credit unions by size and see how the strategy changes within each segment. Finally, we track the progress of these segments over time and develop a predictive model for any credit union. In the process, we introduce the concept of "High-Performance Credit Unions," which take actions proven to lead to credit union growth. Code snippets will be shown for any version of SAS® but will require the SAS/STAT package. A Case of Retreatment – Handling Retreated Patient Data Sriramu Kundoor and Sumida Urval In certain clinical trials, if the study protocol allows, there are scenarios where subjects are re-enrolled into the study for retreatment. As per CDISC guidelines, these subjects need to be handled in a manner different from non-retreated subjects. The CDISC SDTM Implementation Guide versions 3.1.2 (page 29) and 3.2 (Section 4, page 8) state: "The unique subject identifier (USUBJID) is required in all datasets containing subject-level data. USUBJID values must be unique for each trial participant (subject) across all trials in the submission. This means that no two (or more) subjects, across all trials in the submission, may have the same USUBJID. Additionally, the same person who participates in multiple clinical trials (when this is known) must be assigned the same USUBJID value in all trials." Therefore a retreated subject cannot have two USUBJIDs in spite of being the same person undergoing the trial phase more than once. This paper describes (with suitable examples) a method of handling retreated subject data in the SDTM datasets as per CDISC standards, and at the same time capturing it in such a way that it is easy for the programmer or statistician to analyze the data in ADaM datasets. This paper also discusses the conditions that need to be followed (and the logic behind them) while programming retreated patient data into the different SDTM domains. Why and What Standards for Oncology Studies (Solid Tumor, Lymphoma and Leukemia) Each therapeutic area has its own unique data collection and analysis. Oncology, especially, has particularly specific standards for the collection and analysis of data. Oncology studies are also separated into one of three different subtypes according to response criteria guidelines. The first subtype, the Solid Tumor study, usually follows RECIST (Response Evaluation Criteria in Solid Tumors). The second subtype, the Lymphoma study, usually follows Cheson. Lastly, Leukemia studies follow study-specific guidelines (IWCLL for Chronic Lymphocytic Leukemia, IWAML for Acute Myeloid Leukemia, NCCN Guidelines for Acute Lymphoblastic Leukemia, and ESMO clinical practice guidelines for Chronic Myeloid Leukemia). This paper will demonstrate the notable level of sophistication implemented in CDISC standards, mainly driven by the differentiation across different response criteria. The paper will specifically show which SDTM domains are used to collect the different data points in each type. For example, Solid Tumor studies collect tumor results in TR and TU and response in RS. Lymphoma studies collect not only tumor results and response, but also bone marrow assessment in LB and FA, and spleen and liver enlargement in PE. Leukemia studies collect blood counts (i.e., lymphocytes, neutrophils, hemoglobin, and platelet count) in LB and genetic mutations, as well as what is collected in Lymphoma studies. The paper will also introduce oncology terminologies (e.g.,
CR, PR, SD, PD, NE) and oncology-specific ADaM data sets, such as the Time to Event (TTE) data set. Finally, the paper will show how standards (e.g., response criteria guidelines and CDISC) streamline clinical trial artefact development in oncology studies, and how end-to-end clinical trial artefact development can be accomplished through this standards-driven process. Efficacy Endpoint Analysis Dataset Generation with Two-Layer ADaM Design Model In clinical trial data processing, the efficacy endpoint dataset design and implementation are often the most challenging process to standardize. This paper introduces a two-layer ADaM design method for generating an efficacy endpoint dataset and summarizes the practices from past projects. The two-layer ADaM design method improves not only implementation and review, but validation as well. The method is illustrated with examples. Strategic Considerations for CDISC Implementation Amber Randall and Bill Coar The Prescription Drug User Fee Act (PDUFA) V Guidance mandates eCTD format for all regulatory submissions by May 2017. The implementation of CDISC data standards is not a one-size-fits-all process and can present both a substantial technical challenge and a potentially high cost to study teams. There are many factors that should be considered in strategizing when and how, including timeline, study team expertise, and final goals. Different approaches may be more efficient for brand new studies as compared to existing or completed studies. Should CDISC standards be implemented right from the beginning, or does it make sense to convert data once it is known that the study product will indeed be submitted for approval? Does a study team already have the technical expertise to implement data standards? If not, is it more cost effective to invest in training in-house or to hire contractors? How does a company identify reliable and knowledgeable contractors? Are contractors skilled in SAS programming sufficient, or will they also need in-depth CDISC expertise? How can the work of contractors be validated? Our experience as a statistical CRO has allowed us to observe and participate in many approaches to this challenging process. What has become clear is that a good, informed strategy planned from the beginning can greatly increase efficiency and cost effectiveness and reduce stress and unanticipated surprises. SDD project management tool, real-time and hassle-free: a one-stop shop for study validation and completion rate estimation Do you sometimes feel it is like being an octopus to work on multiple projects as a lead programmer, or find it hard to monitor what's going on? Perhaps you know about Murphy's Law: anything that can go wrong will go wrong. And you will want to be the first one to know it, before anybody else. What's its impact, and what's the downstream process? After pulling the study submission package up to SDD, we developed a working process that collects status information on each program and output. A SAS program then reads in the status report of repository documents and updates the tracker with the timestamps (last modified, last run) of the source and validation programs, of upstream documents (inputs to the program, such as raw data or macros),
and of downstream documents. Features include Pinnacle 21 traffic lighting; time variables pulled from SDD with ordering logic built on them (raw < SDTM < ADaM, Source < Validation); batch log scanning (with estimated time to completion); metadata-level checking; a workflow tying all of the above together; a scheduled job that runs the above tasks in sequence; and a study completion report (and its algorithm). Building Better ADaM Datasets Faster With If-Less Programming One of the major tasks in building ADaM datasets is writing the SAS code that implements the ADaM variables according to an ADaM specification. SAS programmers often find this task tedious, time-consuming, and even prone to error. The main reason the task seems daunting is that a large number of variables have to be created with IF-THEN-ELSE statements in one or more DATA steps at the same time for each ADaM dataset. To address this common issue and ease the process involved, this paper introduces a small set of DATA step inline macros that allow programmers to derive most ADaM variables without using IF-THEN-ELSE statements. With this if-less programming approach, a programmer can make a piece of ADaM implementation code not only easy to read and understand, but also easy to modify as the ADaM specification evolves, and straightforward to reuse in the development of other ADaM datasets or studies (a format-based stand-in for the idea is sketched below). What's more, this approach can be applied to the derivation of ADaM datasets from both SDTM and non-SDTM datasets. What's Hot – Skills for SAS® Professionals Kirk Paul Lafler As a new generation of SAS® users emerges, current and prior generations of users have an extensive array of procedures, programming tools, approaches, and techniques to choose from. This presentation identifies and explores the areas that are hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, the Output Delivery System (ODS), the macro language, DATA step programming techniques such as arrays and hash objects, SAS University Edition software, technical support at support.sas.com, wiki content on sasCommunity.org®, published "white" papers on LexJansen.com, and other venues. Creating Dynamic Documents with SAS® in the Jupyter Notebook to Reinforce Soft Skills Experience with technology and strong computing skills continue to be among the qualifications most desired by employers. Programs in Statistics and other especially quantitative fields have bolstered the programming and software training they impart to graduates. But as these skills become more common, there remains an equally important desire for what are often called "soft skills": communication, telling a story, extracting meaning from data. Through the use of SAS® in the Jupyter Notebook, traditional programming assignments are easily transformed into exercises involving both analytics in SAS and writing a clear report. Traditional reports become dynamic documents which include both text and living SAS® code that gets run during document creation. Students should never just be writing SAS® code again. Contributing to SAS® By Writing Your Very Own Package One of the biggest reasons for the explosive growth of R statistical software in recent years is the massive collection of user-developed packages. Each package consists of a number of functions centered around a particular theme or task, not previously addressed (well) within the software.
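The inline macros from the if-less ADaM paper above are not reproduced here; as a stand-in for the same idea, this sketch replaces an IF-THEN-ELSE chain with a format (the AESEV values and datasets are illustrative):

  proc format;
    value $sevn 'MILD' = '1'  'MODERATE' = '2'  'SEVERE' = '3';
  run;

  data work.adae;
    set work.ae;                                /* hypothetical input  */
    asevn = input(put(aesev, $sevn.), best.);   /* no IF-THEN-ELSE     */
  run;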
While SAS® continues to advance on its own, SAS® users can now contribute packages to the broader SAS® community. Creating and contributing a package is simple and straightforward, empowering SAS® users immensely to grow the software themselves. There is a lot of potential to increase the general applicability of SAS® to tasks beyond statistics and data management, and it's up to you. Collaborations in SAS Programming, or Playing Nicely with Others Kristi Metzger and Melissa R. Pfeiffer SAS programmers rarely work in isolation; rather, they are usually part of a team that includes other SAS programmers such as data managers and data analysts, as well as non-programmers like project coordinators. Some members of the team, including the SAS programmers, may work in different locations. Given these complex collaborations, it is increasingly important to adopt approaches for working effectively and easily in teams. In this presentation, we discuss strategies and methods for working with colleagues in varied roles. We first address file organization (putting things in places easily found by team members), including the importance of numbering programs that are executed sequentially. While documentation is often a neglected activity, we next review the importance of documenting both within SAS and in other forms for the non-SAS users on your team. We also discuss strategies for sharing formats and writing friendly SAS code for seamless work with other SAS programmers. Additionally, data sets are often in flux, and we talk about approaches that add clarity to data sets and their production. Finally, we suggest tips for double-checking another programmer's code and/or output, including the importance of confirming the logic behind variable construction and the use of PROC COMPARE in the confirmation process. Ultimately, adopting strategies that ease working jointly helps when you have to review work you did in the past and makes for a better playground experience with your teammates. A Brief Introduction to WordPress for SAS Programmers WordPress is a free, open-source platform based on PHP and MySQL used to build websites. It is easy to use with a point-and-click user interface. You can write custom HTML and CSS if you want, but you can also build beautiful webpages without knowing anything at all about HTML or CSS. Features include a plugin architecture and a template system. WordPress is used by more than 26.4% of the top 10 million websites as of April 2016. In fact, SAS® blogs (hosted at blogs.sas.com) use the WordPress platform. If you are considering starting a blog to share your love of SAS or to raise the profile of your business and are considering using WordPress, join us for a brief introduction to WordPress for SAS programmers. How to Be a Successful and Healthy Home-Based SAS Programmer in the Pharma/Biotech Industry Daniel Tsui, Parexel International Inc. With the advancement of technology, the tech industry accepts more and more flexible schedules and telecommuting opportunities. In recent years, more statistical SAS programming jobs in the pharma/biotech industry have shifted from office-based to home-based. There have been ongoing debates about how beneficial the shift is.
A lot of room is still available for discussion of the pros and cons of this home-based model. This presentation is devoted to investigating these pros and cons of being a home-based SAS programmer within the pharma/biotech industry. The overall benefits have been proposed in a Microsoft whitepaper based on a survey, Work without Walls, which listed the top 10 benefits of working from home from the employee viewpoint, such as work/home balance, avoiding traffic, higher productivity, and fewer distractions. However, to be a successful home-based SAS programmer in the pharma/biotech industry, some enemies have to be defeated, such as being on call 24 hours, performance issues, solitude, advancement opportunities, and dealing with family. This presentation will discuss some key highlights. Lora Delwiche and Susan Slaughter SAS Studio is an important new interface for SAS, designed for both traditional SAS programmers and for point-and-click users. For SAS programmers, SAS Studio offers many useful features not found in the traditional Display Manager. SAS Studio runs in a web browser. You write programs in SAS Studio, submit the programs to a SAS server, and the results are returned to your SAS Studio session. SAS Studio is included in the license for Base SAS, is the interface for SAS University Edition, and is the default interface for SAS OnDemand for Academics. Both SAS University Edition and SAS OnDemand for Academics are free of charge for non-commercial use. With SAS Studio becoming so widely available, this is a good time to learn about it. An Animated Guide: An introduction to SAS Macro quoting This cartoon-like presentation expands materials in a previous paper (which explained how SAS processes macros) to show how SAS processes macro quoting. It is suggested that the "map of the SAS supervisor" in this cartoon is a very useful paradigm for understanding SAS macro quoting. Boxes on the map are either subroutines or storage areas, and the cartoon allows you to see "quoted" tokens flow through the components of the SAS supervisor as code executes. Basic concepts for this paper are: 1) the map of the SAS supervisor; 2) the idea that certain parts of the map monitor tokens as they pass through; 3) the idea of SAS tokens as rule triggers for actions to be taken by parts of the map; 4) macro masking prevents recognition of tokens and the triggering of rules; and 5) the places in the SAS system where unquoting happens.
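Two of the quoting tools covered by the macro-quoting presentation above can be shown in a few lines: %STR masks tokens such as semicolons at macro compile time, and %NRSTR additionally masks & and %:

  %let stmt = %str(proc print; run;);            /* semicolons do not end the %LET */
  %let text = %nrstr(&name stays unresolved);    /* & is masked, no resolution     */
  %put &=stmt;
  %put &=text;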
