Author Archives: Todd Connelly

About Todd Connelly

Todd enjoys his job as a Statistical Analyst for Sierra Trading Post, where he has been working for over 5 years. Todd completed his M.S. in Applied Statistics and a B.A. in Economics at the University of Northern Colorado. After spending the day sorting through data issues, writing SQL, and fitting models in R, Todd comes home to his loving wife and children. His research interests include reproducible research, missing data, logistic regression, and issues related to big data.

Making a Code Book in Sql Server - Part 2

We have already covered how to add a data dictionary to a SQL Server table by using extended properties (see here). We can go one step further and make a simple stored procedure that gives us quick access to the code book. After building the stored procedure below, we can double click a table name and use a keyboard shortcut. I have mine set to Ctrl+5.
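The procedure itself isn't reproduced here; a minimal sketch of what GetDataDictionary might look like (the body, parameter, and output columns are my assumptions) simply reads the MS_Description extended properties for every column of the table it is given:

-- Sketch of a GetDataDictionary procedure (assumptions, not the original code)
CREATE PROCEDURE dbo.GetDataDictionary
    @TableName SYSNAME
AS
BEGIN
    SET NOCOUNT ON;

    SELECT  c.name                           AS ColumnName,
            t.name                           AS DataType,
            CAST(ep.value AS NVARCHAR(4000)) AS Description
    FROM    sys.columns c
    JOIN    sys.types   t ON t.user_type_id = c.user_type_id
    LEFT JOIN sys.extended_properties ep
            ON  ep.major_id = c.object_id
            AND ep.minor_id = c.column_id
            AND ep.class    = 1
            AND ep.name     = 'MS_Description'
    WHERE   c.object_id = OBJECT_ID(@TableName)
    ORDER BY c.column_id;
END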

Keyboard query shortcuts live in SSMS under Tools > Options > Environment > Keyboard > Query Shortcuts; you can see what my shortcuts look like in the picture below.

If we follow the example in the earlier article for building a code book, we can create a table with the iris data set and add a quick code book. Then, when we highlight the table [iris] and use our keyboard shortcut of Ctrl+5, the code book for the table appears in the results pane.
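For example, a column description added in the style of Part 1 can then be pulled back through the shortcut (the iris column name below is illustrative):

-- Add a description for one column of the iris table
EXEC sp_addextendedproperty
     @name = N'MS_Description',
     @value = N'Sepal length in centimeters',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'iris',
     @level2type = N'COLUMN', @level2name = N'Sepal_Length';

-- Highlighting iris and pressing Ctrl+5 is equivalent to running:
EXEC dbo.GetDataDictionary 'iris';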

Now we can quickly check the definition of a column without having to leave our SSMS window. If you are interested in the other helper stored procedures besides GetDataDictionary, then take a look at this post.


Filed under Productivity, Sql Server

Quick Bar Graph in Sql Server

Have you ever needed a bar graph and didn't want to leave SQL Server? Well, if you don't have SQL Server 2016 yet, I have some code for you. The code below creates a table with the numbers 1-10, samples repeatedly from that table, and makes a histogram of the sample. We should see something approaching a uniform distribution when we are done.
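The original code isn't reproduced here; the sketch below follows the same recipe (temporary table and column names are mine) and draws the bars with REPLICATE:

-- Build a table holding the numbers 1-10
CREATE TABLE #Numbers (n INT);

INSERT INTO #Numbers (n)
SELECT number
FROM   master..spt_values
WHERE  type = 'P' AND number BETWEEN 1 AND 10;

-- Draw 1,000 samples (with replacement) from #Numbers
CREATE TABLE #Sample (n INT);

DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO #Sample (n)
    SELECT TOP 1 n FROM #Numbers ORDER BY NEWID();
    SET @i = @i + 1;
END

-- The histogram: one row per value, bar length = frequency
-- (for a Pareto-style chart, change the final line to ORDER BY COUNT(*) DESC)
SELECT n,
       COUNT(*)                 AS Freq,
       REPLICATE('*', COUNT(*)) AS Bar
FROM   #Sample
GROUP BY n
ORDER BY n;

DROP TABLE #Numbers;
DROP TABLE #Sample;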

 

You should now have a nice little histogram showing you the distribution of your data. Another useful trick is to approach it like a Pareto chart and order the bars by frequency, descending.

[Figure: text histogram produced by the query above]


Filed under Sql Server

Vectorize a Function

I was recently working through a factor analysis and wrote a function to assist in the process. It assigns a label to a number based upon its value. My first attempt worked, but only for one value at a time.
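The original function isn't reproduced here; a rough reconstruction using Kaiser's labels and the cutoffs cited at the end of this post (the function name and exact breakpoints are my guesses) looks like this:

# Label a single KMO/MSA value (reconstruction; name and cutoffs are my guesses)
kmo_label <- function(x) {
  if (x >= 0.9)      "Marvelous"
  else if (x >= 0.8) "Meritorious"
  else if (x >= 0.7) "Middling"
  else if (x >= 0.6) "Mediocre"
  else if (x >= 0.5) "Miserable"
  else               "Unacceptable"
}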

This works.
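With the reconstructed function above, a single value comes back labelled as expected:

kmo_label(0.85)
# [1] "Meritorious"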

This does not work.
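Handing it a whole vector, however, only evaluates the first element of the condition (a warning in older versions of R, an error from R 4.2 on):

kmo_label(c(0.85, 0.45))
# Warning: the condition has length > 1 and only the first element will be used
# [1] "Meritorious"
# (R 4.2 and later stop with an error instead of a warning)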

I should have thought more about the end goal of the function before I started coding, but I didn't. I started searching for good ways to vectorize a function in R. I found there is a function in base R called 'Vectorize'. All I needed was to create a new function using 'Vectorize' and I was done.
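Wrapping the scalar function with base R's Vectorize gives a version that accepts vectors without rewriting anything:

kmo_label_v <- Vectorize(kmo_label)
kmo_label_v(c(0.85, 0.45))
# [1] "Meritorious"  "Unacceptable"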

This allowed me to easily add a column to my data frame containing individual KMO measures and associate a label. I reached out to my fellow blogger Jeremy and he gave me a quick re-write of my original function. Here is his approach.
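I don't have Jeremy's exact code to reproduce here; a natural already-vectorized rewrite in the same spirit uses cut(), something like:

kmo_label2 <- function(x) {
  as.character(cut(x,
                   breaks = c(-Inf, 0.5, 0.6, 0.7, 0.8, 0.9, Inf),
                   labels = c("Unacceptable", "Miserable", "Mediocre",
                              "Middling", "Meritorious", "Marvelous"),
                   right  = FALSE))
}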

We can do a quick check to make sure that we are getting the same output.
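A check along these lines (using the reconstructions above) confirms the two give the same output:

x <- runif(1000)
identical(kmo_label_v(x), kmo_label2(x))
# [1] TRUE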

My next question: is there a major performance difference between the two? I ran a short simulation, summarized in the plot below, which shows that there is not a large difference in performance for the samples tested.
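The simulation code isn't shown here; for a quick comparison of the two, something like the 'microbenchmark' package (my choice, not necessarily what produced the plot) works:

library(microbenchmark)

x <- runif(10000)
microbenchmark(Vectorize = kmo_label_v(x),
               cut       = kmo_label2(x),
               times     = 100)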

[Figure: plot of the timing simulation comparing the two functions]

Have a better way to solve this problem? Post it in the comments below. If you are wondering what this KMO thing is all about, it is the Kaiser-Meyer-Olkin measure of sampling adequacy (MSA), used when conducting exploratory factor analysis (EFA). The cutoffs and names were taken from:

  1. Barbara A. Cerny and Henry F. Kaiser, Multivariate Behavioral Research, Vol. 12, Iss. 1, 1977.


Filed under R_local, Statistics

Tips For Dealing with Large Datasets in Sql Server

Are you dealing with large (100-million-plus row) datasets that live in a SQL database? Have you found your old methods no longer satisfactory? Here are a couple of tips from my own experience.

  1. Do not use count(*) to figure out how big a table is.
  2. Do not use the max function to figure out when the last record was inserted.

1.

I found this out the hard way when I ran Select Count(*) From Table and received an arithmetic overflow error. I was puzzled at first, but after some searching I found the problem. A count in SQL Server returns the datatype int, which can only hold values up to 2^31-1, or 2,147,483,647. The table that I was working with had more than 2.1 billion rows, so that caused a problem. Now, you may be thinking that you could just use a Count_Big instead, but that is probably not the right answer. Try using sp_spaceused instead. If you are interested in turning this into a shortcut for SSMS, look here.
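For instance (the table name is a placeholder), sp_spaceused pulls the row count from the table's metadata and returns almost instantly:

EXEC sp_spaceused 'dbo.MyLargeTable';
-- returns: name, rows, reserved, data, index_size, unused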

2.

Sometimes I need to figure out when the latest record was inserted. Instead of taking a max on a datetime field, I can often get to my answer by using information about an index. Hopefully your table has a unique, auto-incrementing primary key that can aid you in finding the last record inserted. Make sure you understand the process that builds or alters your table, though; it could be that the maximum value of your primary key is not related to the most recent records.

Select max(LocalTimeStamp) From Table ( Slow )

vs

Select LocalTimeStamp From Table Where PrimaryKey = ( Select Max(PrimaryKey) From Table ) ( Quick )

Have some useful tips to add? Please post them in the comments.


Filed under Sql Server

Conference on Statistical Practice 2015 - Day 1

The first day started off with a great lecture on basic software engineering principles that all statisticians should know. Paul Teetor gave the talk "What Can We Learn from Software Engineers?". He covered some basic but very important principles, including:

  • Coding Standards
  • Defensive Programming
  • Version Control
  • Unit Testing

I appreciated his introducing me to the difference between "programming in the small" and "programming in the large". Often I find myself writing code "in the small", and in hindsight that is not great. Paul walked our group through building a basic R package and putting it under version control in less than 20 minutes! So far, this has been my favorite session. The only other thing worth mentioning was a conversation that I had with a representative from Wolfram Alpha. This gentleman was explaining the features of their products and how great their software is. I was listening to his pitch when he caught me off guard with an odd statistic about how many lines of code their software had. It went something like this:

Sales Rep: We have more code than the human genome!

Me: You guys should really write more efficient code.

He quickly explained that their code is efficient, but I had to leave shortly after that to avoid laughing out loud. We ended the first day with a poster session and some socializing. Overall, it was a good first day.


Filed under CSP2015

Checking For Duplicate Records in Sql Server Table

I often find myself needing to check if a table is unique with respect to a particular column. For instance, if I have a table of customer data, I want to ensure that a customer only exists once. There are several ways that you can do this. If I want to write a quick select, it usually looks like this.
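Something along these lines, with placeholder table and column names; any value returned appears more than once:

SELECT CustomerId,
       COUNT(*) AS RecordCount
FROM   dbo.Customers
GROUP BY CustomerId
HAVING COUNT(*) > 1;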

A while back I got tired of writing the above query over and over, so I created a stored procedure (SP). The SP below takes two arguments, a table name and a column name, and will determine whether your column is unique. Currently it is written using a count(*), which is not meant for large tables, i.e. more than 2,000,000,000 rows.
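The original procedure isn't reproduced here; a minimal sketch of the idea (the procedure name and output are my assumptions) compares the total row count to the number of distinct values:

-- Sketch only: schema handling and SQL-injection hardening omitted for brevity
CREATE PROCEDURE dbo.IsColumnUnique
    @TableName  SYSNAME,
    @ColumnName SYSNAME
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT COUNT(*)                                        AS TotalRows,
                 COUNT(DISTINCT ' + QUOTENAME(@ColumnName) + N') AS DistinctValues,
                 CASE WHEN COUNT(*) = COUNT(DISTINCT ' + QUOTENAME(@ColumnName) + N')
                      THEN ''Unique'' ELSE ''Not unique'' END    AS Result
          FROM ' + @TableName + N';';

    EXEC sp_executesql @sql;
END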

There is one more method that I commonly use to determine if my table is unique, and that is an index.
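For example, again with placeholder names, a unique index both verifies and enforces uniqueness; the CREATE fails if duplicates already exist:

CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_CustomerId
    ON dbo.Customers (CustomerId);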

So, now you have three different ways to check your table to make sure that your column is unique. Keep in mind that adding an index will take up additional space, whereas methods 1 and 2 will not.


Filed under Sql Server

Keeping rows containing particular strings in R

I recently needed to filter my dataset down to the rows containing certain strings. I needed to retain any row that had a "utm_source" and a "utm_medium" and a "utm_campaign". Each row in my dataset was a single string, so the idea is to parse the strings of interest. My approach was to use grep and check each string for each condition that I needed it to satisfy. I consulted with my co-blogger to see if he had a more intelligent way of approaching this problem. He tackled it with a regular expression using a look-ahead. You can see my 'checker' function and Jeremy's function 'checker2' below. Both seem to perform the required task correctly, so now it is simply a matter of performance.
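The original functions aren't reproduced here; the sketches below capture the two approaches as described (the function bodies are my reconstruction):

# Keep an element only if it contains all three utm_ parameters
checker <- function(x) {
  grepl("utm_source",   x, fixed = TRUE) &
  grepl("utm_medium",   x, fixed = TRUE) &
  grepl("utm_campaign", x, fixed = TRUE)
}

# The same test as a single look-ahead regular expression
checker2 <- function(x) {
  grepl("^(?=.*utm_source)(?=.*utm_medium)(?=.*utm_campaign)", x, perl = TRUE)
}

# e.g. keep the qualifying rows of a data frame 'df' with a string column 'url'
# df[checker(df$url), ]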

I am not able to share the full dataset that I was using, due to privacy concerns. The dataset that I tested both functions against had 26,746 rows. The 'checker' function which I wrote took on average 0.0801 seconds, and Jeremy's approach took 0.1488 seconds. I decided to stick with my checker function, but that was not because of speed. I would have happily accepted the increased computation time for mine if the times had been reversed. The reason is that I find mine easier to read. This means that there is a chance I could come back to this code in 6 months and have a clue about what it is supposed to be doing. Regular expressions can sometimes be quite hard to come back to and say, "oh yeah, I wanted to check if all the characters that occupy prime digits in my string are vowels!". I think that my simplistic grep statement will be easier to change if that becomes needed in the future, and so I will stick with the 'checker' approach. Do you have a better way to approach this using R? If so, make sure to post a comment.


Filed under R

Summary Function that is compatible with xtable

If you like to make nice looking documents using LaTeX, I highly recommend the 'xtable' package. In most instances, it works quite well for producing a reasonable looking table from an R object. However, I recently wanted a LaTeX table from the 'summary' function in base R. So naturally I tried:
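With 'foo' standing in for the object being summarized, the call was simply:

library(xtable)
xtable(summary(foo))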

Which gave me the following error:

Error in xtable.table(summary(foo)) :
  xtable.table is not implemented for tables of > 2 dimensions

So I decided to create a simple function that would return a data frame, which is easy to use with xtable. Here is what I came up with.
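The original function isn't reproduced here; a minimal sketch of the idea (the function names are mine) converts the named vector that summary() returns for one numeric variable into a one-row data frame, then stacks those rows for every numeric column:

library(xtable)

# Turn summary() of one numeric vector into a one-row data frame
summary_df <- function(x) {
  s <- summary(x)
  d <- data.frame(rbind(as.numeric(s)))
  names(d) <- names(s)   # keep "Min.", "1st Qu.", etc. as column names
  d
}

# Apply it to every numeric column of a data frame and stack the results
# (columns containing NAs pick up an extra "NA's" entry and need a bit more care)
summary_df_all <- function(df) {
  num <- df[sapply(df, is.numeric)]
  do.call(rbind, lapply(num, summary_df))
}

xtable(summary_df_all(iris))   # prints a LaTeX tabular instead of raising an error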

 

Now, when I use xtable on the result, I get proper LaTeX output instead of an error.

This should lead to an easier way to incorporate more summaries when you are writing your paper, using R and knitr of course. If you do use knitr, make sure to try the results = 'asis' chunk option when printing xtable output.


Filed under R

Sending Email From R

When I am working in SQL Server and need to send an email, I use "sp_send_dbmail". When working in R, however, I didn't know how to send one. I often use an email as notification that a process has finished; it also works nicely as a text to your cell phone. I had one additional reason why I wanted to be able to email from R: I wanted to send an email to my Evernote account with just a few key strokes. The goal was to accomplish this by writing a simple wrapper function. Below is the solution that I came up with. It works, but there are serious security implications, so I offer this merely as a proof of concept. Hopefully someone can show me a better way to handle passing your email password to the R function.
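The original wrapper isn't reproduced here; the sketch below leans on the 'mailR' package (my choice) with placeholder addresses, and it stores the password in plain text, which is exactly the security problem mentioned above:

library(mailR)   # requires Java via rJava

# Quick note-to-Evernote wrapper (addresses and password are placeholders)
en <- function(s, tags = "") {
  send.mail(from         = "me@example.com",
            to           = "username.abc123@m.evernote.com",  # your Evernote email address
            subject      = paste(s, tags),   # Evernote picks up '#tag' from the subject line
            body         = " ",
            smtp         = list(host.name = "smtp.gmail.com", port = 465,
                                user.name = "me@example.com",
                                passwd    = "my-password",     # plain text: proof of concept only
                                ssl       = TRUE),
            authenticate = TRUE,
            send         = TRUE)
}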

I often have an R terminal open, so when I have a great idea for a research project I can add a note to my Evernote account with a single line. For instance:

en(s="Read up on current imputation methods",tags="#Research")

Then a new entry is added for me in Evernote with the tag 'Research'. (I have noticed that the tagging seems to work only if the tag already exists in my Evernote account.) Often I have a task that just needs to be done that I don't want to forget about. I can issue a quick command and then I will have a record of it.

en(s="email adviser about research")

That is all I need to do and the note is added to my account. I have found this to be quite useful and hopefully you will as well.


Filed under email, R

Job Market For Statisticians

I have been forced to think about the job market lately. It started with a class assignment that was meant simply to open my eyes to the current job market. I felt that I was already familiar enough, but completed the assignment to be a good student, outlining the skills I need to improve upon and so forth. Within a day of completing my assignment I came across "A Guide and Advice for Economists on the U.S. Junior Academic Job Market: 2014-2015 Edition" after clicking through some links on Facebook. I found it a great read, and it got me thinking about a few things that will likely prove helpful down the road. I had originally intended to pursue a Ph.D. in Economics after finishing my M.S. in Statistics. However, life took a turn: I ended up working full time, and then started working on my Ph.D. in Statistics part time while continuing to work. Make sure that you take a look at the salary tables that are included. The table below is for full-time working white males, by the field of the Ph.D. they obtained. There is more variation associated with the Economics degree, but not enough to keep it from looking better than Mathematics/Statistics based solely upon salary.

For White Males            Median Salary   SE      95% Range
Mathematics/Statistics     $100,000        1,500   (97,000 - 103,000)
Economics                  $126,000        5,500   (115,000 - 137,000)

(Data taken from Table 50; the 95% ranges are mine, based on the assumption of a normal distribution.)

There is also a recent article in The American Statistician about career paths, "Which Career Path Will You Follow?". Between these three events occurring within a week, I thought the topic merited a post. Have some other useful job advice or interesting statistics that current graduate students should know? Post a comment.


Filed under Uncategorized