Fall is the favourite season for most Americans

#MakeOverMonday Week 35 2019

This week it is a small dataset (12 rows and 3 columns) that needs to be turned into a visualization.
I first recreated the original visualization with the same layout.
Then I assessed it on the following points.

What is the original story:

While weather predictions can be unreliable, it is safe to say that a quarter of Americans are looking forward to the summer months: in a survey of the four seasons, summer rates high, as 25% of Americans look forward to the warmer weather. Still, fall is the favourite season of the nation. The original chart shows, per season, how favoured it is by each age group, and concludes that fall is the favourite season of the nation. The percentage is also written on each bar.

What could be improved

  • Do not show the percentages in the graph, because they do not add up to 100% in total, so the viewer starts to wonder why.
  • Show it in a different way, because it is not visible at a glance that fall has the highest value. With the standard Power BI visuals the following options are possible:
    • Stacked Bar Chart, Stacked Column Chart, Stacked Area Chart, Ribbon Chart, Pie Chart, Donut Chart or Treemap, split into the age groups
    • Bar Chart, Line Chart, Area Chart, Pie Chart, Donut Chart or Treemap, without splitting into the age groups

I chose the Stacked Column Chart. I turned the data labels off and first considered switching the tooltip off as well, to stop the confusion about the percentages. But I wanted to keep the tooltip, so instead I added a new category to the data, ‘no preference’, to account for the missing responses. This adds 3 rows and makes the total preference per age group 100%.

Who is the audience?

It was an article on YouGov. YouGov is a British international Internet-based market research and data analytics firm. The company’s methodology involves obtaining responses from an invited group of Internet users, and then weighting these responses in line with demographic information. Reading this I conclude that they did an online survey and this is the result of that survey. The audience is anyone who visits the site to read what is there: the general public.

What is the goal

It is a news article in June 2013 in the Lifestyle section.

Go to the visual

Practice Python analysis with a #MakeOverMonday dataset

#MakeOverMonday Week 33 2019; A bird’s-eye view of clinical trials

In this blog I have explored and visualized a dataset with Python.

Exploring the data

The dataset A bird’s-eye view of clinical trials is provided as an Excel file. It is bigger than in other weeks: this file has 13,748 rows and 11 columns.

First I need to know what is in the dataset, so I use the .head() command.

Then I want to know the dtypes.
There are 3 columns with numbers and 7 with text (categorical variables).
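To make those first steps reproducible here, a small sketch with a stand-in frame (in the blog the real data comes from the Excel file, so the load line in the comment is an assumption):

```python
import pandas as pd

# Stand-in for the real frame; the actual data would be loaded with
# something like pd.read_excel("clinical_trials.xlsx") (file name assumed).
df = pd.DataFrame({
    "NCT_Number": ["NCT001", "NCT002", "NCT003"],
    "Sponsor": ["GSK", "Pfizer", "GSK"],
    "Start_Year": [2005, 2010, 2007],
    "Enrollment": [120, 45, 300],
})

print(df.head())    # the first rows: a quick look at what each column holds
print(df.dtypes)    # int64 for the numeric columns, object for the text ones
```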

For the 3 columns with numbers I can use the .describe() function, although for 2 out of 3 that gives a funny result, because there is no use in describing ‘Start_Year’ and ‘Start_Month’. ‘Enrollment’ is the only one that can give useful answers to this request.
Now I need some domain knowledge: what does enrollment mean? Ah, found it in the text of the original visualization: Enrollment is the number of patients enrolled.
Result: the mean is 441 patients, with a max of 84,496 and a min of 0. The value 365 at the 75% quartile tells me that most trials are done with fewer patients than the mean.
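As a sketch, the same pattern shows up with a handful of illustrative values (not the real 13,748 rows):

```python
import pandas as pd

# Illustrative enrollment values; one very large trial, like in the real data
enrollment = pd.Series([0, 50, 120, 365, 441, 900, 84496], name="Enrollment")

stats = enrollment.describe()   # count, mean, std, min, quartiles, max
print(stats)

# A 75% quartile far below the mean means a few very large trials
# pull the mean up: the distribution is right skewed.
print(stats["75%"] < stats["mean"])   # True
```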

Next up is counting the unique categories of the categorical variables.
For this I wrote a small function:
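A minimal version of such a function could look like this (the frame below is a tiny stand-in for the real dataset):

```python
import pandas as pd

def count_uniques(frame, columns):
    """Number of distinct values per categorical column."""
    return {col: frame[col].nunique() for col in columns}

# Tiny stand-in frame; the real one is the MakeOverMonday dataset
df = pd.DataFrame({
    "Sponsor": ["GSK", "Pfizer", "GSK", "Novartis"],
    "Phase": ["Phase 3", "Phase 2", "Phase 3", "Phase 1"],
})
print(count_uniques(df, ["Sponsor", "Phase"]))  # {'Sponsor': 3, 'Phase': 3}
```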

There are 10 different sponsors, 7 phases and 9 different statuses, for in total 867 conditions/diseases.
There are no duplicate NCT numbers, so every row is unique. There are 13,434 different titles and 13,564 summaries.

Visualizing the data

Bar Charts

First, I will examine the frequency distributions of the categorical variables ‘Sponsor’, ‘Phase’ and ‘Status’ with bar charts. Visualizing the others with bar charts makes no sense, because of the number of bars you would get.

The sponsor GSK has the highest number of trials: almost 2,500.

Looking at the phases, most trials have the label Phase 3.

And logically, most trials have the status Completed.
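The counts behind these bar charts come from value_counts(); a sketch with toy data (the real counts are of course different):

```python
import pandas as pd

# Toy sponsor column; the real one has 13,748 entries
df = pd.DataFrame({"Sponsor": ["GSK", "GSK", "Pfizer", "GSK", "Novartis"]})

counts = df["Sponsor"].value_counts()   # frequency per category
# In a notebook, counts.plot(kind="bar") draws the bar chart.
print(counts)
print(counts.idxmax())                  # the sponsor with the most trials
```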

Histograms

I made histograms of the numeric variables ‘Start_Year’, ‘Start_Month’ and ‘Enrollment’.

The histogram of ‘Start_Year’ shows that in the beginning there were not many trials and that from 2000 the number of trials went up, with a peak in 2005, 2006 and 2007.

The histogram of ‘Start_Month’ shows a more or less even distribution.

The histogram of ‘Enrollment’ shows what the describe function of the column already told us: most trials have a low number of participants. So it is right skewed.
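A sketch of the skew check with illustrative numbers; a positive skew value confirms numerically what the histogram shape shows:

```python
import pandas as pd

# Illustrative enrollment numbers with a long right tail
enrollment = pd.Series([10, 25, 40, 40, 60, 80, 150, 400, 5000])

# In a notebook, enrollment.plot(kind="hist", bins=50) draws the histogram;
# the skew statistic confirms the shape:
print(enrollment.skew())   # > 0, so right skewed
```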

Kernel density estimation

For ‘Start_Year’ and ‘Enrollment’ I would like to see the KDE (kernel density estimation) to find out what the density of occurrence of the trials is.
For ‘Start_Year’ the graph does not give extra insight; for ‘Enrollment’ it does.

This KDE shows that there are also some large trials with a bigger group of participants.
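A sketch with illustrative values; the plot call (in the comment) is the interesting part, the print just shows the same long tail numerically:

```python
import pandas as pd

# Illustrative enrollments: most small, a few very large
enrollment = pd.Series([5, 10, 20, 20, 30, 45, 60, 2000, 8000])

# In a notebook, enrollment.plot(kind="kde") draws the density curve
# (this uses scipy under the hood). The gap between the median and the
# maximum reflects the same long right tail:
print(enrollment.median(), enrollment.max())
```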

Scatter Plot

I have made a Scatter Plot. A Scatter Plot is used to find out if there is a relationship between two variables. I wanted to see what the relationship was between ‘Enrollment’ and ‘Start_Year’. In this Scatter Plot it is visible when the bigger trials started.
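A sketch of that scatter plot with toy values (the real years and enrollments differ):

```python
import pandas as pd

# Toy rows; one large trial starting in 2006
df = pd.DataFrame({
    "Start_Year": [2001, 2004, 2006, 2006, 2010],
    "Enrollment": [50, 80, 5000, 120, 60],
})

# In a notebook, df.plot(kind="scatter", x="Start_Year", y="Enrollment")
# shows in which years the big trials started; the largest here is in:
print(df.loc[df["Enrollment"].idxmax(), "Start_Year"])
```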

Box Plot

With a box plot I was able to show when each sponsor was running its trials.
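A sketch of the same idea with toy values; the group medians tell the same story as the boxes:

```python
import pandas as pd

# Toy data: two sponsors, each with two trial start years
df = pd.DataFrame({
    "Sponsor": ["GSK", "GSK", "Pfizer", "Pfizer"],
    "Start_Year": [2002, 2006, 2010, 2014],
})

# In a notebook, df.boxplot(column="Start_Year", by="Sponsor") draws one
# box per sponsor; the medians alone already show who ran trials earlier:
print(df.groupby("Sponsor")["Start_Year"].median())
```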

Now I have explored the data and I can start the next step: telling a story with visualizations in Power BI.

Uses of 8 different energy sources to generate electricity in the UK part 2

#MakeOverMonday Week 32 2019

I left this challenge alone for a week and enjoyed the summer holiday season with the kids. In the meantime some people have given me suggestions after reading my earlier blog. So let’s see what I can do today.

As a first step I changed the table in Power BI; I now have a table with a lot more rows and fewer columns. My columns are ID, Timestamp, Attribute and Value.

The relationship between the tables is automatically detected on the ID numbers. A check by creating a table shows there is a mistake in this relationship. Digging for the answer just gives me more questions about the data. So let’s stop digging and create a visual with one of the stories I wrote about in my first blog.

The story I pick is the change in use over the years of 3 different types of sources: nuclear, fossil based and renewables.

  • I made two Measures in Power BI to group the sources together.
  • I have made two stacked area charts.
    • One that shows everything over the years
    • One where you can breakdown to a year / month and see the daily use.
  • I did this with the first table I had loaded earlier this week and not with the second table I made today.
  • For the colours I chose purple for Nuclear; I was tempted to use red because of the radiation risk, but found that too much. Of course fossil became grey, connected to the colour of coal, and the renewables became green.
  • To make sure that the filter is only for the second graph I used the option edit interactions to disconnect the filters from the first graph.
  • Kept the text in neutral grey and only made the names of the 3 groups in the text bold.
  • Both charts got fixed axes. With the automatic Y-axis it did not start at zero, with the result that the bottom source looked smaller than it should be. The second chart also has a fixed end; this way the Y-axis is the same for every filter that is applied. I found out that February 2012 was the month where the demand on the sources was the highest.

Uses of 8 different energy sources to generate electricity in the UK part 1

#MakeOverMonday Week 32 2019

For this week we received a bigger dataset, so first of all, let’s do some exploring with Python code for the practice.

The dataset has 796,453 rows and 12 columns. Hmm, something to be aware of: 11 of the columns have a name that starts with a space. So if I want to refer to them, I need to include that space as well.
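A sketch of what that looks like in pandas, with two toy columns (the real column names differ):

```python
import pandas as pd

# Two toy columns carrying the leading space that the real file has
df = pd.DataFrame({"id": [1, 2], " coal": [10, 12], " wind": [5, 7]})

print(df[" coal"])              # referring to the column needs the space...
df.columns = df.columns.str.strip()
print(df["coal"])               # ...or strip the names once, up front
```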

The information runs from 01-01-2012 to 03-08-2019, so on charts you need to be aware that the sum of 2019 is not comparable with the sums of other years.

There are 8 types of energy sources in this dataset:

  • Biomass: runs on imported timber or sawmill waste.
  • CCGT: Combined Cycle Gas Turbines are gas turbines whose hot exhaust is used to drive a boiler and steam turbine. This two-stage process makes them very efficient in gas usage. They are also quite fast to get online, so they are used to cover peak demand and to balance wind output.
  • Coal
  • Hydro
  • Nuclear
  • Pumped: These are small hydro-electric stations that can use overnight electricity to recharge their reservoirs. Mainly used to meet very short-term peak demands.
  • Solar
  • Wind

The other columns are an ID number, a timestamp, the demand (the total gigawatt demand of the entire UK) and the frequency (the grid frequency is controlled to be exactly 50 Hz on average, but varies slightly).

The Frequency and Solar columns are decimal numbers, the others are whole numbers, and the timestamp is an object.

There are no missing values at all. There must be some duplicates, looking at the timestamp, because there are only 796,401 unique timestamps in the 796,453 rows. But 52 rows in a dataset this size is negligible. Every row did get a unique ID, because there I see the same unique count as there are rows in the dataset.
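The duplicate check itself is two lines of pandas; a sketch with four toy rows:

```python
import pandas as pd

# Four toy rows, one repeated timestamp (the real file has 52 of them)
df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "timestamp": ["2012-01-01 00:00", "2012-01-01 00:05",
                  "2012-01-01 00:05", "2012-01-01 00:10"],
})

print(len(df) - df["timestamp"].nunique())   # rows sharing a timestamp: 1
print(df["id"].nunique() == len(df))         # every id is unique: True
```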

I used the Quick Insights option of Power BI and got insights that do not tell me anything. So I started to make some myself.

First I looked at the uses by year (2012-2018) for each source.

  • Coal goes down
  • Nuclear stays around the same level
  • CCGT grew until 2016 and is now flattening out on the way down
  • Wind is going up since 2016
  • Pumped goes down since 2016
  • Hydro shows some fluctuation, but stays around its average
  • Biomass: use is going up
  • Solar shows intense growth in 2017 and stays high in 2018. That growth is so large that when you plot it in a line chart together with coal, which had the highest use in 2012, the coal line goes flat at the bottom of the chart. So I do not consider the information in the solar column to be of good quality.

The 3 main sources in 2012 were Coal, CCGT and Nuclear; in 2018 they are CCGT, Nuclear and Wind.

The demand in GW has gone down over the years by 724,444,951 GW.

I found insights, but did not yet find a story to tell. Let’s take some rest and continue another day.

Asylum applications in Europe part 4

#MakeOverMonday Week 28 2019

Go to the Power BI viz page 3

In the Facebook group I got feedback on iteration 2:

Loads better! I felt part of the conversation and knew immediately what story you were trying to tell!
Fantastic job! Great blog post! I hope others are encouraged to try this after seeing your journey!

Questions

  1. Is there a legend for the first two graphs?
  2. Do they need to be multicolored or could you make them all the same color and then call out the years you want to point out in a highlight color?

My answers

  1. There is no legend for the first two graphs; if I add one it will be a list of 28 countries, and that is a lot.
  2. Looking at answer 1 in combination with question 2, I’m going to make a stacked column chart with 3 layers: the 26 other countries together and the 2 countries that I’m talking about separated. Then I can also make a legend for the two graphs.

How did I do this?
I wanted to make a measure in Power BI that sums the 26 countries and leaves out the values of Germany and Hungary. I could not get it done with a DAX formula, so I went back to the source, the Excel file, made a new row with the right values and imported the file as a new file.

Who can tell me how to do it with a DAX formula?
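One pattern that might do it is CALCULATE with a NOT ... IN filter. This is only a sketch: the table name Applications and the columns Country and Value are assumptions, not the real names in my model:

```dax
Other 26 Countries =
CALCULATE (
    SUM ( Applications[Value] ),
    NOT Applications[Country] IN { "Germany", "Hungary" }
)
```

The NOT ... IN filter keeps every row except the two named countries, so the measure sums only the other 26.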

Suggestion

Another graph that might be interesting is the variance between accepted and denied which you might be able to use a waterfall chart to show in Power BI.

I did not start working on that one yet.

My blog part of Power BI news

Today I got a message from Twitter that there was a post with my name, so I went to it. I was wondering how I had contributed to this news. So I followed the link and found a great website with a nice collection of Power BI news.

I started reading, and when I reached the topic “Business” I found my blog about the asylum applications in Europe part 3.

I feel proud of this.

Asylum applications in Europe part 3

#MakeOverMonday Week 28 2019

Go to the Power BI viz page 1

Today and yesterday I have been working on a second report with the data given by #MakeOverMonday.

My biggest challenge was getting the data in the right format to create a normal line or bar chart, with the years on the X-axis and the countries as the values. I dove into one of the other reports and found out that the format of the data had been changed: the years were no longer in separate columns, but moved into 1 column, which is what I had in mind as well. So first I worked in Excel and created the format needed to make the report I wanted. It takes time to put it in the right format.

Lesson learned: if I feel the data is not in the right format, change it first before putting time into making a report and then needing to start over.

While I was busy in Excel I thought: this must be possible in Power BI with the Power Query Editor. So I went back to Power BI, read carefully, and found the option Unpivot Columns in the Transform menu. It did the trick in a click.
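For the Python practice: pd.melt is the pandas counterpart of Power Query's Unpivot Columns. A sketch with two toy year columns (the country names are from the dataset, the numbers are illustrative):

```python
import pandas as pd

# Wide table with one column per year, as the source data arrived
wide = pd.DataFrame({
    "Country": ["Germany", "Hungary"],
    "2015": [100, 40],
    "2016": [120, 10],
})

# melt = "Unpivot Columns": year columns become rows of (Year, Applications)
long = wide.melt(id_vars="Country", var_name="Year", value_name="Applications")
print(long)
```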

After the data was in Power BI in the format I wanted, I did some cleaning of the data:

  • Removing empty rows
  • Changing the value “:” into an empty cell. This value was used several times; it did not mean 0, because there were also zeros in the columns.
  • Changed “Germany (until 1990 former territory of the FRG)” into Germany, because all data was from 2009 and onwards.
  • Filtered out the information about the 28 European countries together
  • Changed “Total positive decisions” into “accepted”, because that fits nicer in the graph of denied and accepted

This time I have built a report with 3 visuals.

Visual 1 shows the total asylum requests in Europe over the years by country.
I noticed a peak in Hungary in 2015 and dove into news articles to find out when Hungary closed its border with Croatia for Middle East refugees, and read that this was at the end of 2015. So this news fact is visible in the graph.

Visual 2 shows how many decisions have been made. For Germany you can see that a request and a decision are not always in the same year.

Visual 3 shows the accepted and denied requests. I have chosen a clustered column chart instead of a stacked column chart, because it is easier to see which of the two is larger.

For the 3 visuals I gave the Y-axis the same scale.

Asylum applications in Europe part 2

#MakeOverMonday Week 28 2019

Feedback from the #MakeOverMonday review

This Wednesday there was the weekly review of the submitted visualizations, and here are the points I need to work on and be aware of for a next report:

  • Make sure there is a title / header
  • Check the spelling of the text
  • When it is mentioned that it is about people, you do not have to specify that they are divided into two gender groups
  • The filter on the left-hand side is big; better to make it a dropdown list
  • The tables are not inviting to read; remove them or use a kind of heatmap or only totals
  • Use a normal bar chart and show the numbers as full amounts; do not use K, because there are amounts below 1,000

Feedback from members of the facebook group BI Data Storytelling Mastery

  • What is the takeaway?
  • What story should I see?
  • I suggest considering something different than a stacked bar. I can see overall total but understanding difference in composition is hard for human brain.
  • In your blog can you tell why you chose your visuals, colors, fonts?
  • But most importantly the story you want to tell. Better story might be approved vs denied, overall trend. Is there a story in the reasons? Is there a story in the sex or age difference? Why should the viewer care about these numbers? Help us see the story from your point of view, the refugees point of view, the asylum counties pov.
  • Can you make the viewer want to investigate and find out more?

So I’m going back to an empty Power BI sheet to rethink the story that I want to tell with this data.

Asylum applications in Europe part 1

#MakeOverMonday Week 28 2019

Go to the Power BI viz page 2

This week we are working with data about asylum applications in Europe from 2008 until 2018. The data provided is part of a bigger dashboard. You can find the original at “International protection in the EU+: 2018 overview”. We received the two datasets that were used for these two visuals.

Looking at the data in Excel, the Age column only gives totals and no differentiation between minor and adult. The decision column also gives 4 descriptions other than the 4 used in the original graph. The two columns with totals do not give the same results, so it is not possible to combine the two sets. That makes it impossible to find out how many of the applicants per year got asylum and what type of asylum. This is probably because of the length of the application process.

I decided to make a visualisation where you can filter by country how many people are requesting asylum and what the decisions are by year. I could not connect the request and decision numbers to each other.

Alcohol Consumption By Country

#MakeOverMonday Week 26 2019

This week I want to participate in the weekly challenge of #MakeOverMonday; let’s see if I have enough spare time between job applications, family activities and Python for Data Science training. The last two weeks I could not find the time for the challenge.

This week it is a small dataset. It has 25 rows (countries) and their alcohol consumption per capita. In the article I read that it is per capita older than 15 years.
In the original visualization there is a bar chart of these 15 countries.
On Wednesday I watched the live webinar and was impressed with the visuals other people made. I got a lot of inspiration and a challenge to find out what I can do.

In my head grew the idea to add the population by country, to show how many liters are drunk per country. I found information about the population per country for 2017 and added this to the Excel spreadsheet. Now I needed to find how much of the population is above 15 years old.
No, I could not find it, so end of idea.

I just used the data I had and created some graphs showing what is in the data. The information I found about the population by country in 2017 has been used.

I did submit it to the #MakeOverMonday challenge.

Data about sleeping hours in America

#MakeOverMonday Week 23 2019

Introduction

From MakeOverMonday I got data about sleep times per day, age and sex, with the question to work on the graph and remake it. This week I first took the data into a Jupyter Notebook to analyse it with Python; after that I went to Power BI for the visualization.

Exploring the data

The data has 945 rows with 8 columns.
There is no missing data.
It is information about 15 years (2003-2017).
The people are put in 7 age groups, and there is a group with all the information together. They are also divided into 3 sex groups (both, men and women).
There are 2 different types of days (“Nonholiday weekdays” and “Weekend days and holidays”) and there is a group where all information has been put together.
In Python I left the data like it was; I did not alter it.

I made several histograms to see what they were telling me. The histogram of Average hours per day sleeping gave me a nice right-skewed distribution (skewness 0.4661909713080754).
The mean of this one is 8.069 and the median is 8.81.

The correlation between the year and the Average hours per day sleeping is 0.15, which is small.
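These numbers come from a few pandas one-liners; a sketch with illustrative values (the real skewness is 0.466 and the real correlation 0.15):

```python
import pandas as pd

# Illustrative values; the real column comes from the BLS sleep dataset
sleep = pd.DataFrame({
    "Year": [2003, 2008, 2013, 2017],
    "Hours": [8.0, 8.1, 8.6, 9.3],
})

print(sleep["Hours"].skew())    # > 0: a right-skewed shape
print(sleep["Hours"].mean(), sleep["Hours"].median())
print(sleep["Year"].corr(sleep["Hours"]))   # in the real data this is only 0.15
```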

I want to know if there is a relation between ‘Average hours per day sleeping’ and the categorical variables.
I found something, as can be seen in the two boxplots: it looks like 15 to 24 year old people sleep more hours.

During the Weekend days and holidays, people sleep more hours

This is the graph as it was given to remake:

https://www.bls.gov/tus/charts/chart16.jpg

Visualize the data in Power BI

I did not find it easy to pick a topic to visualize; there was not really something that got my attention while exploring the data, besides that I found it a lot of hours that people spend sleeping. I think the title should be: hours spent in bed.

I decided to visualize the average hours spent in bed by age group over the years, with the possibility to filter on men or women.

Make Over Monday Week 21 iteration 2

After the review by Eva Murray, Head of BI & Tableau Zen Master at Exasol and Jeff Shaffer, COO & Vice President, Unifund, Tableau Zen Master I made an iteration of my visualisation.

The suggestions were:

  • About the title: do not put information there that is not connected with the dataset
  • To change the bar chart into a column chart
  • To show the month by its abbreviation
  • To make sure the scale does not change when using a filter (so fixed X and Y axes)
    • This was a challenge, but I made it, YES!!
  • Grammar: use the word from instead of in (July)
  • and to highlight the most fatal months

The interaction with the charts and the filters was seen as a positive thing.

Second time Makeover Monday

After two weeks of the Business Case from Techionista it is time to do a MakeOverMonday challenge again.

Today we are asked to work with data from Ali Sanne; she collected, prepared and distributed the data on data.world. It is data about deadly bear attacks on people, from 1900 till 2018.

Read the original report at Fox.

My visualisation in Power BI

Webinar MakeOverMonday

Today I watched a webinar of the visualisations that other people had made with the same data that I had used. I saw a number of beautiful things and again learned something new.

My first time at Makeover Monday

This was the given visualisation

There have been 216 spacewalks at the International Space Station since December 1998.

I downloaded the data (an xls file with 2 worksheets), saved it as two CSVs and pulled them into Power BI.
There I did some transformations:

  • In the table ISS Spacewalks I made sure that the “year” was a Whole Number and of the type Date and Year.
  • Also the “Number of Spacewalks” needed to be a Whole Number.
  • In the table Spacewalks I extracted the Year out of the “Date” column.
  • The ‘Duration (hours)’ column has two values containing a “-”; these needed to be removed before I could transform the column into a Number.
  • I made a relation between the two tables using the year as a many to many relation.

After that my first challenge was to rebuild the given visualisation.

I’m proud of myself that I managed to create this.
I see that more can be done with this given dataset, but not for today. Maybe I will give it another shot in the upcoming days.