Andrew Zimbroff is an Assistant Professor at The University of Nebraska-Lincoln, and the Director of Research at 3 Day Startup. He contributes to 3DS as a researcher, author of new curriculum, and program facilitator.

Data Collection at 3DS

A little over a year and a half ago, I wrote a blog post about data collection at 3DS. Since that post, the 3DS organization has been busy adding new ways to monitor and improve our data-collection efforts. This post reviews some of the projects we have undertaken and how they have positively affected the 3DS ecosystem.

At 3DS, we are always focused on ensuring an impactful and enjoyable student experience. We also like to pivot quickly to make these improvements, and data-driven findings often prompt these changes. We have already acted upon collected data and used it to improve our curriculum in various ways. For example, last fall we noticed that participant learning of prototyping techniques was not as high as we aimed for. We examined our prototyping content, identified areas for improvement, and completely revamped our prototyping educational module. We now introduce additional prototyping techniques, such as A/B testing, “Wizard of Oz” prototypes, and storyboarding, as well as examples from previous 3DS programs and startups. Changes like this let us see a direct benefit from collected data and make our collection efforts worthwhile.

Slide from the updated 3DS prototyping module

Current Efforts for Data Collection

Currently, we attempt to survey every 3DS participant before and after participation in workshops. We use participant self-assessment to document student learning: we ask students to rate their own ability in startup-related skills taught during 3DS programs. By having students rate their knowledge of entrepreneurial topics before and after programming, we can measure the learning that happens at a 3DS program. We find statistically significant (p < .01) positive changes for all concepts measured. The graph below shows pre- and post-program responses from over 20 3DS programs delivered in Spring 2016.
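For readers curious how this kind of pre/post comparison can be computed, here is a minimal Python sketch on made-up survey data; the concept names, the rating values, and the use of a paired t-test are illustrative assumptions, not a description of our exact analysis pipeline.

```python
# Minimal sketch of a pre/post self-assessment comparison.
# Concepts, ratings, and the paired t-test are illustrative assumptions.
import pandas as pd
from scipy import stats

# One row per participant per concept: self-rating before and after the program.
ratings = pd.DataFrame({
    "concept": ["prototyping"] * 4 + ["customer discovery"] * 4,
    "pre":     [2, 3, 4, 3, 3, 4, 2, 5],
    "post":    [4, 5, 5, 5, 5, 6, 4, 6],
})

for concept, group in ratings.groupby("concept"):
    # Paired test: each participant serves as their own control.
    t_stat, p_value = stats.ttest_rel(group["post"], group["pre"])
    change = (group["post"] - group["pre"]).mean()
    print(f"{concept}: mean change = {change:+.2f}, p = {p_value:.3f}")
```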


There is significant academic interest in entrepreneurship education, and our research in this area has drawn attention from researchers and those who deliver entrepreneurship curricula. We have presented this data at various academic conferences, including the Kauffman Research Symposium in Kansas City, the USASBE Annual Conference in San Diego, and the ICPTI conference in the United Kingdom. These opportunities to share our work have generated insightful discussion and allowed others to try similar methods. More importantly, they let us get feedback on our work and give us new ideas to build on our current efforts. One suggestion we frequently encounter is to move beyond self-assessment and employ other methods to measure student learning. This was the inspiration for our most recent experiment in data-driven curriculum improvement and learning.

New Methods Piloted at 3DS Brasilia

I was recently able to help with the 3DS US/Brazil Startup Connection in Brasilia. This program, funded by the US State Department to further entrepreneurial learning and relationships between Brazil and the US, aimed to introduce immersive entrepreneurial learning across Brazil. It was an amazing experience for many reasons, including motivated participants, esteemed Brazilian partners, and a seasoned crew of facilitators from 3DS global. The program also offered an opportunity to pilot a new method for measuring participant learning.

As in other programs, we administered pre- and post-program surveys to participants to measure changes in self-assessment of entrepreneurial skills. We also wanted students to be evaluated from an outside perspective. To achieve this goal, we built on a previous collaboration with Valid Eval, an organization focused on evaluating entrepreneurs funded by the Kauffman Foundation. We constructed rubrics (example below) based on 3DS curriculum to measure student performance; these were filled out by experts with entrepreneurial experience.


We had experts evaluate participants at three different stages throughout the weekend. First, during Saturday afternoon mentoring sessions, the mentors evaluated each team’s Lean Canvas submission. Next, we evaluated intermediate pitches on Saturday evening. Finally, we evaluated the final pitches on Sunday evening. The rubrics for the intermediate and final pitches were very similar and contained the same criteria for many fields (some content included in the final pitch is not emphasized in the intermediate pitch and was not included in the intermediate pitch rubric). This allows us to directly compare team performance on Saturday and Sunday and, in a fashion similar to the pre- and post-program self-assessment surveys, measure learning resulting from 3DS programming.
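To make that comparison concrete, below is a small sketch of how one team's intermediate and final rubric scores could be lined up; the criteria names and the 1-5 scale are placeholders invented for illustration, not the actual Valid Eval rubric fields.

```python
# Sketch: comparing one team's intermediate and final pitch rubric scores.
# Criteria names and the 1-5 scale are hypothetical placeholders.
intermediate = {"problem": 3, "solution": 2, "market": 2, "business model": 3}
final_pitch  = {"problem": 4, "solution": 4, "market": 3, "business model": 4,
                "financials": 3}  # extra field only scored on the final rubric

# Compare only criteria that appear on both rubrics.
shared = sorted(intermediate.keys() & final_pitch.keys())
for criterion in shared:
    change = final_pitch[criterion] - intermediate[criterion]
    print(f"{criterion}: {intermediate[criterion]} -> {final_pitch[criterion]} ({change:+d})")
```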

Overall team scores: intermediate vs. final pitches

Once all the data was entered, we could see in detail which material student teams were learning most effectively (see example below). This helps us better understand where our curriculum is strong and what needs to be emphasized in the future. Because we evaluated both intermediate and final pitches, we can also measure learning that occurred between these two milestones.

Example of a single team’s scores: intermediate vs. final pitch

Additionally, we can examine results from all teams combined to better understand how participants responded to different curriculum topics. There are further opportunities to combine this data across multiple programs to identify future curriculum improvements. Even though we are only in the preliminary stages of this data collection, it is fascinating to consider the many potential ways to put this data to work for the benefit of 3DS.
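A rough sketch of how scores from multiple teams might be rolled up by topic is shown below; the team names, topics, and nested-dictionary layout are assumptions made purely for illustration.

```python
# Sketch: averaging rubric scores across all teams for each curriculum topic.
# Team names, topics, and scores are made up for illustration.
from statistics import mean

team_scores = {
    "Team A": {"problem": 4, "solution": 3, "market": 2},
    "Team B": {"problem": 3, "solution": 4, "market": 3},
    "Team C": {"problem": 5, "solution": 3, "market": 2},
}

topics = sorted({topic for scores in team_scores.values() for topic in scores})
for topic in topics:
    avg = mean(scores[topic] for scores in team_scores.values())
    print(f"{topic}: average score {avg:.2f} across {len(team_scores)} teams")
```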

Average scores across all teams

The most exciting part of this experiment was the speed at which we were able to act upon this data. During the program, we used these rubrics for the intermediate pitches on Saturday. We collected, consolidated, and analyzed the data that evening, and less than 16 hours later we used it to give tailored, data-driven feedback during Sunday morning mentoring sessions. Not only does this new assessment allow us to improve future curriculum, it also gives us an additional tool to help guide students during weekend programs.
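As a sketch of that overnight turnaround, the snippet below flags each team's lowest-scoring criteria so mentors can focus on them the next morning; the scores and the "two weakest criteria" cutoff are hypothetical choices for the example.

```python
# Sketch: flagging each team's weakest rubric criteria for Sunday mentoring.
# Scores and the two-criteria cutoff are hypothetical.
team_scores = {
    "Team A": {"problem": 4, "solution": 2, "market": 3},
    "Team B": {"problem": 3, "solution": 4, "market": 2},
}

for team, scores in team_scores.items():
    weakest = sorted(scores, key=scores.get)[:2]  # two lowest-scoring criteria
    print(f"{team}: focus mentoring on {', '.join(weakest)}")
```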


3DS facilitator Diego Marafon using graphical rubric data to coach program participants (left); example of results used to mentor teams on Sunday (right)

In academia, it often takes months or even years to collect and analyze data and get actionable results. While these timeframes are necessary for some long-term and large research projects, not all research (and especially teaching and mentoring students) has to move at this pace. At this recent program, we were able to apply the Lean Methodology, a core philosophy of 3 Day Startup, to data collection and teaching as well, helping us improve our mentoring of students. We are excited to continue developing this new assessment method and to find new ways it can benefit 3DS programming and curricula.