Some talks I’ve given in the past couple of years. These are PDFs of the PowerPoint deck with the speaker notes and, in some cases, videos or podcasts of the same material. There is often related material either in my blog or in one of my publications.
More and more apps are built for phones and tablets with less powerful processors and limited battery life. If you want to develop for these devices, it’s important to consider performance when building apps that users will love. C++ has a reputation for being hard to read, let alone write. You paid for the performance C++ gave you with late nights chasing memory leaks and crashes. C++ has moved on in the last few years with new language features, libraries and programming idioms that make many of its pitfalls much easier to avoid. This talk gives an overview of the new features in C++11, including how to stop worrying about memory management (too much), use libraries for graphics, math and data structures, and build apps in a few hundred lines of readable code.
Maybe you want to write a programming book, maybe just a blog post or some documentation for your team or the users of an open source project. All the great code you wrote is no good if no one else can understand and use it. Learn about the beginning, middle and end of writing anything and what it takes to write great sample code to go along with all those words.
C++ AMP is Microsoft’s GPU programming technology. This presentation, by one of the authors of "C++ AMP: Accelerated Massive Parallelism with Microsoft Visual C++" (MSPress), gives an overview of C++ AMP’s features. The presentation will introduce C++ AMP’s algorithms and containers programming model and its two minor additions to the C++ language. By programming against a hardware-agnostic data parallel accelerator model, rather than specific hardware, developers can future-proof their applications to run on a variety of data parallel hardware. Several C++ AMP examples will be demonstrated, showing the array and array_view container types and the parallel_for_each algorithm. The examples will be extended to show how C++ AMP code can be optimized and then used with the Parallel Patterns Library on the CPU to take advantage of multiple GPUs and achieve further performance improvements with braided parallelism.
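For readers unfamiliar with the model, a minimal C++ AMP sketch (it compiles only with Visual C++, which provides `<amp.h>`; the data here is illustrative) pairing an `array_view` with `parallel_for_each` looks like this:

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

int main() {
    std::vector<int> data(1024, 1);

    // array_view wraps host memory so the accelerator can access it.
    array_view<int, 1> av(static_cast<int>(data.size()), data);

    // parallel_for_each runs the lambda once per element on the accelerator;
    // restrict(amp) limits the body to what data-parallel hardware supports.
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
        av[idx] *= 2;
    });

    av.synchronize();  // copy results back to the host vector
    return 0;
}
```

The `restrict(amp)` qualifier is one of the two language additions the talk mentions; the other is tile_static storage for tiled algorithms.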
The source code for the samples I wrote for the talk can be found on CodePlex.
Given at the GPU Technology Conference, San Jose CA, March 2013.
"Big data" refers to unstructured data sets so large that they cannot be analyzed using traditional database tools. Today, big data is becoming more common; it is prevalent not just in Web traffic, but also in industries like oil & gas, finance and manufacturing. Based on Microsoft Research’s Dryad project, LINQ to HPC is a programming model and distributed runtime for building analysis solutions for big data. It goes beyond MapReduce and leverages the LINQ programming model and HPC scheduler to execute optimized query graphs across a cluster of machines. In this session, you will learn how to use LINQ to HPC on both Windows Azure and an on-premise Windows cluster to build analytic apps that deal with big data. These apps will be able to scale out to hundreds of machines without having to deal with the scheduling, data replication and node failure complexities generally associated with programming a large, distributed data-parallel system.
Given at //BUILD, Anaheim, CA Sept 2011.
Helping productivity programmers develop applications that run well on multicore hardware is one of the major challenges to broad adoption of parallel programming. This talk covers some of the work going on at Microsoft to enable Windows developers to write applications that target today’s multicore architectures. It gives an overview of the new frameworks and language features added in Visual Studio 2010 to support parallel programming, and the patterns they enable.
Given at the Par Lab Seminar Series, UC Berkeley, March 2011.
Multi-core and HPC technologies are rapidly moving into the computing mainstream, allowing us to develop applications with improved performance, increased responsiveness, and reduced latency. The many established design patterns in this space can help developers and architects reuse proven approaches to solving many types of concurrency problems. This talk covers many of the key patterns and gives examples of how they can be implemented using the Microsoft .NET Framework 4.0 parallel programming libraries.
I gave this talk at TechEd 2010 in New Orleans (June 2010), where I was one of the featured speakers. I also presented a similar talk at PDC 2009 in LA, at the Microsoft Research Faculty Summit (July 2010), the patterns & practices Symposium (Oct 2010), OreDev, Sweden (Nov 2010), TechEd China (Dec 2010), the p&p Symposium in Tokyo and a couple of internal events at Microsoft. You can access this deck and the video from the TechEd site.
I’ve also posted a newer version of the demo application I used in the talk. You can look at the code here.
Here is the Chinese version of the deck from my TechEd talk in Beijing.
Microsoft Enterprise Library is a collection of reusable application blocks that help address the common problems that software engineers face when developing enterprise applications. This session will provide an overview of the Enterprise Library and walk you through a demo of an application that gradually takes advantage of various application blocks. We will showcase popular features, such as logging, exception handling, policy injection, and Unity dependency injection container. We’ll discuss the underlying design and the architectural refactoring we undertook in v5.0 and give examples of how common scenarios are addressed. The session targets both developers and architects who are new to the Enterprise Library and those who have previously used it.
Here is the Chinese version of the deck from my TechEd talk in Beijing.
What happens when you take a seriously computationally hungry application and use the latest parallel programming features of C#, F# and C/C++ to improve its performance?
In this session we’ll work with a single application and look at some of the parallel features in C#, F# and C/C++, the importance of choosing the right algorithms and how to pick and mix languages and frameworks. The end result is an application running 5x, 20x or even 400x faster by fully utilizing multi-core CPU and GPGPU processors.
I gave this talk at Seattle Code Camp in April 2010. There’s a video of the talk here. You can learn more about it on my N-Body Modeling page. The source code for much of the application is also available.
The transition from single-core to multi-core technology is altering computing as we know it, enabling increased productivity, powerful energy-efficient performance, and leading-edge advanced computing experiences. Multi-core and HPC technologies are rapidly moving into the computing mainstream, allowing us to develop applications with improved performance, increased responsiveness, and reduced latency. This workshop is aimed at experienced software developers who are relatively new to the parallel computing space but expect it to become more important to their work. The workshop helps software developers understand the fundamental challenges of parallel computing, that span from the client to the cluster, such as synchronization, shared state, and moving from multi-core to multi-server. Learn how established software patterns can help developers building on both Microsoft’s Parallel Computing Platform—consisting of Task Parallel Library, PLINQ and Coordination Data Structures for .NET development, and Parallel Patterns Library and Concurrency Runtime for C++—and the HPC platform. The presenters describe the patterns in a bigger context, share their experience, and demonstrate implementations of these patterns in examples and demos. Learn how to add these patterns and new technologies to your toolbox.
Christof Sprenger and I organized this full-day workshop for PDC10. I was lucky enough to have Herb Sutter (C++), Stephen Toub (.NET) and Richard Ciapala (HPC Server) as co-presenters.
Most agile methodologies tend to assume that the team is co-located in a single team room. They give little guidance as to how to address team distribution, although proven practices are starting to emerge within the community. The Microsoft patterns & practices team has been experimenting with distributed teams for several years, mining proven practices from the community and trying them out on numerous agile projects. This talk summarizes those learnings and proven practices and gives examples of their application, both good and bad, within our teams.
There have been several versions of this deck. This is the final version of the talk, which I gave at Agile 2009 (Chicago), patterns & practices Summit 2009 (Redmond), Much Ado About Agile 2009 (Vancouver), NT Konferenca 2010 (Slovenia), DevSum 2010 (Sweden) and the p&p Symposium in Tokyo. A white paper on the same topic is also available.
The deck includes speaker notes. There is also a video and podcast which cover a previous version of the talk. You might want to check those out too.
Continuous Integration (CI) is the practice of building and testing the application under development, usually right after each and every check-in. CI grew out of the agile software development community but can add value to almost any project. This talk will describe the basic approach to CI and also some other practices teams can adopt to get even more out of their investment in CI. The talk will also cover the Microsoft patterns & practices team’s experience with CI and show some of the likely cost savings of adopting this practice on your team.
It seems like everyone wants to scale their agile teams. The Agile approach to software development needs to scale up to larger team sizes as projects grow in scope. Agile also needs to scale out to handle geographically distributed teams. Both are challenging propositions for many teams. I talk about my experiences at Microsoft: scaling agile up on the Visual Studio Tools for Office team and scaling out on the radically distributed teams within the patterns & practices group.
This talk is based on two papers: Agility and the Inconceivably Large (2007) and Distributed Agile Development at patterns & practices (2008), and includes updated information from the Visual Studio 2008 release. For more resources on this topic see my post about the talk.
Presented at the San Francisco Agile Meetup (Feb 2010), Agile Development Practices 2008 and as part of the University of Washington Certificate Program in Agile Development (with Mitch Lacey). I’ve also given variations of this talk to internal Microsoft groups and at Microsoft executive briefings.
Distributed development is a fact of life for many agile teams. Microsoft’s patterns & practices group has been following an agile, distributed development approach for several years. This talk outlines the challenges faced by distributed agile teams and details some of the best practices to address these issues and build successful distributed teams.
I was also interviewed about the white paper by David Starr, you can listen to the interview here on Elegant Code.
Our experiences at patterns & practices using continuous integration (CI) as part of a distributed agile team. The presentation and paper focus on analyzing the efficiency gains from CI and best practices around distributed development and CI.
Presented as an experience report at Agile 2008 and (in draft form) to the Seattle XP Users Group. The paper and accompanying errata are also available (the errata cover minor errors in the data analysis which came to light after I’d submitted the original paper to Agile 2008 and the IEEE).
A talk about the thinking behind p&p’s approach to building software.
Presented at the p&p Summit in Quebec. There is no accompanying paper for this talk, but the deck includes lots of references to blog posts with supporting material. This is a pretty general talk about how p&p formulated its approach to agile development; most of the other agile talks here focus on more specific aspects like distributed or large teams.
Scaling agile on very large software projects. Experiences with the Visual Studio Tools for Office team and its migration to a more agile Feature Crews based approach for the Visual Studio 2005 release.