Archives For programming

Programming Video Courses

December 4, 2011

I’ve recently come across a couple of free, public programming courses, presented as video series, that may be of interest to those of you out there (I haven’t had time to view many of the individual episodes, but they look promising).

The first is an Introduction to Computer Science and Programming, taught by an instructor at MIT. It’s definitely for beginners, and although it uses Python as its primary language, the goal is to convey the underlying fundamentals and theory. It probably gets a bit advanced for some, but it’s worth a gander regardless.

The second series is programming literacy’s Core units. This series is much broader and covers a range of languages and topics. As I write this, the first six units have been completed and are available as YouTube videos, with downloadable PDFs (and other formats) for the slides and notes. On the other hand, the last unit was finished about 20 months ago, so there may never be more in the series. Still, it’s approachable, and I like that the materials are available for viewing separately. And the price is right!

The World of Programming Chart

December 21, 2010

I recently StumbledUpon The World of Programming chart (found at Smashing Magazine’s site, which is a really good resource in general). It’s a quick and interesting graphic that introduces some of the key people, technologies, and theories in the world of programming, going back to 1843 (yes!). Smashing Magazine also has a thread on the design of the poster that’s worth checking out. I, for one, now know where the Ada programming language got its name…

Unlike many things I find online, the comments on the design post struck me as more or less worthless. Largely they fall into two camps: A) “What about person or technology X?” or B) “Look how smart I am.” Still, the post itself is worth a quick read.

Rubber Duck Debugging

December 4, 2010

I recently came across the concept of “Rubber Duck Debugging”, with which I was previously unfamiliar. I first saw the idea on an old Linux mailing list, although there’s a Wikipedia article on the subject as well (of course). You should read the first link (it’s short), but the basic premise is that you can debug code by walking through it, trying to explain it to an inanimate object (or an animate one, if you prefer). The theory is that when you reach the point where your explanation doesn’t make sense, you’ve found the source of the problem.

I’m personally a big advocate of the “walk away from your computer” debugging approach: when you’re having a really hard time debugging a program, step away from your computer and don’t think about it for a while. In my experience, you’ll either solve the problem fairly quickly after you stop thinking about it, or fix it shortly after returning to your computer with fresh eyes. In any case, you need as many debugging tools as possible in your programmer’s toolbox.

In the past decade I’ve also discovered an effective, yet impractical, way to improve one’s programming skills: write a book! We’ve all had times when an application or some bit of code did exactly what we wanted it to, even though we didn’t understand exactly why. And, really, that’s fine. Sometimes functioning is all that’s required, even if it’s not an ideal goal. The next level of knowledge is knowing why something works. And a higher level still is knowing something so well that you can explain why it works to someone else (duck or not). More importantly, at that level you understand why the code should do A instead of B, or you can clearly see that X isn’t necessary. I’ve found that writing a book on a subject hones those skills. In short, I know personally that I’m a better programmer because of the writing, and teaching, I’ve done. Clearly, it’s not practical for everyone to write a book on everything they do, but I would recommend you occasionally take a crack at explaining some code—even code that’s working perfectly fine—to anyone or anything as a way of solidifying your own knowledge.

Programming Notation

November 29, 2009

I’ve written 17 books now, counting the various editions, and with every chapter of every book there are decisions to be made about what to discuss and what not to. In a way, publishing becomes an endorsement, although I’m really not an endorsement kind of person. Which is to say that it’s never my intention to sell people on things, and I don’t believe that my way is the only way, let alone the best way, to do something. This may all sound odd, but it’s the approach I have, for better or for worse. (And, truth be told, there are still many times when I feel strongly that X is the way to do something and Y isn’t.) Really, I feel my job as a writer is to take all the information I come across—from reading other sources, from listening to the experts, and from my own experiences—and synthesize it into a coherent bundle of knowledge. Then, of course, convey that knowledge in an easy-to-follow manner. Anyway, my point in this preamble is that there are many things I haven’t written about, and don’t personally do, that may still be worth other people’s consideration. One such practice is the use of Hungarian notation in programming.

Simply put, Hungarian notation, or any similar kind of application notation, suggests that variable and function identifiers (i.e., names) reflect the kinds of data they store or return, or generally how they’ll be used. So, for example, a variable used as a flag might be a Boolean named bHasOne, and an array of names might be stored in aNames (this is a greatly simplified version of the notation; you can see more examples in the Wikipedia article). One primary benefit of using such a system is having a consistent naming scheme, which is always critical. Programming notation can also make your code easier to read, as the identifiers themselves become de facto comments. That being said, that same Wikipedia article, and several other places online, have big-name people suggesting that Hungarian notation is redundant and unnecessarily tedious. While it may be useful for loosely typed languages, like PHP (where you can easily, and inadvertently, change a variable’s type), the argument goes that strongly typed languages, and object-oriented programming specifically, have no use for notation. Again, I don’t personally adhere to these conventions but thought it’d be worth mentioning for those unfamiliar with the concept.
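To make the idea concrete, here’s a minimal sketch of what Hungarian-style prefixes might look like in practice. I’m using Python, and the specific prefixes (b for Boolean, s for string, a for array, n for number) are simplified illustrations, not a formal standard:

```python
# Hungarian-style naming: each identifier's prefix hints at the
# kind of data it holds, so the names act as de facto comments.

bIsActive = True                      # b: Boolean flag
sUserName = "larry"                   # s: string
aNames = ["Alice", "Bob", "Carol"]    # a: array (a list, in Python)
nAttempts = 3                         # n: number (an integer count)

def bHasName(aNames, sName):
    """The b prefix signals at a glance that this returns a Boolean."""
    return sName in aNames

print(bHasName(aNames, "Bob"))   # True
print(bHasName(aNames, "Dave"))  # False
```

The readability benefit shows up at the call site: seeing bHasName(…) in an if statement, you know it yields a Boolean without hunting down the definition. The counterargument, of course, is that in a strongly typed language the compiler already knows and enforces all of this.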

(And, as an aside, if you’re looking for an interesting read, here’s a good, long article on all sorts of coding issues.)