We’re always on the lookout for new methodologies in the web stack, and Meteor is one we’ve been keeping an eye on for a while. Here’s what we like about it:
We love React for its simplicity and always use it for our web UIs. Declarative UI programming is where it’s at.
Code next to markup was a bad idea, right? Not really. As it turns out, having your UI code in your views makes a ton of sense when those views are small (components).
Don’t take our word for it, though. Give it a whirl for yourself. There’s a decent chance you’ll agree.
We’re going to step through building a Hot or Not style of application tailored to sports plays.
The three main features, expressed as user stories, are:
User should be able to submit a new play. The play will have a title, description, and Youtube URL.
User should be able to see two plays and vote for one of them.
Once the user has voted for a play, the pair will be replaced with two new plays to vote on.
There is no limit to the number of plays the user can vote on; however, the user may not vote on the same play more than once.
User should see a list of top ten plays with the most votes, ordered by number of votes descending. This list will update in real time as votes are added.
Create our project:
meteor create tehgosu
This creates a basic Meteor application that we can run right away:
cd tehgosu
meteor
Then visit http://localhost:3000
Before we can start working on any views, we need to install React since that’s what we’ll be using instead of Blaze.
Adding libraries to Meteor is super simple:
meteor add react
We personally prefer CoffeeScript over JavaScript due to the readability of significant whitespace. There’s a popular opinion at the moment that CoffeeScript is obsolete now that we have ES6 and Babel. I disagree because I think browsers will eventually support WebAssembly. Once they do we’ll see even more JavaScript alternatives.
meteor add coffeescript
Now we’ll need to do some standard tweaks in order to have React and CoffeeScript play nice without excessive amounts of syntax. First we’ll create a lib folder and add a component.coffee library to it.
mkdir lib
touch lib/component.coffee
In component.coffee we’re going to add a function that we’ll call instead of React.createClass:
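A minimal version of the helper, assuming it simply delegates to React.createClass, might look like this:

```coffeescript
# lib/component.coffee
# A thin wrapper so components don't call React.createClass directly.
@Component = (definition) ->
  React.createClass definition
```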
Notice the @ symbol used to declare our Component object? CoffeeScript wraps each file in a closure so as not to pollute the global namespace. In Meteor we need to attach our object to the global namespace, and @ is CoffeeScript shorthand for this. This is a little counterintuitive compared to CommonJS-style requires, and maybe some day we’ll have a better alternative.
For now @Component makes our object accessible throughout the application.
Now we can create a React component in CoffeeScript like so:
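For example (the Hello component name and text are just for illustration):

```coffeescript
@Hello = Component
  displayName: 'Hello'
  render: ->
    React.DOM.h1 null, 'Hello'
```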
The equivalent in JSX without our library would look like this:
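For the same illustrative Hello component, written with React.createClass and JSX directly:

```javascript
Hello = React.createClass({
  displayName: 'Hello',
  render: function() {
    return (
      <h1>Hello</h1>
    );
  }
});
```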
Now is as good a time as any to set up our basic structure for the application. Meteor has a convention where any code placed within a directory named client will only run on the client. And naturally, code in a directory named server will only run on the server.
We want the following directories under the root of the project:
lib: Shared library code, loaded before everything else. Our component.coffee lives here.
client: Code that runs only in the browser.
server: Code that runs only on the server.
public: Static assets served as-is. We’ll put our robots.txt and images in here.
mkdir lib client server public
Let’s remove the initial files that meteor created. We don’t need them.
rm tehgosu.*
Create a new HTML file in the client directory. Since we’re using React for our views, this will be the only HTML file we need.
vi client/index.html
In this HTML file we just need a div element, which React will replace once it’s loaded.
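Something like the following should do (the app id is an assumption; Meteor HTML files omit the html tag itself):

```html
<head>
  <title>Teh Gosu</title>
</head>

<body>
  <div id="app"></div>
</body>
```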
Now we need to attach our React views. Create a new CoffeeScript file in the client directory.
vi client/index.coffee
In this CoffeeScript file we load our React views and attach them to the DOM.
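A minimal sketch, assuming an app div as the mount point (React.render is the pre-0.14 API; newer React uses ReactDOM.render):

```coffeescript
# client/index.coffee
Meteor.startup ->
  React.render(
    React.createElement(App),
    document.getElementById('app')
  )
```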
We’re referencing an object called App in our render call, so we need to build it. Create a new CoffeeScript file in the client directory for it.
vi client/app.coffee
Our new app.coffee is going to hold our top level React component code.
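A minimal top-level component using our helper might look like this:

```coffeescript
# client/app.coffee
@App = Component
  displayName: 'App'
  render: ->
    React.DOM.h1 null, 'Teh Gosu!'
```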
We’re keeping it simple. All we’re doing is rendering an h1 tag with the text Teh Gosu!. Notice the @ prefix on the App declaration. Again, this is because of CoffeeScript’s automatic closure: Meteor needs the object attached to this to be accessible outside the file.
At this point we should have a working React + Meteor application. Run the server with:
meteor
Then visit http://localhost:3000
You should see Teh Gosu!.
Your directory structure should be:
client/
  app.coffee
  index.coffee
  index.html
lib/
  component.coffee
server/
public/
This concludes part one of our Meteor + React series. In part two we’ll add some data and the match view.
The only ugly spot with ReactJS is JSX. I can see the appeal of using declarative HTML in templates for readability, but having switched to HAML (and Slim and Jade) long ago, writing HTML feels like a step backwards.
Luckily, by using CoffeeScript for my ReactJS components and eschewing JSX entirely, we can achieve a syntax that’s very similar to HAML / Slim / Jade. If you’re not a fan of CoffeeScript, HAML variants, or significant whitespace, there’s little chance I’ll be able to convince you otherwise. However, if you are a fan of any of those, then it’s worth checking out.
This is the HTML we’ll be converting.
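Here is a representative snippet (a hypothetical user card; the markup and class names are made up for illustration):

```html
<div class="user-card">
  <img src="avatar.png" />
  <h2>Dave</h2>
  <ul class="posts">
    <li>First post</li>
    <li>Second post</li>
  </ul>
</div>
```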
Converting it to JavaScript using ReactJS looks like this. It’s pretty verbose.
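An illustrative user-card component written with the raw React.DOM factories (pre-0.14 style; the UserCard name and markup are made up for illustration):

```javascript
var UserCard = React.createClass({
  render: function() {
    return React.DOM.div({className: 'user-card'},
      React.DOM.img({src: 'avatar.png'}),
      React.DOM.h2(null, 'Dave'),
      React.DOM.ul({className: 'posts'},
        React.DOM.li(null, 'First post'),
        React.DOM.li(null, 'Second post')
      )
    );
  }
});
```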
Here’s the JSX version. Quite an improvement, I think, but it mixes HTML and JavaScript together, which seems a bit messy and will most likely throw off your editor’s syntax highlighting.
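An illustrative user-card component in JSX (names and markup made up for illustration):

```javascript
var UserCard = React.createClass({
  render: function() {
    return (
      <div className="user-card">
        <img src="avatar.png" />
        <h2>Dave</h2>
        <ul className="posts">
          <li>First post</li>
          <li>Second post</li>
        </ul>
      </div>
    );
  }
});
```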
Finally, here’s the CoffeeScript version of the component. At least as succinct as the JSX version, and no mixed syntax or editor issues.
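An illustrative user-card component in plain CoffeeScript (no JSX), using destructured React.DOM factories:

```coffeescript
{div, img, h2, ul, li} = React.DOM

UserCard = React.createClass
  render: ->
    div {className: 'user-card'},
      img {src: 'avatar.png'}
      h2 null, 'Dave'
      ul {className: 'posts'},
        li null, 'First post'
        li null, 'Second post'
```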
For the sake of completeness, here’s a CJSX version (CoffeeScript + JSX). Even more succinct, but again we’re mixing HTML with our CoffeeScript, making it a bit messy and giving your editor issues.
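An illustrative user-card component in CJSX:

```coffeescript
UserCard = React.createClass
  render: ->
    <div className="user-card">
      <img src="avatar.png" />
      <h2>Dave</h2>
      <ul className="posts">
        <li>First post</li>
        <li>Second post</li>
      </ul>
    </div>
```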
If you do opt for the straight CoffeeScript route, then there are a few gotchas to keep in mind. If you’ve been using CoffeeScript for a while they’re pretty obvious, but they can cause grief for newcomers.
CoffeeScript allows you to omit curly braces on hashes. This can cause readability issues for the next person who comes along to read your code.
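For instance (an illustrative example, not from the original listing):

```coffeescript
{div, span} = React.DOM

# With braces, the attribute hash is unambiguous:
div {className: 'user-card'},
  span {className: 'name'}, 'Dave'

# Without braces it still compiles, but attributes and
# children start to blur together:
div className: 'user-card',
  span className: 'name', 'Dave'
```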
CoffeeScript allows you to omit commas between hash assignments and instead use indented new lines. Again, this can cause readability issues, especially when combined with Gotcha #1 above.
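An illustrative example of the two comma styles:

```coffeescript
# With commas (and braces):
props = {className: 'user-card', id: 'main', title: 'Profile'}

# Without commas, on indented new lines — valid, but combined
# with omitted braces it gets harder to scan:
props =
  className: 'user-card'
  id: 'main'
  title: 'Profile'
```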
Ultimately, if you are going to use CoffeeScript for your ReactJS components instead of JSX, then it’s probably a good idea to agree with your team on conventions for when braces and commas are used. My preference has been to use braces for single-line hash assignments, and I’m considering enforcing braces for multi-line attribute assignments with React to better separate them from the next element.
While remote pairing solutions are becoming increasingly popular, as a coder it’s hard to beat the Vim + Tmux combination. It’s simple, fast, and there are no client OS or application dependencies.
In this post we take you through all of the steps to setup an amazing remote pairing environment using an affordable cloud server (VPS). What this will allow you and your team to do:
I highly recommend adding voice to the mix whether it’s Skype, Google Voice, or a SIP provider.
For starters, we’ll need a Linux box in the cloud. For the server we’re going to go with Digital Ocean, since it’s one of the most affordable options at the time of this post. However, the steps are essentially the same with other hosts like Linode and EC2, so definitely check them out too.
Sign up for an account at Digital Ocean and then create a 512MB droplet running Ubuntu 12.04 x32. If you’re not sure about the hostname option, a good choice would be something like pair.yourcompanydomain.com. Make sure you choose a region that’s close to you and your team to minimize latency.
At the end of this tutorial you can shut your droplet down if you aren’t going to use it, and it’ll only end up costing you a few cents.
Once you’ve created the droplet, you should receive an email from Digital Ocean with your new box’s IP address and credentials. For the rest of this post I’ll use a fictional IP. Just substitute the IP you were given as needed.
Open up a terminal if you don’t already have one up and follow along with these commands to set up and install the basics.
# Log into your droplet and enter the provided password when prompted.
ssh root@198.199.xx.x
# Update the system. This will take a little while to complete.
aptitude update
aptitude safe-upgrade
# Install essential build tools, git, tmux, vim, and fail2ban.
aptitude install build-essential git tmux vim fail2ban
# For more details on configuration options for fail2ban start here:
# https://www.digitalocean.com/community/articles/how-to-protect-ssh-with-fail2ban-on-ubuntu-12-04
Next we’ll need to set up user accounts for our pair. You can of course set up as many users as you want and run multiple tmux sessions, but that’s the topic of a future post.
Follow along with these commands, substituting your preferred usernames for “dave” and “dayton”.
# Create the wheel group
groupadd wheel
visudo
# Add the following line to the bottom of the file
%wheel ALL=(ALL) ALL
# Save and quit. (:wq)
# Create our pair users
# You'll want to substitute your own usernames for dave and dayton
adduser dave
adduser dayton
# Add them to the wheel group
usermod -a -G wheel dave
usermod -a -G wheel dayton
Now that we have your users set up with full rights (something you may want to change down the road), we can disable the root account and instead use a pair account.
# Copy your ssh key to the server
scp ~/.ssh/id_rsa.pub dave@198.199.xx.x:
# Login to your account
ssh dave@198.199.xx.x
# Enable ssh access using your rsa key
mkdir .ssh
mv id_rsa.pub .ssh/authorized_keys
# Now you should be able to ssh to the server using your key. Go ahead and try it.
exit
ssh dave@198.199.xx.x
# If you have to enter a password, something went wrong. Try these steps again.
# Edit the sshd config
sudo vi /etc/ssh/sshd_config
# Disable root login
PermitRootLogin no
# Save and quit. (:wq)
# Reload ssh
sudo reload ssh
Now we have a fairly secure server with our pair accounts using password-less access, and it’s time to set up the pairing environment. We’re going to use wemux, which is backed by tmux, to manage the sessions.
# Install wemux
sudo git clone git://github.com/zolrath/wemux.git /usr/local/share/wemux
sudo ln -s /usr/local/share/wemux/wemux /usr/local/bin/wemux
sudo cp /usr/local/share/wemux/wemux.conf.example /usr/local/etc/wemux.conf
# Change the host_list value to your pair usernames
sudo vim /usr/local/etc/wemux.conf
host_list=(dave dayton)
# Save and quit (:wq)
You are now the proud owner of a remote pairing environment.
It’s time to take it for a spin and make sure everything’s copacetic.
# Launch a shared tmux session.
wemux
You should now be running in a shared tmux session. Your pair (any of the other accounts in host_list, dayton in our example) can log in and run the same command to join your session.
You will definitely want to check out the wemux documentation for all of the configuration options.
So I started hunting for a solution.
The official RubyMotion project configuration documentation states that it looks for and uses the first provisioning profile it finds on your computer. In practice this isn’t what happens, at least when you have multiple profiles: even when each of your profiles contains the UID of the device you’re building to, the build can still fail.
The existing solutions simply refer to your provisioning profile explicitly in your Rakefile. That’s OK for solo development (though still annoying), but it’s not a good solution for team development.
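The explicit workaround looks roughly like this (the app name and profile path are illustrative, and the path will differ on every developer’s machine, which is exactly the problem for teams):

```ruby
# Rakefile
Motion::Project::App.setup do |app|
  app.name = 'MyApp'
  # Hypothetical per-machine path:
  app.provisioning_profile =
    '/Users/dave/Library/MobileDevice/Provisioning Profiles/abc123.mobileprovision'
end
```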
See this Stack Overflow discussion.
After a little light reading I discovered that the RubyMotion build will check for a default profile named “iOS Team Provisioning Profile”.
So we simply need to create a new provisioning profile via the iOS provisioning portal named “iOS Team Provisioning Profile” and containing the device(s) we want to be able to run development builds on.
I’m in love with RubyMotion.
Here’s why:
I’ve tackled Vim a few times over the years, but never fully committed to it. Since I believe you need to fully commit to something to do it well (and that includes learning anything), I decided to pick up touch typing first to really see the benefits of Vim.
Last week I achieved my typing goal of 75 words per minute, inspiration courtesy of Steve Yegge. I’ve since upped my goal to 90 wpm, but I think I’m at least quick enough now to really immerse myself into learning vim.
I’m starting with Peepcode’s Smash Into Vim, this Yehuda post, and of course the built-in vimtutor.
Rock on!
I really like this interactive format. You watch a video / screencast covering a topic or set of topics, then you’re required to code up some exercises reviewing the video’s material before you can move on to the next topic.
Overall the format worked really well. The gamification (points, etc.) didn’t make a difference for me, but it might be a motivator for some people. I do think they’re on to something here, and from the looks of it (also from the word “marketplace” in their tagline) they’ll be refining this as a platform for use with other third-party content.
The online editor was actually really well done. It’s not vim or emacs obviously, but it’s not super kludgy like you’d expect, so it works well enough for the small amount of material you’re covering.
I think there’s an opportunity to add in some social features so students can help each other when they get stuck on the harder topics.
As for the content of the “Rails Best Practices” course itself, you can get the same material from http://rails-bestpractices.com/; however, the Code School environment was enjoyable enough.
Given the following conditions for an order placement application:
We have several different ways of accomplishing this with a relational database (document and KV stores are a different story).
Store all address information within the customer and order tables themselves. This is perhaps the easiest solution even though it’s not the most normalized. So you’d have fields like billing_city and shipping_city inside both the customers and the orders tables. The downside is that you’ve created duplicates of the same fields, which uses up a little more storage space (usually not an issue) and requires more work to maintain if you ever need to change their schema (again, a pretty rare occurrence for address fields, which are well-known entities). The upside is that it’s very simple to work with from a code perspective.
Store addresses in their own table and associate them to orders and customers via polymorphic composite keys. In order for this to work you’ll need a composite key of 3 fields: address_type, addressable_type, addressable_id. So the shipping address for a customer would be something like: “Shipping”, “Customer”, 1232, and the billing address for an order could be: “Billing”, “Order”, 2873, etc. The downside is it’s a rather fancy association and will add complexity to your ORM code as you override some methods (since no ORM I know of handles this oddball relationship out of the box). The upside is it’s very normalized, and you can add new address types and new addressable classes on the fly.
Store addresses in their own table, but simplify the association by using many-to-one foreign keys. For this to work we just have keys in the address table for each association. So in this case we have billing_customer_id, shipping_customer_id, billing_order_id, and shipping_order_id. The downside is it’s not very normalized / DRY, and you won’t be able to add new address types or addressable classes on the fly like you could with the polymorphic associations. The upside is very simple (almost all convention-based) ORM code, since you’re dealing with belongs_to style relationships.
Use an Address class to define your address fields, but serialize it to text fields wherever it’s used. So you’re ditching the relational style just for the addresses. For this to work you’d have two text fields in your orders table and your customers table; “billing_address” and “shipping_address”. Then you just serialize your address objects to these fields (yaml, xml, json, or whatever). The upside is the same simplicity as solution #1, but without all of the redundancy in your schema. The downside is the potential complexity of code needed to edit and manage the address information and get proper validations to work.
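A minimal plain-Ruby sketch of this approach (the Address fields and JSON serialization are illustrative; a Rails app would typically do this with serialize on the model):

```ruby
require 'json'

# The Address is a plain value object that gets serialized into a
# text column (orders.billing_address, customers.shipping_address, etc.)
# instead of living in its own table.
class Address
  attr_accessor :street, :city, :state, :zip

  def initialize(attrs = {})
    attrs.each { |k, v| public_send("#{k}=", v) }
  end

  def to_json(*args)
    { street: street, city: city, state: state, zip: zip }.to_json(*args)
  end

  def self.from_json(json)
    new(JSON.parse(json))
  end
end

billing = Address.new('street' => '123 Main St', 'city' => 'Springfield')
stored  = billing.to_json            # what goes into the text column
loaded  = Address.from_json(stored)  # what the model hands back out
```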
My preferred solution is #4. I think it’s worth the added complexity at the view level when using Rails 3 since it’s not too much extra work (although it could be a little cleaner).
It seems that every time a TDD evangelist speaks about non-test-driven / traditional development, they paint a completely exaggerated and unrealistic picture of what it means to not use TDD. It usually goes something like this: “You spend a year creating a specification, then another year coding until you’ve built this monolithic application, then you manually go through all of the functionality you built for another year fixing bugs.” Seriously? I know we’ve all got horror tales, but come on… who in their right mind has ever worked like this, even before all of the test-first buzz back in 2000? This is a fallacy, and even coding in Fortran sounds better than being involved in this fantasy process.
I don’t actually have a problem with TDD the practice, or BDD as a practice, or even EDD (experiment) the practice. What I do have an issue with are the religious zealots that think it solves all of their problems and will criticize anyone who doesn’t share the same beliefs. Really, it doesn’t.
Here’s how Joe the Programmer who’s never bothered with TDD actually performs his work on a daily basis. He thinks about the big picture. Breaks it up into small accessible problems (basic problem solving). Dives right in and starts building out a solution to tackle one of these small problems. Then he manually tests his small solution to make sure it works and provokes more thought on how it fits into the big picture. Once he’s happy with it, he tackles the next small problem. All the time he’s constantly reevaluating the big picture, identifying new problems, speaking with the client, etc.
Sounds a lot more reasonable than “code for a year” doesn’t it? It almost sounds like it would work really well in most scenarios. It doesn’t help sell the latest tickets to your speech on TDD though, because honestly, how much would TDD actually improve his process?
Please, before you tell everyone how amazing the latest test / behaviour / experiment driven development methodology or tool is, watch this presentation from Rich Hickey on Hammock-Driven Development first and let it sink in. http://clojure.blip.tv/file/4457042/
What makes it so clever is that it changes practically nothing about the normal flow of entering a password and stores nothing locally (so it doesn’t matter if you change browsers or computers). You type the same password everywhere, and it instead submits a unique and incredibly strong password for every site. This is done by creating a one-way hash. One-way hashing is also how we encrypt passwords on the backend of websites before storing them in the database, so essentially your original password is getting hashed twice for most websites.
How it works:
You install the PasswordMaker extension for your browser of choice. You go to sign up for a new website service (or change your password for an existing one). You type your typical password, let’s say it’s “b@ng3r5”. You should still pick something fairly strong (mix of characters, numbers, symbols, etc.), but even if you didn’t you’re much better off than most. When you submit the sign up form, the PasswordMaker extension creates a hash using the data you’re entering combined with the domain of the website. In other words it’s creating an encrypted version of your real password. This encrypted password is what’s submitted to the website. It may end up being something like “4#ae2!9ljh2vk*8c$21h7wh%s$lz” for example.
You come back to the site another day and are asked to log in. You type in the same typical password, “b@ng3r5” in this case. When you submit the login form, PasswordMaker performs the hashing operation again, using the same password and the same domain. This means it will come up with exactly the same hash as it did when you signed up. The site’s server sees your encrypted password, i.e. “4#ae2!9ljh2vk*8c$21h7wh%s$lz”, which it then feeds to its own authentication process (usually it performs another one-way hash using your password and a random string it generated when you initially signed up, and compares that against the encrypted value stored against your account).
Benefits:
Some sites still store passwords in clear text. You’re way safer if one of these sites is compromised, since your password was already encrypted before it was sent to the site. Using the same password for everything is way safer now than it was without this encryption. It’s probably still a good idea to rotate passwords, but it’s not as big a deal as it was without the pre-encryption. We’re practically faking single sign-on.
I still think there may be some potential problems that you need to keep in mind.
If a clever hacker compromises a site that stores passwords in clear text, they could still potentially crack your password, since it will stick out like a sore thumb among the rest of the clear-text passwords. Said hacker will know that yours is the only one that’s been encrypted, and may guess that it was encrypted using PasswordMaker. They would then know that your salt (the part of the string used to generate the hash) is the domain of the site, and could run dictionary attacks with the domain until they get the same encrypted result.
Obviously this is pretty unlikely and not worth the effort, since there are so many other passwords requiring no effort, but using a strong password to begin with will make this practically impossible. The only way I see this happening is if someone is specifically targeting you and the added effort is really worth it… so maybe 1 chance in a googol?
I highly recommend you check out this tool. It has extensions / plugins for every major browser.
I also happen to think the thick-client solutions (password managers) are far more practical. A password manager can maintain multiple profiles with unique auto-generated passwords, so if any of the sites that stores your credentials is compromised (say they aren’t encrypting your passwords properly) the problem is contained within the compromised site, since every other site uses a different generated password. This is far easier to manage as both consumer and service provider. By using a password manager you only need to remember 1 password, the one used to unlock your password manager.
Now what would be great is if you could combine the thick client password manager concept with a cloud based redundant solution (no single point of failure) and have full integration for auto-filling fields with all major browsers.
So let’s say you have a master account with a cloud provider like Google or Amazon. They provide you with a semi-thick-client solution, like a browser addon, which can create profiles on demand for a given site and auto-fill username / password / email with your approval. The password is a very strong auto-generated key associated with a resource identifier for the site, so a site could do the standard email / password and include a simple meta tag identifying its unique resource. This would make integration in your application optional at worst, and extremely simple at best.
Xmarks is pretty close. It’s missing a password generator though: http://www.xmarks.com/
From the Tarsnap home page:
Tarsnap is a secure online backup service for BSD, Linux, OS X, Solaris, Cygwin, and can probably be compiled on many other UNIX-like operating systems. The Tarsnap client code provides a flexible and powerful command-line interface which can be used directly or via shell scripts.
Here’s a quick and easy guide to get you up and running backing up all of your MySQL databases.
In your home directory:
mkdir backups && cd backups
Download: http://sourceforge.net/projects/automysqlbackup/ to the backups directory you just created.
Rename it:
mv automysqlbackup.sh.2.5 automysqlbackup.sh
Make it executable:
chmod u+rwx automysqlbackup.sh
Edit it:
nano automysqlbackup.sh
Fill out your database name, password, and the names of the databases you want to backup.
Look for the commented POSTBACKUP line. Add these two lines right below it (replace username with your username).
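The original two lines aren’t shown here; based on the surrounding steps, they plausibly looked something like this (the archive name and backup path are assumptions):

```sh
# Replace username with your own. Archives the fresh dumps with Tarsnap.
POSTBACKUP="tarsnap -c -f mysql-`date +%Y-%m-%d` /home/username/backups"
```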
Follow the instructions on the Tarsnap getting started page: http://www.tarsnap.com/gettingstarted.html
You should have downloaded, paid for, and installed Tarsnap before continuing.
We’re going to use the tarsnap sample config (cache dir and key location).
cd /usr/local/etc
cp tarsnap.conf.sample tarsnap.conf
Now run your backup script:
sudo ./automysqlbackup.sh
Check that your backup was created and stored remotely:
sudo tarsnap --list-archives
Now we’re going to create a cronjob to run the script on a daily basis (or you could move it to your /etc/cron.daily).
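A crontab entry along these lines will do (the 3am schedule and script path are assumptions; adjust to taste):

```sh
# sudo crontab -e, then add:
# m h dom mon dow command
0 3 * * * /home/username/backups/automysqlbackup.sh
```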
Now your databases are being backed up daily on a rotation, keeping weekly and monthly dumps, and storing them both on- and off-site (encrypted).
As a freelancer you have a lot more control over the quality of your work, and that control is essential to enjoying your profession. We inevitably have to deal with some unreasonable clients now and then, who want everything done yesterday and aren’t willing to compensate you appropriately for your time. So how do we deal with it?
For me, with fixed quote work, once I agree to take a project on I simply put compensation out of my mind. If I was unable to correctly scope and quote building a quality application, it’s not going to stop me from doing my best anyways. One of the luxuries of freelancing is the authority to decide that you will do what it takes to deliver something you’re proud of. In my mind it’s just not acceptable to deliver less than my best effort, and it would stress me out not to do so. Using an agile process can also help to limit losses you may incur by misquoting projects.
This means all of my clients will get a great deal, even those I may have misquoted. I’ll sometimes take a loss, but for me it’s not worth the stress of building something I’m not proud of. At the core, it’s really that simple. I get to feel good about my work, and I’m still able to put food on the table even when I take a loss. I’ll learn new technologies and techniques which may eventually balance out the occasional losses.
So to take this a little further, whenever I’m building software, there’s usually quite a few moving parts behind the scenes that the client is blissfully unaware of. That doesn’t mean they’re unimportant. In fact when I take on the work I’m essentially saying, yes I have the expertise to do this, and yes I will take care of all of the details including the more complex hidden challenges.
For example, let’s say I’m building a storefront web application. I’m extremely focused on conversion and put together a great payment flow to reduce the friction associated with purchasing online. I’ve implemented SSL properly wherever customer information is being submitted. I’m storing their details using opt-out rather than opt-in. The customer is always being sent to the most relevant next step, and in general I’m presenting them with as little data entry fields as possible. The customer feels safe and they are able to make their payment quickly. Returning customers are delighted that it’s even faster the next time they make a purchase.
So far this sounds like a pretty great job. The client is in total agreement. Their sales are up, CSR calls are down, and I feel like my work is appreciated.
What the client is unaware of is how I’m storing credit card information and passwords. I could be storing them in clear text or using weak encryption (salt-less). Let’s say that I also enforced very few if any restrictions when customers choose a password, since that would increase payment friction. So we have weak or unencrypted passwords, they’re easy to crack via dictionary attacks; like “joe”, “password”, “12345”, plus they’re all associated with unencrypted credit card information.
Now the client may never care so long as nothing bad happens. They’re certainly not going to pay me to implement a far more secure solution if I didn’t make room for it in my initial quote. Good enough? Well sort of… most of the time this probably goes unnoticed. Meaning the customer is happy and they didn’t have to pay me for the extra time to nail down the security (however, for this example at least, PCI enforcement is going to change that). So what’s the problem? The problem is I’m not happy. I know that some very critical mistakes have been made. Therefore I would never deliver this application until I corrected these mistakes. This isn’t actually a real world scenario, but it serves as a decent enough example of hidden complexity.
There are many programmers like myself, whose personality and commitment to a quality solution will simply not let them stop at good enough. Even if the client is happy and completely unaware of the insecurity of their system and the liability associated with it, these programmers will be compelled to finish the job, even if they are over budget.
There are times though where a passionate and committed programmer can blow out a project’s budget without the customer’s best interest at heart. I’ve heard a lot of criticism in programming circles about this type of programmer. However, I’ve noticed that almost always these criticisms come from either programmers with very limited experience, or MBA types who’ve never written a line of code in their life. There is absolutely no job satisfaction in delivering inferior products. Period. The passionate and committed programmer may seem to make some things a little more complicated than they need to be, but they will learn from it, and in the end you will end up with a better and more efficient product.
As a self-professed programming perfectionist of many years, my opinion is that there are many applications out there that I’m simply not suited to work on. I accept that this attention to detail can be a strength in some scenarios and a weakness in others. There are probably thousands of programmers that can do the same work cheaper, with fewer questions, and produce an equally satisfactory product in some scenarios. In fact most of them probably don’t even need to be programmers, in that they have no real interest in programming. Many projects are simply gluing together existing libraries and frameworks and require very little creative problem solving.
I would argue that there’s nothing wrong with either of these programmer personalities. I think the market has room for both of us. There are many prototyping and proof of concept projects out there with extremely frugal owners / managers that simply aren’t willing to invest either the time or money into a great solution and are happy enough with a mess of libraries glued together behind a pretty user interface. In fact, unless the stakeholder is willing to invest their own time and effort in the project, then they’re unlikely to be happy with any solution they get, and would just frustrate the more passionate and committed developers anyways.
In the end, I’m happy to be writing quality software that will live for a while and need to be supported and improved. I enjoy refactoring my code. There’s something extremely satisfying about it.
As a client, if the project is your baby, and you’re very committed to it, then you will want an equal level of commitment from whoever works on it.