Scott Hanselman has a good rant today about how software developers increasingly use their own customers to do quality-assurance (QA) testing:
Technology companies are outsourcing QA to the customer and we're doing it using frequent updates as an excuse.
This statement isn't specific to Apple, Google, Microsoft or any one organization. It's specific to ALL organizations. The App Store makes it easy to update apps. Websites are even worse. How often have you been told "clear your cache," which is the 2015 equivalent of "did you turn it on and off again?"
I see folks misusing Scrum and using it as an excuse to be sloppy. They'll add lots of telemetry and use it as an excuse to avoid testing. The excitement and momentum around Unit Testing in the early 2000s has largely taken a back seat to renewed enthusiasm around Continuous Deployment.
But it's not just the fault of technology organizations, is it? It's also our fault - the users. We want it now and we like it beta.
It would be nice, of course, if shops made sure show-stopping bugs didn't make it to production. But otherwise I'm OK with the balance of new features and frequent updates. Things evolve.
More things I haven't read yet:
And a customer technician spent 90 minutes, over two days' worth of conference calls, insisting that something obviously his responsibility was not, in fact, his responsibility, until a network tech from his own company said it was.
Canadian Julia Cordray created an app described as a "Yelp for people," and apparently failed to predict the future:
Except of course it took the rest of the world about two seconds to figure out that filtering the world to only include those with positive feelings was not exactly realistic, and all the app was likely to do was invite an endless stream of abuse, bullying, and stalking.
It wasn't long before people were posting Cordray's personal details online – seemingly culled from the Whois information for domain names she owns. Just to highlight how out of control these things can get, one heavily quoted tweet providing her phone number and home address actually provided the wrong information.
Meanwhile, the company's website at ForThePeeple.com has fallen over.
We'll have this app, of course. I'm interested to see how U.S. and U.K. libel laws deal with it. Or not.
Update: Just looking at their Facebook page, I can't help but wonder if this is just a parody. But no, these women are delusional, and their app is not a new idea—just one that no one before them has ever had the immorality to produce.
Sadly, I think it will be a success.
I noted earlier that this code base I'm working with assumes all file stores look like a disk-based file system. This has forced me to do something totally ugly.
All requests for files get prepended with a hard-coded string somewhere in the base classes—i.e., the crap I didn't write. So when I want to use the Azure storage container "myfiles", some (but not all) requests for files will use ~/App_Data/files/myfiles (or whatever is configured) as the container name. Therefore, the Azure provider has to sniff every incoming request and remove ~/App_Data/files/ or the calls fail.
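The workaround boils down to normalizing every incoming path before treating the remainder as a container name. Here's a minimal sketch of that idea in Python (the actual provider is C#, and the function name here is made up for illustration):

```python
# Strip the base classes' hard-coded virtual-path prefix, if present,
# before treating the remainder as an Azure container/blob name.
# Illustrative sketch only; not the actual C# provider code.

VIRTUAL_PREFIX = "~/App_Data/files/"

def normalize_request(path: str) -> str:
    """Remove the hard-coded prefix; pass through paths that lack it."""
    if path.startswith(VIRTUAL_PREFIX):
        return path[len(VIRTUAL_PREFIX):]
    return path  # some (but not all) requests arrive without the prefix
```

The annoying part is that "some (but not all)" clause: because only certain code paths add the prefix, the provider can't just slice a fixed number of characters off; it has to check first.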
Don't even get me started on how the code assumes HttpContext.Current will exist. That has made unit testing a whole new brand of FFS.
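The HttpContext.Current problem is a classic ambient-singleton issue: code that reaches for a global web context can only run inside a web server. The standard fix, sketched here in Python with hypothetical names (the real fix in .NET would involve something like an injectable context abstraction), is to take the context through a parameter so tests can supply a fake:

```python
# General pattern for taming an ambient singleton like HttpContext.Current:
# depend on an injectable accessor instead of the global, so unit tests
# can hand in a fake. All names here are hypothetical.

class RequestContext:
    def __init__(self, user: str):
        self.user = user

def current_context() -> RequestContext:
    # In the real app this would read the framework's ambient context;
    # outside a web request there is nothing to read.
    raise RuntimeError("no ambient context outside a web request")

def greet(context_source=current_context) -> str:
    # Code under test accepts the accessor as a parameter...
    return f"Hello, {context_source().user}"

# ...so a test can inject a fake without spinning up a web server:
fake = lambda: RequestContext(user="alice")
assert greet(fake) == "Hello, alice"
```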
I've been playing around with BlogEngine.NET, and I've hit a snag making it work with Microsoft Azure.
BlogEngine.NET was built to store files inside the application's own file system. So if you install the engine in, say, c:\inetpub\wwwroot\blogEngine, by default the files will be in ~/App_Data/files, which maps to c:\inetpub\wwwroot\blogEngine\App_Data\files. All of the file-handling code, even the abstractions, assume that your files will have some kind of file name that looks like that.
You must never store files locally in an Azure cloud service, because at any moment your virtual machine could blow up and be reconstituted from its image. Boom. There go your files.
You really want to use Azure storage. In its purest form, Azure storage organizes files in containers. A container can't have sub-containers. You access a container by its name only; paths are meaningless.
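To make the mismatch concrete, here's a toy in-memory model of that flat namespace (a sketch, not the Azure SDK): a container maps blob names to contents, and a slash in a blob name is just a character in the key, not a directory.

```python
# Toy model of Azure-style flat storage. Containers are addressed by
# name alone; blob names may contain slashes, but there is no real
# directory hierarchy underneath. Illustrative only.

class Container:
    def __init__(self, name: str):
        self.name = name                    # no parent, no sub-containers
        self._blobs: dict[str, bytes] = {}  # flat name -> contents

    def put(self, blob_name: str, data: bytes) -> None:
        self._blobs[blob_name] = data

    def get(self, blob_name: str) -> bytes:
        return self._blobs[blob_name]

files = Container("myfiles")
files.put("2015/10/photo.jpg", b"...")  # looks like a path; it's one flat key
```

Code written against a disk-based file system expects `2015` and `10` to be real folders it can create and enumerate; here they're just eleven characters of a single blob name.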
But because BlogEngine.NET assumes that all file stores use path names (which even works for the database file store plug-in, for reasons I don't want to go into), creating an Azure Storage provider for this thing has been really annoying. I've even had to modify some of the core files because I discovered that it applied a default path to any file request no matter what storage provider you used.
Don't even get me started on the bit of BlogEngine.NET's architecture that pulls all files around through the UI instead of allowing them to live in a CDN...
Daily WTF editor Remy Porter has a (rare) rant up today about software development processes. I'd like all my project management friends to read it:
[L]et’s just say the actual truth: Process is important, and it doesn’t have to suck. And let’s add onto that: process is never a cure for a problem, but it might be a treatment.
Let’s be honest, managing developers is like herding cats, and you need to point them all in the same direction by giving them some sort of guidance and organizing principle. Processes are a way to scale up an organization, a way to build towards consistent results, and a way to simplify the daily job of your developers. With that in mind, I want to talk about development processes and how organizations can make process work for them, with the following guidelines.
It's a rant, to be sure, but a good one.
In the last 48 hours, I've upgraded my laptop and Surface to Office 2016 and my phone to Android 5.0 and 5.1. Apparently T-Mobile wants to make sure the Lollipop update works before giving you all the bug fixes, which seems strange to me.
All four update events went swimmingly, except that one of my Outlook add-ins doesn't work anymore. Pity. I mean, it's not like Outlook 2016 was in previews for six months or anything...
I just Googled a problem I'm having setting up a continuous-integration build, because I've had this problem before and wanted to review how I solved it. Google took me to my own blog on the second hit. (The first hit was the entry I cross-posted on my old employer's blog.)
Why even bother with my own memory?
I'm still doing some R&D with BlogEngine.NET, and I keep finding strange behaviors. This is, of course, part of the fun of open-source software: with many contributors, you get many coding styles. You also don't get a lot of consistency without a single over-mind at the top.
My latest head-scratcher was about how labels work. I won't go into too many details, except to say that re-saving a code file with no changes in it shouldn't change the behavior of the code file. I'm still puzzling that out.
In any event, it's possible that I may have a stable-enough build with all of the features I want ready in a couple of weeks.
Of course, there's this little matter of 4,941 posts to migrate... That should be fun.
Because Microsoft has deprecated 2011-era database servers, my weather demo, Weather Now, needed a new database. And now it has one.
Migrating all 8 million records (7.2 million places included) took about 36 hours on an Azure VM. Since I migrated entirely within the U.S. East data center, there were no data transfer charges, but having a couple of VMs running for the weekend probably will cost me a few dollars more this month.
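The post doesn't describe the copy mechanism, but a migration like this usually takes the same shape everywhere: page through the source in batches and bulk-write each batch to the target. A generic sketch, with hypothetical `read_batch`/`write_batch` helpers standing in for the real database calls:

```python
# Generic batched-migration loop. read_batch and write_batch are
# hypothetical stand-ins for real database access (e.g. keyset or
# OFFSET paging on the source, bulk inserts on the target).

def migrate(read_batch, write_batch, batch_size=10_000):
    """Copy rows in batches; returns the total row count moved."""
    total = 0
    offset = 0
    while True:
        rows = read_batch(offset, batch_size)
        if not rows:
            return total          # source exhausted
        write_batch(rows)
        total += len(rows)
        offset += len(rows)
```

Running inside a VM in the same data center as both databases, as described above, keeps every batch on Azure's internal network, which is why the only data-transfer cost was zero.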
While I was at it, I upgraded the app to the latest Azure and Inner Drive packages, which mainly just fixed minor bugs.
The actual deployment of the updated code was boring, as it should be.