Posts archived in IT

There’s been a trend for many years now to serve up “mobile optimised” sites, usually by redirecting users to a different domain like “http://m.example.com”.

Please, for crying out loud, I beg of you – STOP this practice. Not only is it bad for your site, it’s bad for your users too.

It screws up your site’s search engine ranking

All SEO folks will tell you to ensure your site has one, and only one, domain. That is: pick either “www.example.com” or “example.com”, stick with it, and redirect all your traffic onto that one domain. The same goes for mobile sites – push users off to some other domain and any links they share will point at the mobile version instead of your main site.

You’re probably going to screw it up in some subtle way anyhow

You’ll test like crazy, and hey – it works. For you. On that version of the mobile phone software, with that screen size, on that internet connection.

Change some of those variables and suddenly the page layout is screwed up, because a carrier decided to rewrite your site to be ‘mobile friendly’.

A new device comes out with a larger screen – say, a tablet like the iPad. To your site’s code it looks like a mobile device, but to the poor sucker using it, your site looks terrible and is missing features and functionality.

And if you don’t get redirection right, you’ll be completely stuffing up the users who follow a ‘desktop’ link and instead get sent to the front page of your mobile site – or, at worst, a 404 page.

Links from the mobile site are useless for the desktop

Someone on a phone or tablet emails a link like, say, http://en.m.wikipedia.org/wiki/Coffee – ah, crap, it’s the mobile version, and it’s missing a bunch of stuff. How do I get the proper page? I have no idea – there’s probably a link somewhere, though.

Mobile probably isn’t what you meant

You probably actually mean ‘small screen sizes’ or ‘low bandwidth’.

Folks who are on modern smartphones or tablets actually have a pretty fast internet connection (either 3G or Wi-Fi). They also have browsers as capable as your desktop’s, or nearly so anyway. All up, redirecting is a pretty good way to waste people’s time and cause frustration.

Just serve them the same content

(Disclaimer: I’m not a UI designer or developer)

There are techniques like CSS media queries which let you serve up the one page and have it re-arrange or completely remove elements based on screen size. The good thing is that this means new devices with oddly shaped screens will work automatically. See Scott Hanselman’s site for an example of this done right – make your window wider or narrower, and elements turn on and off.
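As a rough sketch (the class names and the breakpoint are made up for illustration, not taken from any particular site), a media query looks something like this:

 /* On screens narrower than 480px, hide the sidebar and let the
    main content use the full width of the page. */
 @media screen and (max-width: 480px) {
   .sidebar { display: none; }
   .content { width: 100%; }
 }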

I realise there are exceptions to this – if you’re mainly aiming at users in markets where smartphone penetration is low and 3G is non-existent, then a low-bandwidth WAP site is a necessity.
For most sites, though, it isn’t.

 

</rant>

 

Edit 10AM Friday 27th July: Server-side browser detection isn’t a solution 

In case I wasn’t clear – server-side browser detection isn’t a solution (IMO). WordPress “mobile” themes, for example, are ugly on tablets and other large-screen “mobile” devices.

It should be left to the user-agent (browser) to determine how to lay out the page.  Trying to detect this on the server will eventually fail – either because technology has moved on in ways you didn’t cater for, or you didn’t test it on all the devices in the world.

This leaves users with the need to either switch off the theme, or put up with a design that looks terrible for their device.

There’s a huge stable of sites that do the things I’ve described here. A few examples I can think of off the top of my head:

  • Gawker Media’s sites (eg IO9) – redirects to a mobile domain.
  • Delimiter – uses WordPress mobile themes, which are terrible on tablets.
  • Sydney Morning Herald – redirects to a mobile domain.
  • Wikipedia – redirects to a mobile domain.

It’s been years since people started pointing out how Facebook and other sites encourage bad security habits. Yet Facebook continues to encourage handing over really private credentials that should never be shared.

Obviously it’s working – two of the folks I’m friends with on Facebook have used it recently.

failfailfail

This little gem I found in the sidebar of my profile.

Everywhere I go on Facebook there’s a prompt asking me to hand over passwords for email and other services.


People have their online identities (not to mention anything important, like internet banking) associated with these email addresses – why on earth would anyone willingly hand those details over?

Oh, that’s right, there’s a little blue padlock and a nice reassuring “Facebook won’t save your password” message. We should all rest easy knowing the bright sparks at Facebook have our best interests at heart. Because no-one has ever accidentally turned on data logging. Especially not anyone working for a big, reputable, trustworthy, totally-not-evil company.

Carry on then.

For the last three years (give or take a month or so) I’ve been working with the great folks at Massive.
The time has come for me to move on and find other opportunities.

If you know of any interesting .NET Jobs opening up in Sydney (or Melbourne), please let me know.
I’ll be available to start from about the 16th of November. Update: I’m no longer looking for work, thanks.

Things I’m currently interested in:

  • Playing with cool gadgets: Android & Windows Phone 7
  • Multithreading & Parallel Processing in .NET 3.5 and 4.0. (ParallelFx, Rx, etc)
  • IPTV / Streaming Media Delivery
  • ASP.NET MVC

Things I’ve been working with most recently:

  • ASP.NET Web Apps (.NET 3.5/WebForms)
  • RESTful WCF Services
  • Building Custom Serialisation/Deserialisation Adaptors & APIs for third-party services.
  • Build Management

To all the folks at Massive: it’s been a great ride, working with some of the smartest folks I know. Thanks for all the fish.

(Originally Posted by me on AuTechHeads, April 27th, 2010, and preserved here)

Why Cloud Computing Services have huge stumbling blocks to their adoption for the projects I work on.

One of the projects I’m working on has a need to switch to a service bus / message queue system.

We’re after something that’s fairly light-weight. Ideally something we can package into our existing distribution and manage the configuration as part of our existing application’s configuration.
We also need some level of reliability – we’re not expecting clients to go yanking servers out of the rack, but if we send a message, we want to know that it’s going to be delivered.

A few people have suggested Cloud based queue systems as a potential solution. Amazon SQS, Azure AppFabric, and Linxter have all been mentioned a few times.

Unfortunately, no cloud solution is going to pass even a preliminary inspection.

When you use a cloud based architecture, you get to offload some of the responsibility of ensuring the solution is up and running. But at the same time you take a big dependency on the security and stability of not just that cloud provider’s infrastructure – but the entire route from your client to the provider.

If some script kiddie decides that today’s the day he’s going to packet Facebook, and your cloud provider happens to be on the wrong side of a congested router, well – your solution had better work fine without its message queue.

Any time you add another provider, it complicates the solution – even if configuring the service is as simple as adding a DLL and two lines of code.

The complexity comes not from configuration files or deployment instructions, but from numerous other peripheral things. Just getting someone to hand over a corporate credit card to pay for the damn thing – even if it IS only pocket change – is in and of itself a huge ordeal. If it’s an ongoing cost, it’s often nigh on impossible.
Let’s not even get into what you need to do to get firewall ports opened so you can communicate out, the need to comply with the EU Data Protection Directive, or any concerns a client has about what data is passed over the internet.

Unless there’s a particularly good fit for a client, I doubt we’ll be looking much at cloud based services in the near future.

(Originally posted by me on AuTechHeads, April 26th 2010, and preserved here)

Why a fancy resume is useless if you have no enthusiasm for technology.

One of the things rarely discussed in guides on how to get a job in IT is enthusiasm for technology. I am of the opinion that first and foremost, you need to be a technology geek if you want to work in IT.

Don’t confuse being a technology geek with being the stereotypical pimply-faced, pale skinned, greasy haired dweeb. I talk simply of people who have an innate understanding of some area of IT. The kind of person that hears about some new thing and gets a little (or lot) excited.

Personally, I can’t understand why anyone would choose a job in IT if they didn’t like tech.
It would be like me choosing a career in marketing or interior design – areas in which I have zero interest.

Yet when interviewing candidates for software development positions, I find far too many people like this. They express no particular interest in any part of software development, or in technology in general.
My only conclusion is that they are in IT because it’s a reasonably well-paying job.

The not-so-funny thing is that these people have fantastic resumes, with years of experience across all sorts of technologies. Often they’ve worked for big-name companies.

Perhaps these people work well in development teams where there’s a large corporate structure that treats developers like cogs. But for smaller development shops – you really need your wits about you.
Situations inevitably arise where nobody on the team has experience with some particular technology.

A technology geek will be able to solve the problem (either on their own, or with the assistance of the team), drawing on the experience they’ve gained reading about or experimenting with related technology.

That, in my opinion, is why you must be a technology geek to work in IT.

So, readers: What is your opinion – is there a role for non-tech-geek folks in IT?

Atlassian already have some documentation on how to integrate IIS and JIRA.

Unfortunately it requires installing some ISAPI components, and a whole lot of fiddling around.

I wanted to see if I could get Application Request Routing to do the same job. Turns out, yes, you can – here’s how.

1. Make sure JIRA is installed and working on your server.

Let’s say that it’s at http://example.com:8080/

I want to access JIRA via: http://jira.example.com/ – but IIS7 is already using port 80 on that server.

2. Alter your conf/server.xml file in JIRA.

Find the /Server/Service/Connector element, and add two attributes:

proxyName="jira.example.com"

proxyPort="80"

The Connector element should now look something like:

 <Connector port="8080" enableLookups="false" proxyName="jira.example.com" proxyPort="80" />

3. Restart the JIRA Service.

4. Install, if you haven’t already, Application Request Routing 2.0, along with URL Rewrite 2.0.

5. Enable Proxying on ARR:

  • From the IIS7 Console, click on {ServerName}.
  • Open Application Request Routing.
  • From the Actions pane on the right-hand side, select ‘Server Proxy Settings’
  • Check ‘Enable Proxy’
  • Set HTTP Version to ‘HTTP/1.1’

6. Add a new site ‘jira.example.com’, with bindings for http://jira.example.com
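For reference, the resulting site entry in applicationHost.config ends up looking something like this (a sketch – the site id and physical path here are placeholders):

 <site name="jira.example.com" id="2">
  <application path="/">
   <virtualDirectory path="/" physicalPath="C:\inetpub\jira.example.com" />
  </application>
  <bindings>
   <binding protocol="http" bindingInformation="*:80:jira.example.com" />
  </bindings>
 </site>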

7. Add a new URL Rewrite Rule for jira.example.com

  • From the IIS7 Console, click on jira.example.com
  • Open URL Rewrite
  • From the Actions pane on the right-hand side, select ‘Add Rules’
  • Choose ‘Blank Rule’
  • Set Match Rule to:
      • Requested URL: Matches the Pattern
      • Using: Regular Expressions
      • Pattern: (.*)
      • Ignore Case: checked
  • Set Action to:
      • Action Type: Rewrite
      • Rewrite URL: http://example.com:8080/{R:1}
      • Append query string: checked
      • Stop processing of subsequent rules: checked

8. Now, with any luck – you should be able to access JIRA via http://jira.example.com  - if not, something isn’t set correctly.
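The rule the GUI writes into the site’s web.config should end up looking roughly like this (a sketch – the rule name is one I’ve made up, and the exact markup may differ slightly):

 <system.webServer>
  <rewrite>
   <rules>
    <!-- Proxy everything on jira.example.com through to the Tomcat instance on port 8080 -->
    <rule name="Proxy to JIRA" stopProcessing="true">
     <match url="(.*)" ignoreCase="true" />
     <action type="Rewrite" url="http://example.com:8080/{R:1}" appendQueryString="true" />
    </rule>
   </rules>
  </rewrite>
 </system.webServer>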

Setting up Fisheye is almost as simple.

Say Fisheye is set up on http://example.com:8060 and I want to access it via http://fisheye.example.com

Repeat steps 4-8 above, substituting ‘fisheye’ for ‘jira’, and then verify you can access Fisheye from http://fisheye.example.com

If you’re also doing .NET Development, or have .cs/.aspx/.asmx files in your repository, then you’ll also need to do the following.

Edit the web.config for fisheye.example.com

Add the following just before </system.webServer>:

<handlers>
 <remove name="WebServiceHandlerFactory-ISAPI-2.0-64" />
 <remove name="WebServiceHandlerFactory-ISAPI-2.0" />
 <remove name="PageHandlerFactory-ISAPI-2.0" />
 <remove name="PageHandlerFactory-ISAPI-2.0-64" />
 <remove name="PageHandlerFactory-Integrated" />
 <remove name="WebServiceHandlerFactory-Integrated" />
 <remove name="SimpleHandlerFactory-ISAPI-2.0-64" />
 <remove name="SimpleHandlerFactory-ISAPI-2.0" />
 <remove name="SimpleHandlerFactory-Integrated" />
 <remove name="CGI-exe" />
 <remove name="ISAPI-dll" />
</handlers>
<staticContent>
 <mimeMap fileExtension=".cs" mimeType="text/plain" />
</staticContent>
<security>
 <requestFiltering>
  <fileExtensions>
   <remove fileExtension=".config" />
   <remove fileExtension=".csproj" />
   <remove fileExtension=".cs" />
   <add fileExtension=".cs" allowed="true" />
   <add fileExtension=".csproj" allowed="true" />
   <add fileExtension=".config" allowed="true" />
  </fileExtensions>
 </requestFiltering>
</security>

If any additional file types in your Fisheye repository generate 404 errors when navigating, add them to the fileExtensions section – first as a ‘remove’, and then an ‘add’ with allowed="true". You’ll probably also need to add a mimeMap entry for them.

Thanks to @OhCrap for the pointers on enabling .cs serving with IIS7.


My Android

On Friday my Google Dev Phone 1 (aka HTC Dream / T-Mobile G1) arrived.

It’s about AUD$800 delivered to Australia (USD$399 + USD$50 Shipping + USD$25 Dev Signup). Google recently discovered that Australia wasn’t on Mars, and dropped the shipping cost from USD$150 to USD$50 or so.

Here are my notes so far:

Device
- Slide-out QWERTY keyboard – works well; the layout takes a little getting used to, but it’s good enough for a reasonable amount of text entry.

- Trackball – seems a little gimmicky, but for some apps it’s useful.

- Construction – Feels reasonably solid; the back cover might be a problem later. The only fault is that the battery apparently comes loose from its position on some phones (James has this issue). Easily fixed with paper shims, but it’s not the best experience.

- Screen – It’s fairly bright but, like every other LCD out there, difficult to use in direct sunlight. Also, this isn’t a multi-touch device (the hardware supports it – it’s a software/patent issue AFAIK), so some things like zooming don’t work like they do on the iPhone.

- Sound – I haven’t tested this much; the speakers are the usual tinny things used in anything smaller than a laptop. The biggest disappointment is the lack of a 3.5mm headphone jack. Audio runs (like on other HTC devices) through an adaptor plugged into the single mini-USB port. The same port is used for charging too, so you’ll need a double adaptor (see eBay) if you want to do both at once. The quality seems decent enough for a mobile.

Android Software

- Gmail or death. There is no option to use the device WITHOUT a Gmail account. Don’t like it – tough luck. Until someone implements full Exchange support (including remote wipe), I’d avoid using it for business purposes.

- Over-the-air everything. From Installing/updating apps, to checking email and syncing contacts – it all happens over whatever your internet connection is. There is currently no software to install on your PC.

- Multitasking ftw. Every app runs in its own VM, and when you switch tasks the state is suspended and (potentially) saved to storage. This keeps your foreground app running nice and fast. Apps can still run tasks in the background (eg for IM, PUSH Email, etc) – so you can still get notifications. The phone will keep multiple tasks in memory, in the suspended state – but if the phone needs room it’ll dump the least recently used apps to storage.

- Notifications – Background tasks notify through a central Notifications panel – this is a pull-down from almost anywhere on the phone that lets you quickly switch back to other apps.

Market.
- VERY easy to use and install waaaay too much stuff at once.

- I love that you can see what permissions apps are requesting when you go to install them.

- There’s a built in comments/rating system – when you select an app from the Market, it shows this commentary.

- Completely over the air – browsing, downloading, installing and upgrading apps all happen on the phone itself.

- App coverage is decent for a platform with very little market penetration that’s mostly aimed at geeks so far. My favourite app is “Zombie, Run!”, which harnesses Google Maps and GPS integration to overlay where zombies are around you. Said zombies shamble towards you at one of three speeds.

Contact / Data Sync:
- Uses Gmail Contacts as the sync backend. Because there are no PC sync functions, you can’t sync with Outlook.

- You can import from CSV, but this is very error-prone (at least for me and James), and ends up with orphaned, ignored, or just plain empty contacts.

- Won’t connect over Bluetooth with an N95 to transfer contacts (it attempts to connect and fails) – so you can’t send all the contacts across as business cards.

- Overall, contact management is very disappointing and not well thought out (sure, adding contacts one at a time is fine – but it’s time-consuming).

Multiple Account Support:
- Like every other smartphone out there – it only supports one account in any sane manner. You CAN set up other accounts via IMAP, but this isn’t the best experience (no PUSH, for instance).

Overall summary so far:
Good for gadget freaks and devs looking to launch on the Android platform.

Android is very obviously missing some major pieces of functionality though. I can live without Exchange email, but I can’t live without the Sync’ed contacts. (Exporting back and forth is a PITA). Symbian/Nokia got this right with the Exchange app which, while slow, can manage to sync all the contacts in the Address book with Exchange and vice versa.

The Market functionality is neat, and because apps can run in the background and have tighter integration with the hardware (unlike on the iPhone), it has a lot of potential.

Update:
Forgot to mention – Gmail on the Android is done via PUSH – so you get notification of new email as it arrives – just like Exchange with Outlook/iPhone/Blackberry.

So, today I discovered an issue which came from doing a sequence of calls something a little like this:

- Execute dc.sp_Proc1
- If some condition exists, execute dc.sp_Proc2, and then Execute dc.sp_Proc1 again with the same parameters.
- Insert some records into the database.

The problem is, the first time you execute the sproc, the DataContext caches the result. That would be okay in most instances, but in mine I’m actually after the updated result.

A quick bit of googling revealed this post by Chris Rock. His approach of “turn off object tracking” works OK if you don’t need to insert records on that DataContext.

My quick, dirty, and (possibly) really wrong approach was just to spin up a new Data Context, and re-execute that sproc.
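In code, that workaround looks something like this (a sketch – “DatabaseContext” is a stand-in name for the generated data context, and someParameter is a placeholder for whatever arguments sp_Proc1 takes):

 // The original DataContext has already cached the first sp_Proc1 result set,
 // so spin up a throwaway context just to re-run the sproc and get fresh data.
 using (var freshContext = new DatabaseContext())
 {
     var refreshedResults = freshContext.sp_Proc1(someParameter).ToList();
     // ...use refreshedResults here, then carry on inserting via the original context.
 }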

I promise I’ll find a more sane way of fixing this :)

This is the first in (hopefully) a series of quick things I’ve picked up whilst tackling the previously mentioned project.

So, I have a table something like this:

CREATE TABLE [dbo].[Product](
    [ProductID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](100) NOT NULL,
    [Price] [int] NOT NULL,
    [LastSaveTimestamp] [datetime] NOT NULL CONSTRAINT [DF_Product_SaveTimestamp] DEFAULT (getutcdate())
) ON [PRIMARY]

The key here is the default value on the column: LastSaveTimestamp.

If I then try to, say, insert a new record into this table, for example using this code:

  DatabaseContext dc = new DatabaseContext();
  Product product = new Product();
  product.Name = "test product";
  product.Price = 50;
  dc.Products.InsertOnSubmit(product);
  dc.SubmitChanges(System.Data.Linq.ConflictMode.FailOnFirstConflict);

Then I’d get an exception like:

System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

The fix is actually really simple – in the table designer / DBML, you need to tell it that the column’s value is auto-generated. Unfortunately this doesn’t seem to be detected automatically. It’s one of a few ‘just plain weird’ situations.
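For the curious, the end result is equivalent to hand-writing the mapping something like this (a sketch – the LINQ to SQL designer actually generates backing fields and change-notification plumbing rather than auto-properties):

 using System;
 using System.Data.Linq.Mapping;

 [Table(Name = "dbo.Product")]
 public partial class Product
 {
     [Column(IsPrimaryKey = true, IsDbGenerated = true)]
     public int ProductID { get; set; }

     [Column(DbType = "NVarChar(100) NOT NULL")]
     public string Name { get; set; }

     [Column]
     public int Price { get; set; }

     // IsDbGenerated + AutoSync tell LINQ to SQL not to send a value on INSERT,
     // and to read the database-generated default back after SubmitChanges().
     [Column(DbType = "DateTime NOT NULL", IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
     public DateTime LastSaveTimestamp { get; set; }
 }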

AzamSharp has the fix details, with a handy-dandy screenshot over on his blog.

I’m looking for a few folks to group together to get a dedicated Windows server.

Server Details:

  • CPU: Intel Xeon 3060 (Dual Core)
  • RAM: 2GB
  • HDD: 2x 250GB (not RAID)
  • Network Port: 100Mbit
  • Bandwidth Quota: 2500GB per month
  • OS: Windows Server 2003 R2 (x32)
  • Other Software: .NET 1.1, plus .NET 2.0 to .NET 3.5. MS SQL Server 2005 (Express).

The server would be hosted by The Planet (unless you know of a better place?) in the US.

Because there are 10 IPs allocated, the way I thought it would be set up is: one IP for any shared web hosting, etc, plus remote access in; and one IP dedicated to a Linux VM server (for any Apache + PHP + MySQL things you want to run).

Then the rest of the IPs would be split up between the folks sharing the server – for any other things you wanted to do (FTP server, etc).

Bandwidth, disk space and RAM wouldn’t be strictly controlled, but if the server’s performance is suffering, we’re out of disk space, or we’ve got an over-usage charge, then those who’re using far more than their share will need to pay up (for bandwidth) or reduce their usage (for disk space and RAM).

You’d also be expected to know how to manage IIS properly – and, if you’re hosting stuff on the Linux VM, Apache too. Oh, and to have the common sense not to stuff around with other people’s settings without their OK.

I shouldn’t need to mention this, but you’ll also be responsible for ensuring that you’re not doing anything illegal under US or Australian laws. So – no torrent downloads, thanks.

Total cost per month for the server setup above is USD$230/month. I’m prepared to pay about USD$80/month, so I’m looking for 3-4 people willing to split about USD$150/month.

So, for about $10/month you’d get an allocation of about 100GB of bandwidth quota and 20GB of disk space (10GB per drive). IPs would be divvied up based on % of contribution after I’ve got enough people on board, but you’d get at least one IP.

So, if you’re interested – add me on MSN – will@hughesfamily.net.au and let me know.

Update: I now have two other people who’re on board, and another who’s interested… I need another four people willing to put in about USD$30/month each.

If that doesn’t happen, then I guess we’ll have to look at trying to get a smaller server, but this is pretty much as small as it gets before things stop being useful.