Okay, I’ve learned a lot since yesterday!
First, a bit of context: I’m building an iPhone webapp for a cleaning company; they send inspectors into the field to check the work done by the cleaning ladies (yes, the cleaners are 100% female, while the inspectors are 50% male).
This used to be done on paper: inspectors printed Excel spreadsheets with a line for every criterion; at the end of the inspection, the person whose work was being assessed had to sign the paper, to prove that she was there (that the inspection was done with her knowledge and consent).
Doing this with an iPhone is much more efficient: no more paper, no more copying from the paper into the system, and immediate availability of the data.
But a question remained: how do we get the people to sign on the iPhone?
I first looked for a ready-made library for “freehand drawing” and found none; I eventually found these two very interesting links though:
“Drawing” on a machine involves two things: recording the strokes, and displaying them to the user.
(Those two things are distinct; it’s possible to record strokes without ever displaying them — if one wants to paint blind).
To record strokes, the only thing needed is to know when each stroke starts and where it’s going; for this we add event listeners to the element that will be drawn onto.
At the same time, we display those strokes onto the canvas, using lineTo() and stroke().
That’s basically what the first link above shows how to do, using a mouse.
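For the mouse version, a minimal sketch looks something like this (my own rough sketch, assuming a canvas element with id “canvasDraw”, not the code from that article):

var canvas = document.getElementById("canvasDraw");
var ctx = canvas.getContext("2d");
var drawing = false;
canvas.addEventListener("mousedown", function(evt) {
    // a press starts a new stroke at the current mouse position
    drawing = true;
    ctx.beginPath();
    ctx.moveTo(evt.pageX - canvas.offsetLeft, evt.pageY - canvas.offsetTop);
}, false);
canvas.addEventListener("mousemove", function(evt) {
    if (!drawing) return;
    // extend the stroke and display it immediately
    ctx.lineTo(evt.pageX - canvas.offsetLeft, evt.pageY - canvas.offsetTop);
    ctx.stroke();
}, false);
canvas.addEventListener("mouseup", function() {
    drawing = false;
}, false);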
But of course on the iPhone we’re not using a mouse but fingers; that’s why we need touch events.
Touch events are a little different from mouse events in that there can be multiple “touches” in a single “touch” event. So we need to deal with the touch that actually interests us (the first one, which will be the only one if only one finger was used).
That’s done using the touches array of the touch event.
Of course, we also want to inhibit the normal behavior of a touch event on a webpage so that the canvas the user is trying to draw onto doesn’t move around instead! And that’s done with evt.preventDefault().
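In isolation, the pattern for grabbing that first touch looks something like this (a sketch; firstTouch is a hypothetical helper, the real listing below just reads touches[0] directly):

function firstTouch(evt) {
    evt.preventDefault();                // keep Safari from scrolling/zooming the page
    if (evt.touches.length === 0) return null;
    return evt.touches[0];               // the only touch we care about
}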
The last part is saving the drawing; the approach used in the first example above is to save the whole canvas element as a bitmap; it’s straightforward but won’t scale if we want to display drawings at a higher resolution.
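For reference, that bitmap approach boils down to something like this, using the canvas toDataURL() method (a sketch, not what I ended up doing):

// serialize the whole canvas as a base64-encoded PNG
var bitmap = canvas.toDataURL("image/png");
sessionStorage.setItem("drawing", bitmap);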
I chose instead to record paths and points, which lets me redraw the thing at any scale if need be (and probably results in a smaller data set too, although I’m not sure); see the sketch after the listing below.
(TODO: save to SVG).
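For what it’s worth, the recorded paths would map quite naturally onto SVG polylines; here is a rough, untested sketch of what that export could look like (toSVG is a hypothetical helper, nothing below uses it yet):

function toSVG(points, width, height) {
    var polylines = "";
    for (var p in points) {
        // one polyline per recorded stroke
        var coords = [];
        for (var i = 0; i < points[p].length; i++) {
            coords.push(points[p][i][0] + "," + points[p][i][1]);
        }
        polylines += '<polyline points="' + coords.join(" ") +
                     '" fill="none" stroke="#2EAE51" stroke-width="5"/>';
    }
    return '<svg xmlns="http://www.w3.org/2000/svg" width="' + width +
           '" height="' + height + '">' + polylines + '</svg>';
}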
So here’s the resulting code:
var canvas = document.getElementById("canvasDraw");
var ctx = canvas.getContext("2d");
ctx.lineWidth = 5;
ctx.strokeStyle = "#2EAE51";

// one array of [x, y] pairs per stroke; "paths" is the index of the current stroke
var points = {};
var paths = -1;

// convert the first touch's page coordinates into canvas-local coordinates
function getCoord(evt) {
    var x = evt.touches[0].pageX - canvas.offsetLeft;
    var y = evt.touches[0].pageY - canvas.offsetTop;
    return [x, y];
}

// a finger down starts a new stroke
function touchstart(evt) {
    evt.preventDefault();
    var coord = getCoord(evt);
    paths = paths + 1;
    points[paths] = [];
    points[paths].push(coord);
    ctx.beginPath();
    ctx.moveTo(coord[0], coord[1]);
}

// each move extends the current stroke and displays it right away
function touchmove(evt) {
    evt.preventDefault();
    var coord = getCoord(evt);
    points[paths].push(coord);
    ctx.lineTo(coord[0], coord[1]);
    ctx.stroke();
}

document.getElementById("canvasDraw").addEventListener("touchmove", touchmove, true);
document.getElementById("canvasDraw").addEventListener("touchstart", touchstart, true);

// the save button serializes the recorded strokes into sessionStorage
document.getElementById("saveSign").addEventListener("click", function() {
    var drawn = JSON.stringify(points);
    sessionStorage.setItem("drawing", drawn);
});
(assuming a canvas element with id “canvasDraw” and a button somewhere with id “saveSign”).
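And since the raw points are kept, redrawing the signature later (possibly scaled up) is just a matter of replaying them. A sketch of what that could look like (redraw is a hypothetical helper, not part of the code above):

function redraw(ctx, points, scale) {
    for (var p in points) {
        var path = points[p];
        if (path.length === 0) continue;
        // replay each recorded stroke, scaling every point
        ctx.beginPath();
        ctx.moveTo(path[0][0] * scale, path[0][1] * scale);
        for (var i = 1; i < path.length; i++) {
            ctx.lineTo(path[i][0] * scale, path[i][1] * scale);
        }
        ctx.stroke();
    }
}
// e.g. on another page: redraw(ctx, JSON.parse(sessionStorage.getItem("drawing")), 2);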
Not only does it work fine, it’s also surprisingly pleasant to use and play with.
iPhone freehand drawing is done by recording touch events (the coordinates of the first element of the touches array); user feedback is provided using the HTML5 canvas element.