Like many of my fellow fledgling philosophy students, I was awed by Kant's so-called "Copernican Revolution": in order to reconcile the epistemological conflict between rationalism and empiricism, Kant determined that we experience the world from a point of view dictated by a priori categories of space, time, causality, etc. Thus, our unique ability to know and learn about the world as it is given to perception comes at the expense of the naïve belief that we could somehow discern its essence.
Just as it's only a loose (possibly even backwards) metaphor for the dawn of modern Western philosophy, we're taking some liberties with both the Renaissance astronomer's hypothesis and its Kantian canonization here. Where Computer Numerical Control (CNC) devices have long been restricted by the size of a multi-axis stage, a team of engineers and designers is looking to put digital fabrication tools squarely in the hands of the users. Don't let the academic title fool you: "Position-Correcting Tools for 2D Digital Fabrication" by Alec Rivers (MIT CSAIL), Ilan E. Moyer (MIT MechE) and Frédo Durand (MIT CSAIL) might just represent the next step for digital fabrication. Per the abstract:

Many kinds of digital fabrication are accomplished by precisely moving a tool along a digitally-specified path. This precise motion is typically accomplished fully automatically using a computer controlled multi-axis stage. With that approach, one can only create objects smaller than the positioning stage, and large stages can be quite expensive.
We propose a new approach to precise positioning of a tool that combines manual and automatic positioning: in our approach, the user coarsely positions a frame containing the tool in an approximation of the desired path, while the device tracks the frame's location and adjusts the position of the tool within the frame to correct the user's positioning error in real time. Because the automatic positioning need only cover the range of the human's positioning error, this frame can be small and inexpensive, and because the human has unlimited range, such a frame can be used to precisely position tools over an unlimited range.
In other words, they're looking to combine the best of both worlds: "our goal is to leverage the human's mechanical range, rather than decision making power or guidance, to enable a new form factor and approach to a task that is currently fully automated."
Before we dig into the short but dense paper [PDF] that Rivers, Moyer and Durand published for SIGGRAPH 2012, here's the video:
A bit of nitty-gritty after the jump...

Our central idea is to use a hybrid approach to positioning where a human provides range while a tool with a cheap short-range position adjustment enables precision. Given an input 2D digital plan such as the outline of a shape, the user manually moves a frame containing a tool in a rough approximation of the desired plan. The frame tracks its location and can adjust the position of the tool within the frame over a small range to correct the human's coarse positioning, keeping the tool exactly on the plan (Figure 1). A variety of tools can be positioned in this manner, including but not limited to a router (which spins a sharp bit to cut through wood, plastic, or sheet metal in an omnidirectional manner) to cut shapes, a vinyl cutter to make signs, and a pen to plot designs.
The challenge is twofold: the device must determine where the tool currently is ('localization') and then correct its position ('actuation').
A map of the material is first built by passing the device back and forth over the material to be cut; then, images from the camera are compared to this map to determine the device's location. This approach was chosen for a variety of reasons: it can achieve very high accuracy; it always remains calibrated to the material, as the markers are on the material itself (as opposed to external beacons, which can become uncalibrated); it does not require excessive setup; the hardware required is relatively inexpensive; and it can be implemented using standard computer vision techniques. Building the map is fast and easy.
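The paper leans on standard computer vision for this step. As a rough illustration of the kind of geometry involved (decidedly not the authors' code), here's a minimal Python sketch that assumes the camera has already matched a few marker centers against the stitched map; the function name and the example coordinates are made up for illustration.

```python
import numpy as np

def estimate_pose_2d(map_pts, cam_pts):
    """Estimate the rigid transform (rotation + translation) that maps
    camera-frame marker coordinates onto their known map coordinates.

    map_pts, cam_pts: (N, 2) arrays of matched marker centers.
    Returns (R, t) such that map_pts ~= cam_pts @ R.T + t.
    """
    # Center both point sets on their centroids.
    mu_map = map_pts.mean(axis=0)
    mu_cam = cam_pts.mean(axis=0)
    A = cam_pts - mu_cam
    B = map_pts - mu_map

    # 2D Kabsch/Procrustes fit: SVD of the cross-covariance matrix.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_map - R @ mu_cam
    return R, t

# Example: three markers, with the frame rotated ~10 degrees and shifted on the map.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
map_pts = np.array([[0.0, 0.0], [40.0, 5.0], [20.0, 30.0]])   # marker centers on the map
cam_pts = (map_pts - np.array([12.0, 7.0])) @ R_true          # the same markers as the camera sees them
R, t = estimate_pose_2d(map_pts, cam_pts)
print(np.round(t, 2))   # where the frame origin sits on the map: [12.  7.]
```

With three or more matched markers, a least-squares rigid fit like this pins down the frame's rotation and translation on the map, which is all the correction stage downstream needs.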
The map is built from the QR-code-like tape that serves as markers for the 'third eye' camera, and once the surface has been surveyed in full, the user need only "register a plan onto the scanned map of the material" and cut away. Here, the paper proceeds to describe the hardware itself (i.e. 'actuation') in some detail; in short, as long as the human operator keeps the tool within roughly a 0.5” margin of the desired path, the device makes the finer adjustments needed to stay exactly on it.
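To make the actuation step concrete, here's a hedged sketch (again ours, not the authors') of the geometric core: given the frame's tracked position on the map, find the closest point on the planned path and return the offset the tool should move within the frame, bailing out when the user drifts beyond the roughly 0.5" correctable range. The real device also has to handle the frame's orientation and decide which part of the plan to follow next; this sketch ignores both.

```python
import numpy as np

CORRECTION_RANGE_IN = 0.5   # assumed correction radius, per the ~0.5" margin cited above

def nearest_point_on_polyline(plan, p):
    """Return the closest point to p on a polyline given as an (N, 2) array."""
    best, best_d2 = None, np.inf
    for a, b in zip(plan[:-1], plan[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d2 = np.dot(p - q, p - q)
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

def correction(frame_pos, plan):
    """Offset (in map coordinates) that moves the tool from the frame center
    onto the plan, or None if the user has drifted outside the correctable range."""
    target = nearest_point_on_polyline(plan, frame_pos)
    offset = target - frame_pos
    if np.linalg.norm(offset) > CORRECTION_RANGE_IN:
        return None   # out of range: the user has to steer back toward the path
    return offset     # would still need rotating into the frame's own coordinates

# Example: a 4" square plan, with the frame wandering 0.2" off the bottom edge.
plan = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [0, 0]], dtype=float)
print(correction(np.array([2.0, 0.2]), plan))   # -> [ 0.  -0.2]
```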
Besides being accurate to a remarkably fine resolution, the device can also handle '2.5D' applications:

In principle, the tool can follow any 2D path. In the application of routing, this means that it can cut out any 2D shape in a single pass, or more complex 2.5D (heightmap) shapes using multiple passes at different depths. Multiple passes can be taken with or without adjusting the plan between passes. For example, we have used multiple passes with the same plan to cut through material thicker than can be cut in a single pass; we have also used different plans to engrave text to a shallow depth on a piece that is then cut out.
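The multi-pass idea boils down to simple depth scheduling: reuse the same (or an adjusted) 2D plan, stepping the bit down until the full depth is reached. A toy sketch, with made-up material thickness and per-pass depth:

```python
def depth_passes(total_depth, max_per_pass):
    """Split a cut of total_depth into successive target depths,
    each removing at most max_per_pass of material."""
    depths, d = [], 0.0
    while d < total_depth:
        d = min(d + max_per_pass, total_depth)
        depths.append(round(d, 4))
    return depths

# Example: cutting through 0.75" plywood at 0.25" per pass.
print(depth_passes(0.75, 0.25))   # -> [0.25, 0.5, 0.75]
```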
Learn more at the project page.