It’s been a long time since I posted here, but wanted to come back on to promote my upcoming Grasshopper workshop at the Facades+ Conference in Chicago on July 25!
It will be a really exciting opportunity to push the boundaries of Grasshopper as a collaborative tool – we’ll be exploring strategies to build multi-user Grasshopper models for design and analysis. Register now while there are still spots to be had!
Filed under: Uncategorized
Since I haven’t been posting much lately I thought it would be fun to give a little recap of this last year of my life in terms of art, architecture, design, and technology.
January to May – Thesis
For the spring semester, I was hard at work on my thesis project at Cornell. Entitled “Case Study in Translation,” the project attempted a parametric reinterpretation of the Case Study House program of the American mid-century. Beginning with parametric analysis of the precedent houses, the project sought to understand the formal and functional logics at work in the houses, and to produce algorithmic translations of those logics in order to bring them into a contemporary context, adapting them for the cultural conditions of the present day. You can read a bit more about my thesis here, and you can flip through my thesis book on Issuu.
May and June – Graduation and a New Job
I graduated with a Bachelor of Architecture from Cornell in May, and the following month moved to Seattle, WA to start work at NBBJ as a Design Computation specialist. I’ve been enjoying the job; it’s a great place to work, with lots of exciting projects and knowledgeable and talented people. I’ve been learning a ton and loving Seattle. NBBJ has been doing innovative work with computation for a long time and I’m thrilled to be a part of the team.
August – HDT Utilities
In August, I released HDT Utilities, a suite of Grasshopper components for expanding Grasshopper’s ability to reference and operate on objects in Rhino, as well as some components for manipulating Data Trees. “HDT” was meant to stand for Heumann Design/Tech, but I’m not sure anybody got that. A colleague at work has suggested I change the name of the tools to “Human,” keeping in line with the animal naming convention in the Rhino ecosystem (and punning on my name, of course). The latest version of the tools has been downloaded more than 600 times, and I’m planning an update in the near future.
October – ACADIA SF
In October, I attended the ACADIA 2012 conference in San Francisco, and co-taught a workshop on Parametric Case Studies with Andrew Kudless of CCA and Matsys. For the conference I also wrote an expanded version of the post that first appeared on this blog, Michael Graves, Digital Visionary: What Digital Design Practice Can Learn From Drawing. Some discussions I had with Kudless at the conference grew into a series of Grasshopper studies around self-diagramming algorithmic processes, which in turn grew into…
November – Tweet2Form
In November I launched Tweet2Form, a Grasshopper-powered Twitter bot that produces a diagrammed formal process based on a series of commands you send it. More info on that project in this post. One of these days I’d like to produce a post explaining the technical mechanisms behind the bot in greater detail. If this is of interest to you, let me know; it will help me get motivated to put something together.
December and January – Wallpaper* and CLOG: BRUTALISM
There are a few exciting things happening right now. I was selected for Wallpaper* Magazine’s 2012 Graduate Directory, so grab a copy of the January issue and flip to page 128 to see the blurb, or just visit the online version here (I’m the first one under the architecture section).
Finally, I’m excited to announce that I have a piece on generative brutalism in the upcoming CLOG: BRUTALISM. Keep your eyes peeled!
I’ll conclude with a few of my favorite pieces from this last year on my tumblr:
Filed under: Uncategorized
I just finished putting together a beta version of a little project. It’s a tweetbot powered by a Grasshopper definition. The bot transforms a cube according to a series of operations you specify, and then tweets a picture of the resulting form. There are currently 11 formal operations that the bot understands:
shearA (angular shear)
shearD (shear with displacement)
quadscale (scale about 3D quadrants)
The bot can sequence up to 10 of these operations based on your tweet. Here are some examples:
@tweet2form split bridge
@tweet2form bend shearD
@tweet2form shearD stretch stretch stretch
@tweet2form fold shearA stretch scale twist split bridge
The parameters of each operation are randomized based on each unique tweet ID, so even sending the same series of operations multiple times will result in different forms.
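For the curious, the seeding trick is easy to sketch. This is not the bot’s actual code (that lives inside a Grasshopper definition); it’s a minimal Python illustration of the idea, with all names mine:

```python
import random

def operation_params(tweet_id, op_index, n_params=3):
    """Derive deterministic pseudo-random parameters for one operation.

    Seeding from the tweet ID (plus the operation's position in the
    sequence) means a given tweet always reproduces the same form,
    while two tweets with identical text still come out different.
    """
    rng = random.Random("%d:%d" % (tweet_id, op_index))
    return [rng.uniform(0.0, 1.0) for _ in range(n_params)]

# Same tweet ID yields identical parameters; a different ID diverges.
a = operation_params(1234567890, 0)
b = operation_params(1234567890, 0)
c = operation_params(9876543210, 0)
```

Replaying a tweet through the same seeded generator is also what makes the bot’s output reproducible after a crash or restart.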
Right now I have the bot running on my personal laptop, so it may not always be “listening” for updates. It may take up to 30 seconds for the bot to respond to your tweet. I have no idea if I will find a permanent home for it but check it out in the meantime!
Send a tweet to @Tweet2Form and see what comes out!
Filed under: Uncategorized
About a week ago, Michael Graves’ article entitled “Architecture and the Lost Art of Drawing,” on the continued importance of drawing to architectural practice, was published in The New York Times. In the piece Graves laments the perceived death of drawing at the hands of “the computer.” He is willing to concede the utility of digital tools for the production of what he calls “definitive drawings”—final architectural documents for presentation or construction—but maintains that manual drawings remain the appropriate medium for the purposes of recording or generating ideas.
His position is a conservative one, but is common in the profession, especially among the generations of architects trained before computers largely replaced hand drawing in professional practice. On the other end of the spectrum is the position to which Graves is reacting: that manual drawing has been entirely replaced by digital tools as the medium for architectural conception, espoused by the likes of Patrik Schumacher and others in the digital avant-garde.
In practice, however, it is my experience that most young architects today take for granted that the process of design requires both sets of tools—that by far the most productive means of working involves cycling between digital and analog techniques of representation and conception. Indeed, this seemed to be the broad consensus of the Yale University symposium “Is Drawing Dead?” this past February, featuring both Graves and Schumacher as participants. (Graves’ article appears to be a refinement of the talk he gave at the Yale symposium, with no noticeable adjustment to his position.) The firm of Lewis Tsurumaki Lewis gives a compelling example of this kind of hybrid process in their 2008 book “Opportunistic Architecture.”
A hybrid approach, utilizing both analog drawing and digital modeling and/or scripting, is to my mind the best of all possible worlds given the current state of technology. Schumacher and others who proclaim the death of drawing fail to give a convincing account of how digital tools can encompass the creative capacities Graves attributes to the hand drawing. On the other hand, the fatal flaw of Graves’ position is his insistence that the limitations he identifies are inherent limitations of the digital: that computers will never be appropriate tools for the kinds of “sketchy” processes he claims are best suited to hand drawing.
What if between these two extremes—drawing is dead, and long live drawing—we located a roadmap for future software development? What would it look like if a digital toolset fully encompassed all the capacities and possibilities of manual drawing that Graves identifies? Would this necessitate hardware or software changes, or merely changes in the way users operate the tools? Are such digital tools and processes already extant, or at least possible?
Although there are more, I read in Graves’ article four primary aspects of manual drawing that he (rightly) sees as lacking in contemporary digital process:
- “The interaction of our minds, eyes, and hands” – the embodied physicality of drawing
- The “Referential Sketch” – drawing to remember or to know what one observes
- The “Preparatory Study” – sketching and tracing to revise and refine designs in a non-linear process
- Drawing to “Stimulate the imagination” – the drawing as conceptual apparatus
For each of these, we can refute Graves in two ways: first, by identifying ways that existing software might accommodate the same aims; and second, by speculating about future software-hardware-practice combinations that might do so better.
“The interaction of our minds, eyes, and hands”
Graves points out the obvious: that a mouse and keyboard lack the deep connection to our embodied intelligence that drawing with a pencil has. Read a bit generously, this critique brings to mind the observations of N. Katherine Hayles in “How We Became Posthuman” that discourses foregrounding information as ontologically primary tend to neglect the complexities of embodied being. Designers experienced in drawing need not think about every line or gesture made; there is a fluid, natural translation from observation to intention to notation.
Graves suggests that this kind of fluidity cannot happen on a digital platform. To him, certainly, manipulating a mouse or typing a command must feel like an unnatural barrier between conception and execution. However, this is far from the case for those who grew up attached to digital interfaces. That a keyboard is no barrier between intention and action should be obvious to anyone who has ever absentmindedly summoned Facebook or Twitter in a browser window without consciously deciding to do so. Having been a user of Adobe Photoshop since the age of 11, I have the physical gestures of keyboard commands deeply embedded in my reflexes. Indeed, when asked what the keyboard shortcut for a particular command is, I need to position my hands on a keyboard in order to summon the combination of keys. When I call up a command, my hands make the motion without thinking “command + shift + alt + e,” any more than Michael Graves thinks “place pencil on paper, move pencil, lift pencil” in the process of fluidly sketching. At the Yale conference, Greg Lynn described a similar embodied facility with the 3D animation software he uses, and noted that the way he operates the software even impacts the way he sketches, his hand drawings set up with digital operations in mind.
I believe that future interface paradigms are likely to increase the ease with which these kinds of reflexes develop, encouraging embodied, physical links between intention and action. The rapidly improving technologies for touch- and gesture-based interfaces promise to further improve the intuitive, natural character of digital drawing. Developers of design software should keep this in mind; command lines, keyboard shortcuts, mouse gestures, menus, and button panels have differing degrees of “readiness-to-hand.”
The “Referential Sketch”
Graves spent the majority of his Yale lecture showing his referential sketches, made on site during his travels in Rome and elsewhere. It is a commonplace in the discipline of architecture that one of the primary means for students to learn about building is to draw precedents from images or from life. According to Graves, these drawings are “fragmentary and selective,” and capture ideas as much as impressions. “That visceral connection,” he insists, “that thought process, cannot be replicated by a computer.”
On the contrary, the act of transcribing a precedent in a digital medium can be a tremendously stimulating and idea-rich process, going well beyond the mere transcription of dimensions and geometry. Building a 3D model of an existing building—whether one decides to do it accurately or “sketchily”—requires the analysis of proportion and construction, and a considered selection of the important qualities of a building. Moreover, attempting to “sketch” a precedent, not as a static model but as a dynamic, procedural parametric system, requires a deep, highly subjective interpretation and analysis of the building: its hierarchies, relationships, forms, and the processes by which it may have been conceived. It is inherently more time-consuming, but I would argue that it constitutes a richer transcription of ideas than a notebook sketch.
The difficulty, of course, is portability. While some might be bold (or geeky) enough to try it, I find it hard to imagine anyone setting up camp outside the Pantheon with a laptop and mouse. When some descendant of the iPad weighs no more than a Moleskine sketchpad, though, the idea is not so ridiculous. It is possible to conceive of an interface that would produce an augmented reality overlay, allowing a student to “sketch” in 3D space in real time, perhaps even aided by the real geometry before her.
The “Preparatory Study”
Graves identifies (or at least hints at) the way successive layers of trace paper, the typical tool for an architect to work through design problems, force the constant reevaluation of design decisions, with each successive layer of trace pulling the lines below into a more resolved form. “Like the referential sketch,” he says, “it may not reflect a linear process. (I find computer-aided design much more linear.)” Some who work digitally may protest at this characterization of computer-aided design as linear. Surely, every decision can be undone and remade, any vertex in a Rhino model can be repositioned, any surface or curve tweaked in an infinite number of ways. The trouble is that remaking decisions in any digital modeling platform requires a certain amount of extra effort. On a new sheet of trace, modifying the underlying line takes exactly as much time and energy as tracing it exactly as it was. This means that all changes and new intentions constantly have the opportunity to subtly alter the entire drawing.
Working digitally, one is discouraged from rebuilding the entire model from scratch unless absolutely necessary. This is especially true of parametric models, which have a tendency to accumulate features and processes as a model develops, but by virtue of their linked dependencies make it daunting to rethink entire processes. Parametric models build in flexibility in their selection of variables, but this variability is of a limited sort. The specific values (counts, dimensions, angles, fixed points, and vectors) can all be modified, but altering the way these variables are interpreted and translated into form requires a much greater investment of energy. This is precisely the digital linearity that Graves identifies.
A designer aware of these limitations may simply work through the pain, forcing a complete re-work of a form or parametric system from scratch several times in order to reap the benefits that this kind of process affords to design work. In the context of BIM, where fluidity from conception all the way to fabrication is held up as the Holy Grail, promising efficiency, speed, and lower costs, designers should remember the value of breaks in the process. Decoupling a model from its variables, breaking it out of its fixed dimensions and dependencies, may be antithetical to the efficiencies of BIM but may in fact be essential to the creative process of design.
While this in particular seems like the domain of practice rather than software, I can imagine a small 3D sketch program that at regular intervals locks the model away from the designer’s access, allowing it to be seen but not modified, copied, or snapped to. While this is likely impractical in real-world problem-solving situations, it would be interesting to test as a way to enforce the kind of deep non-linearity inherent to a stack of trace.
Drawing to “Stimulate the Imagination”
While Graves is a bit vague about precisely how manual drawing does this, I think he is extremely astute to identify it as a limitation in digital tools. One of the crucial aspects of manual drawing is its capacity for expression and continuous interpretation, reinterpretation, and misinterpretation. A drawing is always an abstraction; the space between that abstraction and a realized artifact or architecture is precisely the site where design thinking happens. In the article, Graves describes a small game during a boring Princeton faculty meeting, passing a drawing back and forth with a colleague. However, he omits an aspect of the anecdote that he provided in his Yale lecture—that there was a third participant (Tony Vidler) who joined in on the drawing game and drew a stair at entirely the “wrong” scale. I can’t count the number of times over my academic career when the most productive ideas around a project stemmed from a critic misreading a drawing as suggesting something entirely other than the student’s intention. Drawings, as approximate projections and views, imperfect notations of geometry, always contain the capacity to autonomously generate misreadings, “stimulating the imagination” in a way that transcends the mere transcription of ideas. The key quality that enables this is their inherent vagueness. Vagueness is not something computers do particularly well.
In my own practice the only remedy for this has been through “post-processing,” translating 3D models into 2D images to be further manipulated in Adobe Illustrator or Photoshop, for example. This frees a model from its dimensional notations in 3D space, allowing fuzzy, imperfect, imprecise, approximate manipulations to be made, which may not even logically resolve back into a 3D space.
Perhaps there might be a modeling interface that simply hides dimensional coordinate information from the user. Systems like Z-Brush, for example, seem to deemphasize the dimensional nature and precision of 3D in favor of a “2.5D” approach that is a bit fuzzier, operating on the threshold between vector and raster manipulation techniques. Z-Brush projects tend to take on qualities of literal fuzziness, furriness, or lumpiness, but this need not necessarily be the case. Future design applications aiming at facilitating sketchiness would do well to build on this continual remapping from 3D to 2D to 3D, from vectors to pixels or voxels and back.
Much if not all of the intelligence of manual drawing methods could conceivably be meaningfully imported into digital practices. This however would not signify, as Graves worries, the “Death of Drawing”: it would simply mean that digital processes could rightly claim to be forms of drawing in the full sense that Graves intends. Designers taking advantage of digital tools are wise in the meantime to utilize hybrid processes in order to escape the limitations and tendencies of specific systems. Moreover, digital processes themselves might evolve to take on the fluidity, flexibility, and vagueness that manual drawing naturally offers. I don’t mean to suggest that we should actively try to eliminate drawing on paper in favor of digital substitutes; my goal is merely to offer possibilities for extending the capacities of digital design platforms as creative media. Increasingly, distinctions between the digital and the analog, the online and the offline, the virtual and reality are ceasing to be meaningful. Inevitably technology will continue to expand into more and more facets of the process of design. Software developers designing tools for architects—and architects developing software tools for themselves—should not let technology’s tendencies overwhelm the conditions under which creative thinking works best.
Filed under: Architecture, Digital Art, Grasshopper, Photoshop, Theory
Lately I’ve been playing around a bunch with automatically placing Grasshopper components onto the canvas. I did some experiments in this vein a few months ago, but I was inspired to pick it back up by this post by Thibault Schwartz. I published a few small (and fairly useless) experiments, placing sliders or panels automatically with specified values, positions, colors, sizes, connections, etc.
On one of these images, Michael Pryor made an interesting suggestion: that this technique could be used to automatically instantiate something like a diagram of the data tree, allowing access to the data at any part of the tree. This idea stuck in my brain and I wasn’t satisfied until I had produced an implementation. Check it out in the video below:
Ultimately, I think this is probably more of a curiosity than anything else, but perhaps someone will find it useful. I’m generally excited by the possibility of Grasshopper definitions that write Grasshopper definitions. If I had unlimited time, programming ability, and patience, my next trick would be to link this technique into a Markov chain analysis of a corpus of existing definitions, and let it loose writing clever definitions all its own :)
Let me know if you have other ideas for real-world cases in which an automatically generated group of GH components could be useful!
Filed under: Uncategorized
In fairly typical fashion, I spent some valuable hours leading up to my thesis review (it’s today, wish me luck!) working on something entirely unrelated.
Inspired by a question on the Grasshopper message board, I put together a definition that allows you to record the motion of a set of sliders and then replay that motion with a single slider in order to save an animation.
The definition is available here, and below is a video walking through the definition. The first 3 minutes are all you need to watch if you just want to use it, and the rest describes how it’s set up.
Filed under: Grasshopper
Warning: Serious GH geekery ahead.
Since David Rutten added to GH the ability to set a default text panel background color, I’ve had my panels set to be white instead of the default bright yellow. But in all my old definitions, the panels remain yellow, and it drives me nuts. I was going through and changing them manually when it occurred to me that there was a much better way.
In a given definition, you can select all the components, copy them, and paste them into a text document. This gives you the XML representation of your definition, which contains all components, properties, and connections. I then hunted for the code that represents the background color of a text panel:
<item name="CustomBackColour" type_name="gh_drawing_color" type_code="36">
I then did a find and replace to switch out 255;255;250;90 with 255;255;255;255, a nice clean white. Copying all the text and pasting back into the GH definition converts everything back into components.
Moreover, if your definitions are saved in GHX format (which used to be the default), you can actually open them directly in a text editor and do the same thing (although note that older GHX files may contain slightly different code than what you get from copying and pasting components out of an open definition with the latest version of GH). The power of this is that, if you have a folder of GHX documents, you can do a batch find-and-replace and change some property across all of them. Voila, no more yellow text panels in my old definitions.
This technique could easily extend to other properties and other components; with some clever find-and-replace, you could add margins to all your text panels, change the color of groups or scribbles, and if you’re able to find and replace with wildcards or regular expressions, even more complicated things are possible, such as setting the value of all sliders in a document to 0.0.
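To make the batch version concrete, here’s a minimal Python sketch of the find-and-replace over a folder of GHX files, using the colour strings from above. The function name is mine, it assumes UTF-8-encoded GHX files, and you should absolutely back up your definitions before running anything like it:

```python
from pathlib import Path

OLD = "255;255;250;90"    # the default yellow panel background
NEW = "255;255;255;255"   # opaque white

def recolor_panels(folder):
    """Replace the old panel colour string in every .ghx under folder.

    Returns the number of files modified. Note that a plain string
    replace is blunt: it will also hit any other component that
    happens to use exactly this colour.
    """
    changed = 0
    for ghx in Path(folder).rglob("*.ghx"):
        text = ghx.read_text(encoding="utf-8")
        if OLD in text:
            ghx.write_text(text.replace(OLD, NEW), encoding="utf-8")
            changed += 1
    return changed
```

Swapping the literal strings for a regular expression would let you target the `CustomBackColour` item specifically, at the cost of a little more care in writing the pattern.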
A word of warning: it is very easy to screw up your definition this way! I take no responsibility if your find-and-replacing results in the breaking of your definition. If the XML is not very precisely formed, pasting it into Grasshopper will not work at all.
Filed under: Grasshopper