Though it may in the end feature in some of my thesis work, I consider this largely a separate investigation, and potentially one of more relevance to readers of this blog. I have continued a line of investigation into imitating manual drawing techniques with automated processes. Taking advantage of the new “Pen” display mode in Rhino 5, as well as techniques I’ve developed with Grasshopper for automated hatching, I have been exploring various ways of faking pencil- and pen-based architectural graphics. Below is a scattering of examples:

A self-portrait done as an early attempt at automated, shading-based cross hatching, produced directly from a photograph.

Another, slightly more sophisticated attempt at the same technique.
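
The basic logic behind this kind of shading-based hatching is simple enough to sketch. Here is a minimal Python version, assuming the photograph has been sampled into a grid of brightness values between 0 and 1; my actual Grasshopper definitions are considerably more involved:

```python
def hatch_angles(brightness, levels=4):
    """Darker samples get more overlaid layers of hatching, each layer
    rotated 45 degrees from the last, so shadows build up as cross
    hatching. Returns the hatch angles (in degrees) for one cell."""
    layers = int(round((1.0 - brightness) * levels))
    return [i * 45.0 for i in range(layers)]

hatch_angles(0.9)   # nearly white: no hatching -> []
hatch_angles(0.2)   # dark: three crossing layers -> [0.0, 45.0, 90.0]
```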

This one combines Rhino 5’s anti-aliased Technical “Pen” display view with some automated shading-based hatches, as well as some fairly subtle Grasshopper-produced “wiggle” in the lines to prevent them from looking too automatic.
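
The “wiggle” is the easiest of these effects to describe. Here is a minimal 2D Python sketch of the idea, with names and parameters of my own choosing; the real definition perturbs curves in the drawing itself:

```python
import random

def wiggle(p0, p1, samples=12, amplitude=0.5):
    """Resample the segment p0-p1 and nudge each interior point
    perpendicular to the line by a small random amount, so the
    result reads as hand-drawn rather than ruled."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    nx, ny = -dy / length, dx / length          # unit normal to the line
    points = []
    for i in range(samples + 1):
        t = i / float(samples)
        # Keep the endpoints fixed so the drawing stays legible.
        off = random.uniform(-amplitude, amplitude) if 0 < i < samples else 0.0
        points.append((x0 + t * dx + off * nx, y0 + t * dy + off * ny))
    return points
```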

A not altogether satisfying first attempt at a "stippled" effect.


I haven’t yet decided how I want to manage the social-media-blogging component of my thesis process, which will begin in January when I return for my 10th and final semester. In a way I am inclined to be somewhat schizophrenic about it, with different aspects appearing in different places. If that’s the logic I follow, it will probably feature major milestones/completions here on this blog; images/fragments/visual ideas on Thetic, my tumblr; and in-depth theoretical discussions in a private circle on Google+ (if you are interested in contributing there, let me know and I’ll add you to the circle).

However, considering the end of my pre-thesis semester a “substantial completion” of some kind, I’ll post a bit here about the project I am proposing to pursue.

My thesis is an attempt to tie together a number of threads with which I have been preoccupied over the last several years. Some of those threads have already emerged on this blog: the relationship between the digital process and the creative process, Grasshopper as a tool for drawing rather than modeling, composition and generative form-making (in an anti-blob vein), and morphogenesis more generally.

Here’s the 2-minute pitch:

My thesis stems from a disconnect I perceive between the current state of digitally-enabled architecture and other cultural productions (music, literature, art, television, film, etc). Whereas digital architecture seems to look unflinchingly toward the “new,” with form dictated by the latest available algorithms, the other manifestations of digital culture have a curious, nostalgic relationship with the past and the future, characterized by various observers as atemporality, hauntology, retro-mania, and off-modern.

Moreover, much of contemporary digital architecture seems to have sacrificed certain embedded intelligences that were present in pre-digital methods of working, in the service of forms characterized by complexity, which at best deal with genuine societal complexity only metaphorically.

With my thesis project, I propose to develop a new vocabulary of algorithmic techniques and procedures, using pre-digital architecture as source material. These techniques, first developed in a process of analysis and imitation, will then be put to use in the design of an Academy in Los Angeles. The architectural source material will be selected from the Case Study Houses of 1945–66.

It is my hope that in the process of generating new work by combining algorithmic procedures derived from historical precedents, I will produce an architecture which has nostalgic resonance without resorting to post-modern pastiche, and which proves that parametric techniques need not follow a “Parametricist” stylistic regime.

As a first experiment in this process, I attempted a parametric reverse-engineering of Case Study House #8, better known as the Eames House. See some results from that process here: https://plus.google.com/photos/112558481121248068712/albums/5686851698427814193?authkey=COGfhNHR1dTe-QE

Some of these results quite clearly resemble the original Eames House:

Whereas others are clear deviations, suggestive of rather different formal+spatial qualities, but nevertheless retaining a certain capacity to evoke the source material.

My intention is to produce a number of such generative analyses, which will in turn be decomposed and recombined in service of the particular design project discussed above. It is in this process, the recombination and application of these generative procedures, that I insist non-deterministic, non-procedural, non-parametric processes must be introduced in order to engage with human creativity, by offering an opening for reinterpretation, lateral thinking, and intuitive leaps. I firmly believe that only by making an allowance for this can a digitally-driven design process ever approach the effectiveness of pre-digital design methods in responding to the contingencies of program, site, and culture.

Ultimately, I am setting out to address a set of questions about architecture, computation, the design process, and culture:

1. What would a digitally-driven architecture look like if it eschewed computationally simple, pre-packaged algorithms in favor of procedures derived from architecture itself?

2. Can architectural design achieve greater accessibility and cultural relevance by participating in the kinds of historical mining/imitating that pervade other modes of cultural production?

3. What should be the relationship between automated, ultimately deterministic generative processes, and human creativity in design?

As my thesis proceeds, in response to advisors, critics, and the necessary ups and downs inherent in the design process, I will continue to post updates here and in the other forums mentioned at the beginning of this post. I welcome your feedback, criticism, and participation in this investigation!


I haven’t updated in a long time, as I’ve been busy with school and thesis prep. Here are a few brief updates on the latest things I’ve been working on:

1. Studio

My studio this last semester was a traveling studio taught by Professor Alex Mergold of Austin Mergold, together with Alexander Brodsky, the Russian artist and architect. Over the course of the semester we designed three pavilions, the last of which was a collaboration across the whole studio, resulting in a built installation.

Since this is a blog about design + technology, I’ll focus on the technological processes that went into my first pavilion, as well as my contribution to the final group pavilion.

The first pavilion was fairly abstract: a “catalog of Russian tower typologies,” although at first glance it might not appear to have any representational content whatsoever.

This seeming jumble of suspended elements actually functioned as an anamorphic device: from a set of precise viewpoints around the mass, the elements would coalesce into the shape of a tower from Russian history.

(You can kind of make out the outline of an “onion dome”.) This was worked out using Grasshopper to construct the seemingly random elements from a set of pre-drawn profiles.
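
The underlying construction is easy to sketch, even if the built version involved pre-drawn profiles and physical suspension points. Here is a minimal Python version of the anamorphic idea, with names that are my own rather than the studio definition’s:

```python
import random

def scatter_profile(eye, profile_points, scale_min=0.5, scale_max=2.0):
    """Push each point of a tower profile a random distance along the
    sight ray from the viewpoint through it. The scattered elements
    re-align into the profile only when seen from `eye`."""
    scattered = []
    for p in profile_points:                      # points are (x, y, z)
        ray = tuple(pc - ec for pc, ec in zip(p, eye))
        s = random.uniform(scale_min, scale_max)  # random depth factor
        scattered.append(tuple(ec + s * rc for ec, rc in zip(eye, ray)))
    return scattered
```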

The final project of the semester converted a shipping container left over from the construction of OMA’s new Milstein Hall into a pavilion to house spatial memories from our trip to Moscow. It was an exciting and very collaborative design process, and while its end appearance doesn’t have the typical formal expression of a “parametric” project, it utilized a number of Grasshopper-powered parametric elements during the design and fabrication process.

Photo Credit: Elease Samms

The supports for the vertical slats were calculated in Grasshopper and CNC-fabricated in order to precisely align and angle the slats. The slats rotated gradually along the length of the wall, so that from certain vantage points the “wrapper” read as a solid wall, while from others it was very transparent. This made the fairly rigid overall volume function much more dynamically, changing as an observer moved around and through it.
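
The angle calculation itself is trivial, and that simplicity is what made driving it from Grasshopper worthwhile. A sketch of the basic gradient (the angles on the built pavilion were tuned beyond this):

```python
def slat_angles(count, start_deg, end_deg):
    """Linearly interpolate each slat's rotation along the wrapper;
    assumes at least two slats."""
    step = (end_deg - start_deg) / (count - 1.0)
    return [start_deg + i * step for i in range(count)]

slat_angles(5, 0.0, 90.0)   # [0.0, 22.5, 45.0, 67.5, 90.0]
```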

Inside the container was a suspended “tube” containing 24 false-perspective dioramas, each of which depicted a spatial memory of some kind from our Moscow trip. Each diorama was slightly different in its dimensions and angles, in order to preserve the artificial sensation of looking into a much larger space when viewed through its peephole, which was fitted with a wide-angle door viewer. Though cut by hand, the cardboard diorama boxes were all measured out by projecting directly onto the sheets with a Grasshopper definition that had calculated the precise geometry of each diorama within the overall scheme.
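
The false perspective comes down to similar triangles: an object meant to read at some apparent depth is built nearer the peephole and scaled down so it subtends the same visual angle. A hedged sketch of that calculation (each built diorama was tuned well beyond this):

```python
def forced_perspective_scale(apparent_depth, built_depth):
    """Scale factor for an object built at built_depth from the
    peephole but meant to read as sitting at apparent_depth."""
    return built_depth / float(apparent_depth)

# A chair meant to read 3.0 m away, built 0.5 m into the box,
# is fabricated at one sixth of its apparent size.
forced_perspective_scale(3.0, 0.5)   # 0.1666...
```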

A diorama on its way to being installed

The inside of my diorama, with 3d-printed, perspective-distorted chairs and other fixtures

The view through the peephole


Since January, I have had the pleasure and privilege to work for the artist Sarah Oppenheimer, aiding her in the process of realizing a number of installation pieces. Sarah hired me to apply Grasshopper to a number of problems in the process of preparing her pieces for fabrication, from initial conception and design all the way to the production of detailed fabrication documents. She and her Studio Director Uri Wegman were already using Rhino to develop her designs, so it was a natural jump to Grasshopper to add to their capacities and automate a range of processes.

The first project for Sarah that I worked on in January is currently being fabricated at Kunstbetrieb Basel, where we are lucky to work with a team of extremely talented engineers. Now that the project is nearing completion, I wanted to share a little bit about the process and the ways I’ve been applying Grasshopper on this project.

Prior to my involvement, Sarah had realized a version of the piece at the VonBartha Garage in Switzerland. This version was skinned with wood veneer panels.


The goal was to fabricate a new version of the piece, but this time out of aluminum sheet. Sarah and Uri had developed a clever system of perforating sheets to facilitate folding them into complex forms.


However, with this method each crease could only be folded so far. This required remodeling the piece geometry in order to ensure that every crease stayed under the maximum fold angle. For this purpose I made a GH definition that color-coded every edge based on the number of additional facets necessary to “smooth out” the crease in question. This visual feedback made the process of reshaping the piece much faster, allowing us to create a number of variations and quickly test their feasibility.
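
The feedback logic reduces to a simple count. A minimal sketch, under the assumption that an over-limit fold is split into equal sub-folds (the palette here is illustrative, not the definition’s actual colors):

```python
import math

def extra_facets(fold_deg, max_fold_deg):
    """Number of additional facets needed so every sub-fold of a
    crease folding by fold_deg stays under the material's maximum."""
    return max(0, int(math.ceil(fold_deg / max_fold_deg)) - 1)

COLORS = ["green", "yellow", "orange", "red"]   # ok -> increasingly bad

def edge_color(fold_deg, max_fold_deg):
    return COLORS[min(extra_facets(fold_deg, max_fold_deg), len(COLORS) - 1)]

edge_color(25.0, 30.0)    # "green": foldable as-is
edge_color(100.0, 30.0)   # "red": needs 3 extra facets
```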

The next constraint to tackle was the available aluminum sheet size we had to work with. Due to the size of the piece, the skin had to be broken up into a number of smaller pieces that would then be wrapped around a frame. To facilitate this process, I developed a definition that “sliced” the piece along user-specified planes, unfolded each resulting segment, and tested for fit within our limited sheet size.
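
The fit test at the end of that chain is the simplest part. A minimal sketch of the idea (the real definition unfolds the sliced segment first; here I only check a flattened outline’s bounding box against the stock, allowing a 90-degree rotation):

```python
def fits_sheet(points_2d, sheet_w, sheet_h):
    """True if the unfolded outline's bounding box fits the stock
    sheet in either orientation. Points are (x, y) tuples."""
    xs = [p[0] for p in points_2d]
    ys = [p[1] for p in points_2d]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (w <= sheet_w and h <= sheet_h) or (h <= sheet_w and w <= sheet_h)
```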

Up until this point, the models we were working with were polysurfaces without thickness. Having settled on a scheme by which to subdivide the piece, the next major definition handled a detailed, rigid unfolding of each segment, with material thickness taken into account. This directly gave us the necessary channel widths to be milled into the piece for a given fold angle.
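
For a sense of the relationship between fold angle and channel width, here is one standard way to size a milled V-channel so that its walls just close at the target angle. This is a hedged sketch of the geometry, not the project’s actual detailing:

```python
import math

def channel_width(thickness, fold_angle_deg):
    """Width of material to remove so the channel faces meet exactly
    at fold_angle_deg, assuming the hinge sits on the outer skin."""
    theta = math.radians(fold_angle_deg)
    return 2.0 * thickness * math.tan(theta / 2.0)

channel_width(3.0, 90.0)   # 6.0: a right-angle fold in 3 mm sheet
```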

Finally, I developed a definition to process the results from this unfolding and automatically generate the curves to drive the CNC mill for the sheet perforations.

It is thrilling for a student of architecture and parametric design to see a project finally make it off the screen and become actual. The photos below show various stages of the pieces being fabricated. Images courtesy of Sarah Oppenheimer, Kunstbetrieb and Galerie Von Bartha.

The ribbed structural frame

Beginning to assemble the skin

The bottom piece

The top piece

The tip of the top piece

Seam and Crease Details

As my work with Sarah on this and other pieces continues I will post more updates!


Hey all –

While working on a project I came upon the need to do some unit conversion within GH, so I decided to write a script to handle this. I’ve attached it as a user object. It takes two strings defining the input and output unit systems, in the form “mm”, “in^2”, “yd^3”, etc., and outputs the necessary conversion factor. It also optionally accepts a list of values, and will convert them for you if they are present.
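
For anyone curious about the logic before downloading, here is a minimal Python sketch of the idea behind the component. The names, the unit table, and the parsing are illustrative assumptions, not the shipped user object’s internals:

```python
# Length of one unit, expressed in meters (extend the table as needed).
METERS_PER = {"mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
              "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344}

def parse(unit):
    """Split 'yd^3' into ('yd', 3); bare units get exponent 1."""
    name, _, power = unit.partition("^")
    return name.strip(), int(power) if power else 1

def conversion_factor(src, dst):
    """Factor f such that (value in src units) * f = value in dst units."""
    s_name, s_pow = parse(src)
    d_name, d_pow = parse(dst)
    if s_pow != d_pow:
        raise ValueError("cannot convert between different dimensions")
    return (METERS_PER[s_name] / METERS_PER[d_name]) ** s_pow

factor = conversion_factor("in^2", "mm^2")   # 645.16
values = [v * factor for v in [10.0, 25.4]]  # optional list conversion
```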

A word of caution: I have not tested this thoroughly, so it may produce some weird results – but so far it seems to work for me. If you catch a bug, please let me know! Also, please pay attention to the “out” stream – it will alert you to an error if there is a problem with your input. Even if there are errors, the component will still produce numerical output, so if you blindly trust the results without checking the error output you may be working with erroneous values.

Finally, if you take a look at the code, please don’t judge too harshly! I am an entirely self-taught programmer, so I am sure my code is inefficient and messy and misses a lot of best practices. If you do see something that could be improved upon, let me know; I am always eager to learn.

Download it here: Unit%20Conversion.ghuser

Cross-posted at my blog: https://heumanndesigntech.wordpress.com


I was inspired by a recent post on Lebbeus Woods’ fantastic blog to write a rather lengthy response on my understanding of the difference between manual and digital drawing. Since it is rare that I can get my thoughts together enough to post ideas on this blog in addition to images and techniques, I thought I would cross-post my reply here. This is really a central question for me, as a parametric practitioner of architecture who wants to find a working method that taps the capacities of computation without becoming a slave to algorithms and deterministic (often “optimized”) results.

Here is the quote from his blog:

I have no interest or intention of reopening old discussions of the pros and cons of hand versus computer drawings—they simply go nowhere. I’m willing to grant, for the sake of exploration, that one day a computer will be able to draw exactly like Masahiko Yendo. I repeat, exactly, with all the infinitely varied tonality and all the nuance of texture, shading, and illusion of light and darkness. For that to happen, of course, the pixels of the computer drawing would have to be infinitely small, creating the actual spatial continuity of the hand drawing. Assuming that this technological feat could be achieved, what difference would there be between the hand and the computer drawing?

Absolutely none—if we consider only the drawing itself, as a product, as an object, which—in our present society—is our habitual way of perceiving not only drawings, but also the buildings they describe.

I repeat: absolutely none. IF, however, we think of drawings—even the most seductively product-like ones shown here—as evidence of a process of thinking and making, the difference is vast. Indeed, there is no way to close the gap between them. In the hand-drawn image, every mark is a decision made by the architect, an act of analysis followed by an act of synthesis, as the marks are built up, one by one. In the computer-drawn image, every mark is likewise a decision, but one made by the software, the computer program—it happens in the machine, the computer, and does not involve the architect directly. In short, in the latter case, the architect remains only a witness to the results of a process the computer controls, learning only in terms of results. In the former case, the architect learns not only the method of making, but also the intimate connections between making and results, a knowledge that is essential to the conscious development of both.

LW

And here is my response:

Your last paragraph leaves an open question for me. Who designed this program? Who wrote the algorithms that “decide” where to make the marks? If the architect is the one who wrote it, can’t we read the end result as “evidence of a process of [his/her] thinking and making”?

At one extreme, we can imagine a program that reports back to the architect after every mark for a new set of instructions about how to produce the next mark. In this situation the drawing, in my view, is basically equivalent to hand drawing – every mark is the result of a decision by the architect, though executed by a machine. However, such a program is clearly impractical. Instead, the program might engage in slightly more “automatic” operations while continuing to rely on feedback from the architect. Here the question gets tricky – is there a real difference between the guided-but-partially-automatic drawing and one done entirely by hand?

I would respond to this not by defending the computer but by questioning the hand drawing – is EVERY mark really the result of a considered decision, involving analysis and synthesis? Wouldn’t it be closer to the truth to characterize sets of marks (as the results of sets of gestures) as the level at which decisions get made? When I draw by hand, I don’t say to myself “I am going to make this line here” so often as “I am going to shade this region here with a series of parallel hatches.” If so, isn’t the execution of that series of marks somewhat “automatic”? Aren’t I relying on an algorithm, or a non-conscious process, to really “decide” where each individual mark goes?

What I am trying to say is that your characterization of a computer program really represents only the far end of a wide spectrum, from “pure control,” where every nuance of every step is decided by the architect, to “pure automation,” where an algorithm is written once, executed, and the results collected. For me the fertile ground of digital practice and digital drawing lies in the middle, where aspects of the process are automatic and aspects are guided by human decisions. Moreover, I would argue that even hand drawing and other manual processes can be seen to fit this description. At some level, the act of drawing ALWAYS relies on an “automatic” process – even if we say that every mark is considered, we can break it down further: is every infinitesimal moment of the motion of the pencil (to make this curved line, I will move it here, then here, then here…) a decision?

This may sound like an argument that computer drawing and hand drawing are one and the same. In fact, I do not believe this – but I think you’ve put the distinction on the wrong grounds. First, you said that they could be considered one and the same when evaluating the end result object of the process. Then you said that the distinction lies between an architect making the decisions as he draws, against an architect letting the computer make all the decisions. I am as opposed to this notion of a fully autonomous drawing machine that can “draw like Yendo” as you are, but I think this mischaracterizes the present and future of digital drawing practice.

Instead, I think the grounds for the distinction are one level deeper, and lie in the mechanism by which the “automatic” or “non-decided” portion of the drawing is executed. When I draw by hand, the execution of a sub-decision level task (e.g. moving the pencil 0.02 mm to the left) is guided by my intuition – which is the result of all kinds of unconscious (though intelligent) processes in the brain and body. When I draw “by computer” I am relying on an algorithm to execute the sub-decision level tasks, and given the same inputs it will always produce the same outputs. It is not in consciousness that the difference between digital and manual drawing lies – it is in the contribution of the sub- or un-conscious. Furthermore, in the digital mode there is still room for intuition and subconscious effects – just not at the level of the “sub-decision,” and only insofar as those things guide and influence conscious decisions.

In short, there is a difference between digital and manual drawing, even if the products are seen to be the same – but it is limited to the execution of the portions of the drawing that lie beneath the level of conscious execution. Digital drawing can be just as much the product of an architect’s decisions — just as much the evidence of a process of thinking and making — as a hand drawing can be.


As an exercise, I decided to try to implement the Catmull-Clark subdivision algorithm in Grasshopper alone. This means no scripts and no third-party components (such as Weaverbird). This is not designed to be a utility – by all means, if you have to subdivide a mesh this way, just use Weaverbird. I am always interested in how traditional coding approaches translate into a non-textual language like Grasshopper, and this was a fun way to push at GH’s data management to achieve results comparable to scripting approaches.

The algorithm as implemented handles closed meshes with quad faces only; I will eventually attempt a version that can handle triangular faces as well.

I based my approach on the pseudocode available at Rosetta Code.
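
For anyone who wants to follow the logic without opening the definition, here is that same pseudocode rendered as a short Python sketch for closed, all-quad meshes. This is exactly the textual version the GH definition deliberately avoids; the names are my own:

```python
def average(points):
    n = float(len(points))
    return tuple(sum(c) / n for c in zip(*points))

def catmull_clark(verts, faces):
    """One subdivision step. verts: list of (x, y, z); faces: list of
    4-tuples of vertex indices; assumes a closed all-quad mesh."""
    # 1. Face points: the centroid of each face.
    face_pts = [average([verts[i] for i in f]) for f in faces]

    # Map each edge (as a sorted index pair) to its two adjacent faces.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(tuple(sorted((a, b))), []).append(fi)

    # 2. Edge points: average of the edge's ends and both face points.
    edge_pts = {e: average([verts[e[0]], verts[e[1]]] +
                           [face_pts[fi] for fi in adj])
                for e, adj in edge_faces.items()}

    # 3. Move original vertices to (F + 2R + (n - 3)P) / n, where F and
    #    R average the adjacent face points and edge midpoints.
    vert_faces, vert_mids = {}, {}
    for fi, f in enumerate(faces):
        for v in f:
            vert_faces.setdefault(v, []).append(face_pts[fi])
    for a, b in edge_faces:
        mid = average([verts[a], verts[b]])
        vert_mids.setdefault(a, []).append(mid)
        vert_mids.setdefault(b, []).append(mid)
    new_verts = []
    for v, p in enumerate(verts):
        n = len(vert_faces[v])
        F, R = average(vert_faces[v]), average(vert_mids[v])
        new_verts.append(tuple((fc + 2.0 * rc + (n - 3.0) * pc) / n
                               for fc, rc, pc in zip(F, R, p)))

    # 4. Rebuild: each quad becomes four quads around its face point.
    out_verts = list(new_verts)
    fp_i = {fi: len(out_verts) + fi for fi in range(len(faces))}
    out_verts.extend(face_pts)
    ep_i = {}
    for e, ep in edge_pts.items():
        ep_i[e] = len(out_verts)
        out_verts.append(ep)
    out_faces = []
    for fi, f in enumerate(faces):
        for i, v in enumerate(f):
            e_prev = tuple(sorted((f[i - 1], v)))
            e_next = tuple(sorted((v, f[(i + 1) % 4])))
            out_faces.append((v, ep_i[e_next], fp_i[fi], ep_i[e_prev]))
    return out_verts, out_faces
```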

DOWNLOAD DEFINITION HERE: GH_Catmull_Clark.gh