2025-05-09 12:00:00
Screens get a lot of blame these days. They’re accused of destroying attention spans, ruining sleep, enabling addiction, isolating us from one another, and eroding our capacity for deep thought. “Screen time” has become shorthand for everything wrong with modern technology and its grip on our lives. And as a result, those of us in design- and technology-focused spheres now face persistent propaganda claiming that screens are an outmoded interaction device, holding us back from some sort of immersive techno-utopia. They are not, and that utopia is a fantasy.
The screen itself is obviously not to blame — what’s on the screen is. When we use “screen” as a catch-all for our digital dissatisfaction, we’re conflating the surface with what it displays. It’s like blaming paper for misleading news. We might dismiss this simply as a matter of semantics, but language creates understanding and behavior. The more we sum up the culture of what screens display with the word “screens,” the more we push ourselves toward the wrong solution. The most recent version of this is the idea of the “screenless interface” and the recurring nonsense of clickbait platitudes like “The best interface is no interface.”
What we mean when we talk about the “screen” matters. And so it’s worth asking, what is a screen, really? And why can’t we seem to get “past” screens when it comes to human-computer interaction?
For all our talk of ambient computing, voice interfaces, and immersive realities, screens remain central to our digital lives. Even as companies like Apple and Meta pour billions into developing headsets meant to replace screens, what do they actually deliver? Heavy headgear that just places smaller screens closer to our eyes. Sure, they can provide a persistent immersive experience that a stationary panel cannot. But a headset’s persistent immersion doesn’t make a panel’s stationary nature a bug. What makes a screen especially useful is not what it projects at you, but what happens when you look away from it. It is then that a screen serves a fundamental cognitive purpose that dates back to the earliest human experiences and tools.
A screen is a memory surrogate. It’s a surface that holds information so we don’t have to keep it all in our heads. In this way, it’s the direct descendant of some of humanity’s most transformative devices: the dirt patch where our ancestors scratched out the first symbols, the cave wall that preserved their visions, the clay tablet that tracked their trades, the papyrus that extended their memories, the parchment that connected them across distances, the chalkboard that multiplied their teaching.
Think of Einstein’s office at Princeton, with its blackboards covered in equations. Those boards weren’t distractions from his thought — they were extensions of it. They allowed him to externalize complex ideas, manipulate them visually, and free his mind from the burden — the impossibility — of holding every variable simultaneously.
Our digital screens serve the same purpose, albeit with far greater complexity and interactivity. They hold vast amounts of information that would overwhelm our working memory. They visualize data in ways our minds can grasp. They show us possibilities we couldn’t otherwise envision. They hold them all in place for us, so that we can look away and then easily find them again when we return our gaze.
Comparing screens to Einstein’s chalkboards, of course, is a limited metaphor. Screens also display endless streams of addictive content designed to capture and hold our attention. But that’s not an inherent property of screens themselves — it’s a consequence of the business models driving what appears on them. The screen isn’t the attention thief; it’s merely the scene of the crime. (And yes, I do think future generations will regard today’s attention economy the way we now regard past norms we have since recognized as injustices.)
The connection between screens and attention matters, of course, because our brains have evolved to emphasize and prioritize visual processing. We can absorb and interpret visual information with remarkable efficiency; simply scanning a screen can convey more, faster, than listening to the same content read aloud. Visual processing also operates somewhat independently from our verbal reasoning, allowing us to think about what we’re seeing rather than using that cognitive capacity to process incoming language. We can scan at the speed of thought, but we can only listen at the speed of speech.
This is why efforts to create “screenless” interfaces often end up feeling limiting rather than liberating. Voice assistants work beautifully for discrete, simple tasks but become frustrating when dealing with complex information or multiple options. Information conveyed in sound has no place to be held; it can only be repeated. The screen persists because it matches fundamental aspects of human cognition by being a tool that, among other things, offers us persistence: a place to hold information.
None of this is to dismiss legitimate concerns about how we currently use screens. The content displayed, the contexts of use, the business models driving development — all deserve critical examination. But blaming the screen itself misses the point, misdirects our efforts to build healthier relationships with technology, and wastes our time on ridiculous technological fetch-quests for the next big device.
Perhaps instead of dreaming about moving “beyond screens,” we should focus on creating better screens and better screen experiences. “Better screens” is a problem of materials, longevity, energy consumption, light, and heat. There are so many things we could improve! “Better screen experiences” is a matter of cultural evolution, a generational project we can undertake together right now by thinking about what kind of information is worth being held for us by screens, as opposed to what kind of information is capable of holding our gaze captive.
The screen isn’t the problem. It’s one of our most powerful cognitive prosthetics, a brain buffer. Our screens are, together, a platform for cultural creation, the latest in a long line of surfaces that have enriched human existence. De-screening is not just a bad idea that misunderstands how brains work, and not just an insincere sales pitch for a new gadget. It’s an entirely wrong turn toward a worse future with more of the same, only noisier.
2025-05-02 12:00:00
Let me begin with a disambiguation: I’m not talking about AI as some theoretical intelligence emerging from non-biological form — the sentient computer of science fiction. That, I suppose, can be thought about in an intellectual vacuum, to a point. I’m talking about AI, the product. The thing being sold to us daily, packaged in press releases and demo videos, embedded in services and platforms.
AI is, fundamentally, about money. It’s about making promises and raising investment based upon those promises. The promises alone create a future — not necessarily because they’ll come true, but because enough capital, deployed with enough conviction, warps reality around it. When companies raise billions on the promise of AI dominance, they’re not just predicting a future; they’re manufacturing one.
Venture capital, at the highest levels, tends to look from the outside more like anti-competitive racketeering than finance. Enough investment, however concentrated in a handful of companies, can shape an entire industry or even an entire economy, regardless of whether it makes any sense whatsoever. And let’s be clear: the Big Tech firms investing in AI aren’t simply responding to market forces; they’re creating them, defining them, controlling them. Nobody asked for AI; we’ve been told to collaborate.
Which demonstrates that capitalism, like AI, is no longer a theoretical model about nice, tidy ideas like free markets and competition. The reality of modern capitalism reveals it to be, at best, a neutral system made non-neutral by its operators. The invisible hand isn’t invisible because it’s magical; it’s invisible because we’re not supposed to see whose hand it actually is.
You want names though? I don’t have them all. That’s the point. It’s easy to blame the CEOs whose names are browbeaten into our heads over and over again, but beyond them is what I think of as The Fear of the Un-captured Dollar and the Unowned Person — a secret society of people who seem to believe that human potential is one thing: owning all the stuff, wielding all the power, seizing all the attention.
We now exist in what people call “late-stage capitalism,” where meaningful competition only occurs among those with the most capital, and their battles wreck the landscape around them. We scatter and dash amidst the rubble like the unseen NPCs of Metropolis while the titans clash in the sky.
When capital becomes this concentrated, it exerts power at the level of sovereign nations. This reveals the theater that is the so-called power of governments. Nation-states increasingly seem like local franchises in a global system run by capital. This creates fundamental vulnerabilities in governmental systems that have not yet been tested by the degeneracy of late-stage capitalism.
And when that happens, the lack of power of the individual is laid bare — in the chat window, in the browser, on the screen, in the home, in the city, in the state, in the world. The much-lauded “democratic” technology of the early internet has given way to systems of surveillance and manipulation so comprehensive they would make 20th century authoritarians weep with envy, not to mention a fear-induced appeasement to the destruction of norms and legal protections that spreads across our entire culture like an overnight frost of fascism.
AI accelerates this process. It centralizes power by centralizing the capacity to process and act upon information. It creates unprecedented asymmetries between those who own the models and those who are modeled. Every interaction with an AI system becomes a one-way mirror: you see your reflection, while on the other side, entities you cannot see learn about you, categorize you, and make predictions about you.
So when a person resists AI, don’t assume they’re stubbornly digging their heels into the shifting sands of an outmoded ground. Perhaps give them credit for thinking logically and drawing a line between themselves and a future that treats them as nothing more than a bit in the machine.
Resistance to AI isn’t necessarily Luddism. It isn’t a fear of progress. It might instead be a clear-eyed assessment of what kind of “progress” is actually being offered — and at what cost.
Liberty in the age of AI requires more than just formal rights. It demands structural changes to how technology is developed, deployed, and governed. It requires us to ask not just “what can this technology do?” but “who benefits from what this technology does?”
And that conversation cannot happen if we insist on discussing AI as if it exists in a political and economic vacuum — as if the only questions worth asking are technical ones. The most important questions about AI aren’t about algorithms or capabilities; they’re about power and freedom.
To think about AI without thinking about capitalism, fascism, and liberty isn’t just incomplete — it’s dangerous. It blinds us to the real stakes of the transformation happening around us, encouraging us to focus on the technology rather than the systems that control it and the ends toward which it’s deployed.
Is it possible to conceive of AI that is “good” — as in distributed, not centralized; protective of intellectual property, not a light-speed pirate of the world’s creative output; respectful of privacy, not a listening agent of the powers-that-be; selectively and intentionally deployed where humans need the help, not a leveler of human purpose? (Anil Dash has some great points about this.) Perhaps, but such an AI is fundamentally incompatible with the system in which the AI we have has been created.
As AI advances, we face a choice: Will we allow it to become another tool for concentrating power and wealth? Or will we insist upon human dignity and liberty? The answer depends not on technological developments, but on our collective willingness to recognize AI for what it is: not a force of nature, but a product of flawed human choices embedded in vulnerable human systems.
2025-04-25 12:00:00
Are we entitled to technology?
A quick thought experiment: A new technological advance gives humans the ability to fly. Does it also confer upon us the right to fly? Let’s say this isn’t a Rocketeer situation — not a jetpack — but some kind of body-hugging anti-gravitic field, just to make it look and feel ever so much more magical and irresistible. Would that be worthy of study and experimentation? I’d have to say yes. But would it be a good idea to use it? I’d have to say no.
We’ve learned this lesson already.
Are we entitled to access to anyone, anytime? That’s a tough one; it tugs on ideas of access itself — what that means, and how — as well as ideas of inaccess, like privacy. But let’s just say I’m walking down the street and see a stranger passing by. Is it my right to cross the street to say hello? I would say so. And I can use that right for many purposes, some polite — such as introducing myself — and some not so — like abruptly sharing some personal belief of mine. Fortunately, this stranger has the right to ignore me and continue on their way. And that’s where my rights end, I think. I don’t have the right to follow them shouting.
It turns out that’s what Twitter was. We got the jetpack of interpersonal communication: a technology that gives us the ability to reach anyone anytime. With it came plenty of good things — good kinds of access, a good kind of leveling. A person could speak past, say, some bureaucratic barrier that would have previously kept them silent. But it also allowed people with the right measure of influence to inundate millions of other people with lies to the point of warping the reality around them and reducing news to repeating and reprinting those lies just because they were said.
Leave something like this in place long enough, and the technology itself becomes an illegitimate proxy for a legitimate right. Free speech, after all, does not equal an unchallenged social media account.
Steeper and slicker is the technological slope from can to should to must.
–
Today I learned that before Four Tet, Kieran Hebden was the guitarist for a group called Fridge. I listened to their second album, Semaphore, this morning and it’s a fun mix of noises that feels very connected to the Four Tet I’ve known.
The reason I mention this, though, is that it represents a pretty important principle for us all to remember.
Don’t assume someone knows something!
I’ve been a Four Tet fan ever since a friend included a song from Pause on a mix he made for me back in 2003. Ask me for my top ten records of all time, and I’ll probably include Pause. And yet it was only today, over two decades later, after watching a Four Tet session on YouTube, that I thought to read the Four Tet Wikipedia page.
–
I’ve been staring at Pavel Ripley’s sketchbooks this week. It has been especially rare for me to find other people who use sketchbooks in the same way I do — as a means and end, not just a means. If you look at his work you’ll see what I mean. Just completely absorbing.
My bud Blagoj, who has excellent taste, sent this Vercel font called Geist my way a while back. It has everything I like in a font — many weights, many glyphs, and all the little details at its edges and corners. USING IT.
These hand-lettered magazine covers are so good.
I’m vibing with these cosmic watercolors by Lou Benesch.
An Oral History of WIRED’s Original Website is worth reading (paywall tho), and especially in an ad-blocking browser (I endorse Vivaldi), because as much as I love them, WIRED’s website has devolved into a truly hostile environment.
“As a middle-aged man, I would’ve saved loads on therapy if I’d read Baby-Sitters Club books as a kid.” SAME.
Richard Scarry and the art of children’s literature.
If you’re reading this via RSS, that’s really cool! Email me — [email protected] — and let me know!
2025-04-24 12:00:00
I often find myself contemplating the greatest creators in history — those rare artists, designers, and thinkers whose work transformed how we see the world. What constellation of circumstances made them who they were? Where did their ideas originate? Who mentored them? Would history remember them had they lived in a different time or place?
Leonardo da Vinci stands as perhaps the most singular creative mind in recorded history — the quintessential “Renaissance Man” whose breadth of curiosity and depth of insight seem almost superhuman. Yet examples like Leonardo can create a misleading impression that true greatness emerges only once in a generation or century. Leonardo lived among roughly 10-13 million Italians — was greatness truly as rare as one in ten million? We know several of his contemporaries, but still, the ratio remains vanishingly small. This presents us with two possibilities: either exceptional creative ability is almost impossibly rare, or greatness is more common than we realize and what is rare is recognition.
I believe firmly in the latter. Especially today, when we live in an attention economy that equates visibility with value. Social media follower counts, speaking engagements, press mentions, and industry awards have become the measuring sticks of design success. This creates a distorted picture of what greatness in design actually means. The truth is far simpler and more liberating: you can be a great designer and be completely unknown.
The most elegant designs often fade into the background, becoming invisible through their perfect functionality. Day-to-day life is strewn with the artifacts of unrecognized ingenuity — the comfortable grip of a vegetable peeler, the intuitive layout of a highway sign, or the satisfying click of a well-made light switch. These artifacts represent design excellence precisely because they don’t call attention to themselves or their creators. Who is responsible for them? I don’t know. That doesn’t mean they’re not out there.
This invisibility extends beyond physical objects. The information architect who structures a medical records system that saves lives through its clarity and efficiency may never receive public recognition. The interaction designer who simplifies a complex government form, making essential services accessible to vulnerable populations, might never be celebrated on design blogs or win prestigious awards.
Great design isn’t defined by who knows your name, but by how well your work serves human needs. It’s measured in the problems solved, the frustrations eased, the moments of delight created, and the dignity preserved through thoughtful solutions. These metrics operate independently of fame or recognition.
Our obsession with visibility also creates a troubling dynamic: design that prioritizes being noticed over being useful. This leads to visual pollution, cognitive overload, and solutions that serve the designer’s portfolio more than the user’s needs. When recognition becomes the goal, the work itself often suffers. I was among the few who didn’t immediately recoil at the brash aesthetics of the Tesla Cybertruck, but it turns out that no amount of exterior innovation changes the fact that it is just not a good truck.
There’s something particularly authentic about unknown masters — those who pursue excellence for its own sake, refining their craft out of personal commitment rather than in pursuit of accolades. They understand that their greatest achievements might never be attributed to them, and they create anyway. Their satisfaction comes from the integrity of the work itself.
This isn’t to dismiss the value of recognition when it’s deserved, or to suggest that great designers shouldn’t be celebrated. Rather, it’s a reminder that the correlation between quality and fame is weak at best, and that we should be suspicious of any definition of design excellence that depends on visibility. This is especially so today. The products of digital and interaction design are mayflies; most of what we make is lost to the rapid churn of the industry even before it can be lost to anyone’s memory.
The next time you use something that works so well you barely notice it, remember that somewhere, a designer solved a problem so thoroughly that both the problem and its solution became invisible. That designer might not be famous, might not have thousands of followers, might not be invited to speak at conferences — but they’ve achieved something remarkable: greatness through invisibility.
Design greatness is not measured by the recognition of authorship, but by the creation of work so essential it becomes as inevitable as gravity, as unremarkable as air, and as vital as both.
2025-04-16 12:00:00
Our world treats information like it’s always good. More data, more content, more inputs — we want it all without thinking twice. To say that the last twenty-five years of culture have centered around info-maximalism wouldn’t be an exaggeration.
I hope we’re coming to the end of that phase. More than ever before, it feels like we have to — that we just can’t go on like this. But the solution cannot come from within; it won’t be a better tool or even better information to get us out of this mess. It will be us, feeling and acting differently.
Think about this comparison: Information is to wisdom what pornography is to real intimacy.
I’m not here to moralize, so I make the comparison to pornography with all the necessary trepidation. Without judgement, it’s my observation that pornography depicts physical connection while creating emotional distance. I think information is like that. There’s a difference between information and wisdom that hinges on volume. More information promises to show us more of reality, but too much of it can easily hide the truth. Information can be pornography — a simulation that, when consumed without limits, can weaken our ability to experience the real thing.
When we feel overwhelmed by information — anxious and unable to process what we’ve already taken in — we’re realizing that “more” doesn’t help us find truth. But because we have also established information as a fundamental good in our society, failure to keep up with it, make sense of it, and even profit from it feels like a personal moral failure. There is only one way out of that. We don’t need another filter. We need a different emotional response to information. We should not only question why our accepted spectrum of emotional response to information — in the general sense — is mostly limited to the space between curiosity and desire, but actively develop a capacity for disgust when it becomes too much. And it has become too much.
Some people may say that we just need better information skills and tools, not less information. But this misses how fundamentally our minds need space and time to turn information into understanding. When every moment is filled with new inputs, we can’t fully absorb, process, and reflect upon what we’ve consumed. Reflection, not consumption, creates wisdom. Reflection requires quiet, isolation, and inactivity.
Some people say that while technology has expanded over the last twenty-five years, culture hasn’t. If they needed a good defense for that idea, well, I think this is it: A world without idleness is truly a world without creativity.
I’m using loaded moral language here for a purpose — to illustrate an imbalance in our information-saturated culture. Idleness is a pejorative these days, though it needn’t be. We don’t refer to compulsive information consumption as gluttony, though we should. And if attention is our most precious resource — as an information-driven economy would imply — why isn’t its commercial exploitation condemned as avarice?
As I ask these questions I’m really looking for where individuals like you and me have leverage. If our attention is our currency, then leverage will come with the capacity to not pay it. To not look, to not listen, to not react, to not share. And as has always been true of us human beings, actions are feelings echoed outside the body. We must learn not just to withhold our attention but to feel disgust at ceaseless claims to it.
2025-04-08 12:00:00
Technology functions as both mirror and lens — reflecting our self-image while simultaneously shaping how we see everything else. This metaphor of recursion, while perhaps obvious once stated, is one that most people instinctively resist.
Why this resistance? I think it is because the observation is not only about a kind of recursion, but it is itself recursive.
The contexts in which we discuss technology’s distorting effects tend to be highly technological — internet-based forums, messaging, social media, and the like. It’s difficult to clarify from within, isn’t it? When we try to analyze or critique a technology while using it to do so, it’s as if we’re critiquing the label from inside the bottle. And these days, the bottle is another apt metaphor; it often feels like technology is something we are trapped within.
And that’s just at the surface — the discussion layer. It goes much deeper.
It’s astounding to confront the reality that nearly all the means by which we see and understand ourselves are technological. So much of modern culture is in its artifacts, and the rest couldn’t be described without them. There have been oral traditions, of course, but once we started making things, they grew scarce. For a human in the twenty-first century, self-awareness, cultural identification, and countless other aspects of existence are all, in some way or another, technological.
It’s difficult to question the mirror’s image when we’ve never seen ourselves without it.
The interfaces through which we perceive ourselves and interpret the world are so integrated into our experience that recognizing their presence, let alone their distorting effects, requires an almost impossible perspective shift.
Almost impossible.
Because of course it can be done. In fact, I think it’s a matter of small steps evenly distributed throughout a normal lifestyle. It’s not a matter of secret initiation or withdrawing from society, though I think it can sometimes feel that way.
How, then, can one step outside the mirror’s view?
I’ve found three categories of action particularly helpful: elimination, curation, and optimization.
One option we always have is to simply not use a thing. I often think about how fascinating it is that to not use a particular technology in our era seems radical — truly counter-cultural. The more drastic rejecting any given technology seems, the better an example it is of how dependent we have become upon it. Imagine how difficult a person’s life would be today if they were to entirely reject the internet. There’s no law in our country against opting out of the internet, but the countless day-to-day dependencies upon it nearly amount to a cumulative obligation to be connected to it. Nevertheless, a person could do it. Few would, but they could.
This kind of “brute force” response to technology has become a YouTube genre — the “I Went 30 Days Without ____” video is quite popular. And this is obviously because of how much effort it requires to eliminate even comparatively minor technologies from one’s life. Not the entire internet, but just social media, or just streaming services, or just a particular device or app.
Elimination isn’t easy, but I’m a fan of it. The Amish are often thought of as simply rejecting modernity, but that’s not an accurate description of what actually motivates their way of life. Religion plays a foundational role, of course, but each Amish community works together to decide upon many aspects of how they live, including what technologies they adopt. Their guiding principle is whether a thing or practice strengthens their community. And their decision is a collective one. I find that inspiring. When I reject a technology, I do so because I either don’t feel I need it or because I feel that it doesn’t help me live the way I want to live. It’s not forever, and it isn’t with judgement for anyone else but me.
These are probably my most radical eliminations: most social media (I still reluctantly have a LinkedIn profile), streaming services (except YouTube), all “smart home” devices of any kind, smartwatches, and for the last decade and counting, laptops. Don’t @ me because you can’t ;)
The second category is curation, and what I have in mind is curation of information, not of technologies. Since it is simply impossible to consume all information, we all curate in some way, whether we’re aware of it or not. For some, though, this might actually be a matter of what technologies they use — for example, if a person only uses Netflix, then they only see what Netflix shows them. That’s curation, but Netflix is doing the work. However, I think it’s a good exercise to do a bit more curation of one’s own.
I believe that if curation is going to be beneficial, it must involve being intentional about one’s entire media diet — what information we consume, from which sources, how frequently, and why. This last part requires the additional work of discerning what motivates and funds various information sources. Few, if any, are truly neutral.
The reality is that as information grows in volume, the challenge of creating useful filters for it increases to near impossibility. Algorithm-driven information environments filter information for you based upon all kinds of factors, some of which align with your preferences and many of which don’t. There are many ways to avoid this; all of them are more inconvenient than a social media news feed, and it is imperative that more people make the effort anyway. They range from subscribing to carefully-chosen sources to using specialized apps, feed readers, ad- and tracker-blocking browsers, and VPNs to control how information gets to you. I recommend all of that, plus constant vigilance, because, sadly, there is no filter that will only show you the true stuff.
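As one concrete illustration of the feed-reader approach, here is a minimal sketch in Python of what “you choose the sources, you do the pulling” can look like. It assumes the third-party feedparser library, and the feed URLs are placeholders rather than recommendations; a real setup would point at whatever carefully-chosen sources survive your own curation.

```python
# A minimal sketch of a hand-curated "news feed": you decide the sources,
# and nothing is ranked or injected for you by an algorithm.
# Assumes the third-party feedparser package (pip install feedparser).
import feedparser

# Placeholder URLs -- swap these for your own carefully-chosen sources.
FEEDS = [
    "https://example.com/essays/rss.xml",
    "https://example.org/news/feed",
]

def latest(feeds, per_feed=5):
    """Yield the newest few items from each feed, in the order you chose the feeds."""
    for url in feeds:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries[:per_feed]:
            yield source, entry.get("title", "(untitled)"), entry.get("link", "")

if __name__ == "__main__":
    for source, title, link in latest(FEEDS):
        print(f"{source}: {title}\n  {link}")
```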
Finally, there’s optimization — the fine-tuning you can do to nearly anything and everything you use. I’ve become increasingly active in seeking out and adjusting even the most detailed of application and device settings, shaping my experiences to be quieter, more limited, and aligned with my intentions rather than the manufacturers’ defaults.
I recently spent thirty minutes redesigning nearly my entire experience in Slack, in ways I had never been aware were even possible. It’s made a world of difference to me. Just the other day, I found a video that had several recommendations for altering default settings in macOS that have completely solved daily annoyances I have just tolerated for years. I am always adjusting the way I organize files, the apps I use, and the way I use them because I think optimization is always worthwhile. And if I can’t optimize it, I’m likely to eliminate it.
None of these approaches offers perfect protection from technological mediation, but together they create meaningful space for more direct control over your experience.
But perhaps most important is creating physical spaces that remain relatively untouched by digital technology.
I often think back to long trips I took before the era of ubiquitous computing and connection. During a journey from Providence to Malaysia in 2004, I stowed my laptop and cell phone knowing they’d be useless to me during 24 hours of transit. There was no in-cabin wifi, no easy way to have downloaded movies to my machine in advance, no place to even plug anything in. I spent most of that trip looking out the window, counting minutes, and simply thinking — a kind of unoccupied time that has become nearly extinct since then.
What makes technological discernment in the digital age particularly challenging is that we’re drowning in a pit of rope where the only escape is often another rope. Information technology is designed to be a nearly wraparound lens on reality; it often feels like the only way to keep using a thing is to use another thing that limits the first thing. People who know me well have probably heard me rant for years about phone cases — “why do I need a case for my case?!” These days, the sincere answer to many peoples’ app overwhelm is another app. It’s almost funny.
And yet, I do remain enthusiastic about technology’s creative potential. The ability to shape our world by making new things is an incredible gift. But we’ve gone overboard, creating new technologies simply because we can, without a coherent idea of how they’ll shape the world. This makes us bystanders to what Kevin Kelly describes as “what technology wants” — the agenda inherent in digital technology that makes it far from neutral.
What we ultimately seek isn’t escape from technology itself, but recovery of certain human experiences that technology tends to overwhelm: sustained attention, silence, direct observation, unstructured thought, and the sense of being fully present rather than partially elsewhere.
The most valuable skill in our digital age isn’t technical proficiency but technological discernment — the wisdom to know when to engage, when to disconnect, and how to shape our tools to serve our deeper human needs rather than allowing ourselves to be shaped by them.
“It does us no good to make fantastic progress if we do not know how to live with it.”
– Thomas Merton