Recent entries from the blog.
-
-
This is a handy set of guidelines for using AI at work, acknowledging that if you toss unexamined AI output onto your colleagues for review, they will not thank you for it. The principles here are to be responsible for whatever you are sharing—whether you used AI or not—and to be transparent about when you did use AI. I would use these guidelines not as-is but as a starting point for a discussion with your own teams about how you are working together, and how that work is changing. Already I am seeing examples of the gendered and racial dynamics of how AI-generated code and docs are being shared: cishet white men are much more likely to share workslop without first doing any reading, while women and people of color are more likely to be on the receiving end of a request to review that slop. If you recognize yourself in the first part of that dynamic—well, here’s your invitation to stop being an asshole; if you recognize yourself in the latter, you can use these guidelines to get a discussion going, but at some point, you’re going to have to refuse.
-
A group of tech workers wants to help you organize around AI policies in your workplace. Good resources here, including an AI workers inquiry toolkit, tips for getting started with organizing, and—my personal fave—AI workplace bingo, where you can keep track of brilliant invectives including, “You’re just not using it right,” and “We’re not paid to worry about social harms.” Don’t play alone.
-
Jason Koebler reports on a study that defines “work slop” (truly a cursed phrase) as work that “masquerades as good work, but lacks the substance to meaningfully advance a given task.” Predictably, the study shows that the prevalence of work slop is a torpedo to collaboration and trust: if you have to hunt around for hallucinations in your colleague’s work, how can you trust anything they say or do? And perhaps that is actually the point of this whole phenomenon: in the same way that slop across our social networks makes it impossible to believe in even a semblance of reality, work slop makes it unwise to treat your coworkers as human. But who wins when we see each other as little more than faulty tools? Not us.
-
Virginia Valian has a work problem: she struggles to do the work she wants to do. In this essay from 1977 (PDF), she describes how she identified the problem and the program she used to address it—one that involves very short periods of work alongside a deepening awareness that good work is its own reward. Her program isn’t a means of doing more work for other people, but for herself; and it isn’t a productivity hack so much as a means of becoming attuned to what your work means to you and why it matters. Importantly, what she describes involves being able to do the work that matters to you even when times are difficult—the work becomes a salve, not an obligation. “For me, there are two main rewards for working,” she writes. “One is the continual discovery within myself of new ideas; the other is deeper understanding of a problem.” (Stay for the kicker.)
-
Brian Merchant asked workers what AI was doing to their jobs and got back loads of thoughtful, hilarious, at times desperate and at times righteous responses. In this post, he shares what he heard from tech workers as they deal with edicts to throw AI at everything, bottom lines be damned. The whole thing is long, but it’s worth reading in full (and the kicker is its own reward); the stories make plain that AI is being used to deskill, de-spirit, and demean the work and craft that so many people have spent years developing, and that the promise of AI is the kind of promise wiser people have learned to expect from the emperor’s tailor. What I’ll call out here: if you’re one of the many, many workers who is angry, fearful, demoralized and worse about the AI sloppification of work, know that you are not alone, and you likely aren’t even in the minority.
-
Oliver Burkeman on the insufferable edicts to use AI in your work: “The obvious answer, of course, is that you might have no choice: that given what’s coming, anyone who wants to keep food on the table must give up their dreams of aliveness, and buckle down to placating the machines instead. I have two things to say about that, the first of which is that I don’t believe it: that aliveness is so central to meaningful human experience that there’ll always be a market for those who can cultivate it, embed it in what they create, foster it in institutions and organizations, and bring people together to experience it. But the second is that even if I’m hopelessly wrong about that, and the direst predictions about AI disruption come true, then navigating through life by aliveness is still the right choice, because that’s what makes life worth living.”
To put this another way: even if you believe that shackling yourself to the machines is the only way to keep food on the table, you’re still coming to harm. Any choice you make here isn’t between safety and harm but between different kinds of harm. And maybe the threats are just that—sneering words spit from the mouths of bullies. Maybe it’s time to call their bluff.
-
Applications are open now for the summer speculative fiction work/shop. The work/shop will gather a small group of people eager to imagine what comes next in their work, and all too aware that the usual tricks—the planning and projections, the goals and milestones and objectives—aren’t the right tools. We’ll use speculative fiction to break out of those ruts, to open up a lens on how we think about work that creates more awareness, more opportunities to revise and re-story our work, more room to maneuver—even on the darkest of days. If you feel stuck, uncertain, or lost in your work (or know someone who does) and want to open up some space to imagine different futures, if you want room to think more expansively and in community—this is for you.
-
Defector has a series of interviews with federal workers, including this one with Sabrina Valenti, who was a budget analyst at NOAA. It’s abundantly clear reading these pieces how much the administration is attacking workers themselves as well as the work they do to make the world a living, thriving place for all of us. In Valenti’s own words: “The work that we do benefits the American people. And when I say the American people, I mean all of them, not just the ones who are wealthy, not just the ones who live in certain locations. Every single person who lives near a body of water, whether it’s a river, a gulf, an ocean, they benefit from the work that NOAA does. For the dismantling to be proceeding apace, it’s destroying the hopes of thousands of people who have dreamed of public service. I have colleagues who were fired who wanted to work at NOAA since they were in elementary school. And the reason that we do our jobs is because we’re passionate about the subject. We’re passionate about the mission. And we’re passionate about serving the entire country, everyone.”
-
This memo from a group of lawyers contains brief, eminently readable, and plainly argued context for why the new administration’s targeting of DEIJ programs doesn’t change the underlying legality of those programs, nor does it require organizations to proactively eliminate those programs or to scrub their websites of mention of them. The memo is oriented towards universities but reads (to this non-lawyer, at least) like the kind of argument that would also apply to companies both large and small. Perhaps most critically, it points out that the January 20th executive order “concedes that DEI initiatives are not inherently unlawful,” and that the order “is constitutionally suspect because it appears to rest on pernicious stereotypes that presume the intellectual inferiority of women and Black people.” To me, that’s the strongest counter to anyone who says that the order compels an organization to jettison its DEIJ programs: to comply with the order is to reinforce those pernicious stereotypes. Anyone who chooses compliance should be reminded of that, loudly and persistently.