This week, we explore two compelling shifts—one in the architecture of the digital world, the other in the architecture of human experience. Both ask what happens when long-held defaults begin to dissolve, whether those defaults are inherited structures of online control or the unwritten scripts of a conventional life. What we label disruption in business often emerges from a refusal to comply with established norms—sometimes out of defiance, often out of necessity. The same can be said of how we build lives that don’t fit neatly into templates.
At one end, platform businesses are being reimagined not just by how they acquire users and data but by how they interpret boundaries—legal, ethical, and infrastructural. At the other, a conversation over lunch reminded us how childhoods shaped in uniformity and simplicity can still produce bold entrepreneurial journeys and ethical compasses, grounded in grit and grace.
Running, like building, asks the same question over and over again: will you keep going when the terrain changes?
DTW
During the Week, Anthropic, the maker of Claude AI, won a major legal reprieve when a U.S. District Court ruled that using copyrighted books for training purposes, and converting them from print to digital, constituted "fair use." However, the court also found that Anthropic’s use of pirated copies to build a central digital library was not protected under fair use. While the company avoided a blanket infringement verdict, it must now face a trial to determine damages for this subset of unauthorised materials. The ruling marks a mixed outcome—a favourable precedent for AI firms seeking legitimacy for model training data, but also a cautionary signal about sourcing content unlawfully. As the legal landscape evolves, AI firms must navigate between speed and compliance, innovation and ethical stewardship.
Data is the new oil—and AI companies are the wildcatters. The big generative AI platforms require enormous training data to function well, and the most effective way to obtain high-quality data has been to integrate, license, scrape, or acquire content from legacy media, internet archives, and user-generated content platforms.
OpenAI’s initial edge came from partnerships—particularly its deal with Microsoft, which brought not just funding and infrastructure but access to enterprise data via GitHub (Copilot), LinkedIn, and the Office suite. This is the “borrow” component, where legacy content or formats are repurposed through partnerships and API integrations. Meanwhile, Google’s Gemini and Meta’s LLaMA have had to rely on alternative scraping tactics, acquiring datasets from the open internet—often pushing the boundary of fair use.
The legal dimension of content acquisition has become increasingly pronounced. As AI companies scale, they face lawsuits for copyright infringement and IP misuse. The New York Times sued OpenAI and Microsoft, alleging that their copyrighted articles were used without consent to train language models. Getty Images is suing Stability AI over the unauthorised use of its stock photography. Meanwhile, book authors and artists have brought class actions against companies like Meta and Anthropic for training on copyrighted texts. You can read more about the current strategies in an earlier issue of this newsletter.
These lawsuits reflect a broader debate: should AI companies ask for permission or forgiveness? The “ask forgiveness” model dominated the last decade, with companies like Uber, Airbnb, and Facebook pushing into new markets and regulatory grey zones before laws could catch up. But with AI, the stakes are higher—and the legal system is starting to catch up.
Let’s examine how the big four—Apple, Amazon, Meta, and Alphabet—are implementing this strategy as they try to catch Microsoft and OpenAI:
Apple: Control First, AI Later - Apple is playing a slow, defensive game. Its core strategy is control: by limiting Progressive Web Apps (PWAs) and pushing developers into its App Store, Apple is defending its walled garden. But even Apple has started to borrow: it signed deals with Shutterstock and other providers for licensed training data. Its forthcoming Apple Intelligence system appears designed to be tightly sandboxed—minimal scraping, maximum compliance.
Amazon: Commercial Partnerships and Quiet Acquisition - Amazon recently inked a deal with The New York Times to license its content for Alexa and other AI systems. It has also backed Anthropic with billions, mirroring Microsoft’s approach to OpenAI. Amazon’s strength lies in its ecosystem: Alexa, Kindle, and AWS together form a self-reinforcing flywheel. But its AI products have been slower to gain market mindshare.
Meta: Open Models, Closed Motives - Meta’s LLaMA models are open-source (with caveats), and it has been aggressive in scraping data to build them. Yet its Threads integration with the fediverse suggests an interest in decentralised models of engagement. However, Meta has also faced scrutiny for using public Instagram and Facebook posts as training material—often without explicit consent. Its strategy is closest to the “steal” end of the spectrum. In its latest play, Meta is embedding Meta AI assistants inside WhatsApp, turning chat apps into AI battlegrounds.
Alphabet: The Guardian of Search - Alphabet’s Bard (now Gemini) has had access to Google’s search corpus, YouTube transcriptions, and more. Yet Alphabet is vulnerable: news publishers, artists, and musicians are increasingly pushing back against its content usage. Its recent AI Overviews feature—which summarises search results—has drawn fire for using publishers’ work without compensation. Google is now pivoting towards licensing, striking partnerships with Reddit, Stack Overflow, and others to train its models.
For platform companies, the Beg-Borrow-Steal approach offers speed and scale. But it also invites regulatory blowback. As AI becomes a general-purpose technology touching education, law, medicine, and politics, the need for consent, attribution, and compensation grows stronger.
Some firms are now retrofitting legitimacy after the fact: licensing data they previously scraped, or forming consortiums with publishers and creators. Others continue to argue that public content implies fair use for training purposes—an assumption increasingly being tested in courts. Historically, platforms have operated on the edge of legality and public opinion. Uber “stole” taxi markets, Airbnb “borrowed” housing supply, and Facebook “begged” forgiveness for privacy lapses. These models worked until regulators responded. The AI playbook is similar but riskier—because it deals not just with markets, but with language, knowledge, and creativity itself.
As we move into an AI-saturated future, the winners may not be those who scrape the most—but those who partner, license, and co-create in ways that are sustainable. “Beg, Borrow, Steal” may be the phase we’re in, but the next phase must be “Build, Share, and Respect.”
That’s where real platform resilience—and public trust—will lie.
OTW
Over the Weekend, the inaugural session of Lessons from their Life wasn’t just about building a firm—it was about remembering where one starts. At BuckSpeak this Saturday, Mr. Chandrasekhar Atmakuri, founder of Atmakuri & Company, invited us into his early world: township lanes, open homes, and school classrooms that doubled as temples of transformation.
What stood out wasn’t the scale of his professional success, but the emotional clarity with which he recalled his childhood. Those of us in the room—many of whom had grown up in similar government quarters—nodded in recognition. The transistor radios, the regimental routines, the schoolmasters who doubled as life mentors: it was less a talk, more a time machine.
The real ledger, as Atmakuri Sir reminded us, is shaped early. The mentors who step in when it matters. The routines that instil discipline. The quiet, unnoticed dignity of middle-class ambition. His journey from those early days to building one of Hyderabad’s most respected accounting firms felt less like a rags-to-riches tale, and more like a chart of values compounding over time.
Special thanks to Subir Jha, Founder at BuckSpeak, for thoughtfully organising and moderating the session. The conversation was as warm as the company—and the Chinese lunch, the guest’s favourite, made it all the more memorable. A lovely Saturday well spent!
In a world obsessed with unicorns, it was a gentle, powerful reminder: the best returns often come from lives invested patiently, with purpose and grace.
I Love You
Shailendra