From Companions to Corporations: What We’re Quietly Automating
AI Is Reshaping How We Connect, Create, and Lead—Are You Designing It or Defaulting to It?
It’s Not Just a New Financial Year or Quarter. It’s a New Question: What Are We Really Building With AI?
1 July marks more than a reset on the financial calendar.
It’s a chance to step back and ask not just how AI is accelerating—but what it’s reshaping.

This issue opens our FY26 lens. It surfaces five stories that span intimacy, identity, creativity, legality, and labour—and asks: what systems are we reinforcing, and what possibilities are we quietly closing off?
In this new financial year, AI isn’t just scaling workflows.
It’s learning how we relate.
It’s deciding what gets copied.
It’s shaping who gets trusted.
And it’s training on the culture we leave behind.
These aren’t just questions of ethics, IP, or operations. They’re questions of design intent—and they demand human-centred courage.
Here’s what we’re opening with this year:
Hinge CEO: AI Companions Are “Playing with Fire”
→ When intimacy becomes interface, what happens to emotional growth?
Disney vs. Midjourney
→ Why ownership, consent, and creativity can’t be automated
Anthropic’s Copyright Ruling
→ Legal clarity meets ethical ambiguity in AI’s data diet
Amazon CEO on Workforce Disruption
→ Rethinking skills, culture, and resilience—before roles disappear
Goldman Sachs’ AI Assistant Rollout
→ The difference between deploying productivity and designing for trust
Let’s begin this year with better questions. Let’s build with clarity, not by default.

Hinge CEO Warns: AI Companions Are “Playing with Fire”
Topic: Human-Centred Design | Emotional Integrity | AI Ethics

Quick Insight
In a moment where AI chatbots are being marketed as companions, Hinge CEO Justin McLeod is drawing a bold line: “This is playing with fire.” He warns that while AI can enhance parts of the dating journey, outsourcing intimacy to machines risks undermining our emotional health.
This isn’t just a product question. It’s a systems design challenge—one that touches emotion, ethics, and trust.
Details
In a recent conversation on Decoder with Nilay Patel, McLeod challenged the growing trend of AI-generated “companions.” He likened the concept to junk food—emotionally engineered for instant comfort, but nutritionally empty in the long term. He cautioned that treating AI as an emotional surrogate could backfire, leaving users lonelier, not more connected.
But McLeod isn’t anti-AI. In fact, Hinge is actively exploring tools that help humans date better, not avoid dating altogether. As Global Dating Insights reports, Hinge may soon deploy AI dating coaches that help users craft profiles, clarify preferences, or rehearse hard conversations.
That distinction—between augmenting emotional intelligence and automating intimacy—is at the heart of this design moment.
Why It Matters
This perspective highlights the need for clear ethical guidelines and human-centred design in the deployment of AI on dating platforms. It prompts critical questions:
How can we ensure that AI supports rather than supplants human connection?
What measures are necessary to prevent the misuse of AI-generated content in romantic contexts?
How do we balance technological innovation with the preservation of emotional integrity?
👉 Consider:
What policies can organisations implement to safeguard emotional well-being in the age of AI?
How can designers and technologists collaborate to create ethical AI applications that enhance rather than replace human interaction?
How can we educate users about the responsible use of AI in personal relationships?
Disney & Universal vs. Midjourney: The Battle for Creative Ownership in the Age of AI
Topic: Intellectual Property | Human-Centred Design | Ethical AI

Quick Insight
As AI starts generating media that feels emotionally familiar, this case asks: Who owns the characters we trust? Disney and Universal have filed a landmark lawsuit against AI company Midjourney, alleging unauthorised use of their copyrighted characters in AI-generated content. This legal action highlights the growing tension between creative ownership and the capabilities of generative AI.
Details
In June 2025, Disney and Universal initiated legal proceedings against Midjourney, accusing the AI firm of infringing on their intellectual property by generating images of iconic characters without authorisation. The lawsuit emphasises the studios' commitment to protecting their creative assets in the face of rapidly advancing AI technologies.
The controversy intensified when images of an AI-generated Darth Vader began circulating on social media, reportedly surfacing in Fortnite gameplay, where players prompted inappropriate in-game behaviour. The episode raised fresh concerns about the ethical use of AI and the potential misuse of beloved IP in interactive media.
This case is part of a broader discourse on the ethical and legal implications of AI in creative industries, prompting discussions among designers, ethicists, and legal experts.
Why It Matters
This lawsuit underscores the necessity for clear ethical guidelines and legal frameworks for deploying AI within creative domains. It prompts critical questions:
How can we ensure that AI respects the rights of original creators?
What measures are necessary to prevent misuse of AI-generated content?
How do we balance innovation with the protection of creative legacies?
👉 Consider:
What policies can organisations implement to safeguard intellectual property in the age of AI?
How can designers and technologists collaborate to create ethical AI applications?
In what ways can we educate users about the responsible use of AI-generated content?
Anthropic’s Copyright Ruling: A Win for AI—But Far From Settled
Topic: Ethical AI | Intellectual Property | Systems Design

Quick Insight
Anthropic has achieved a significant victory in a lawsuit brought against it by a group of authors. A federal judge has ruled that training its Claude AI model on legally purchased books falls under the fair use doctrine. However, the same ruling rejected Anthropic’s defence of its use of millions of pirated books, setting the stage for a crucial trial scheduled for December.
This is more than just a copyright case; it signals a need to consider where training data comes from and the values it embodies.
Details
In a landmark ruling by Judge William Alsup of the U.S. District Court for the Northern District of California, the court determined that Anthropic’s use of legally acquired print books—ones that were purchased and scanned for training—was “spectacularly transformative” and aligned with existing fair use precedents.
According to The Washington Post, the court compared Claude's learning process to that of a human writer who studies style and tone, rather than simply replicating specific content. The authors involved in the case did not successfully demonstrate that Claude was capable of generating outputs similar to their original works, which weakened their claim of competitive harm.
However, the ruling also revealed that Anthropic ingested over 7 million pirated books, downloaded from illegal archives and retained permanently. As reported by Reuters, this aspect of the case will go to trial in December and could result in statutory damages of up to $150,000 per book. Across 7 million works, that implies a theoretical maximum, however unlikely to be awarded in full, of more than US$1 trillion.
AP News highlighted that this lawsuit represents one of the first attempts by the judiciary to establish legal boundaries regarding AI’s use of copyrighted content, potentially setting a precedent for similar cases against companies like OpenAI, Meta, and Google.
Why It Matters
This is where copyright meets system logic—and why your data sourcing strategy might be tomorrow’s reputational risk. This case underscores the delicate balance between innovation and ethical responsibility in AI development:
Designing for Integrity: AI developers must consider not just the legality but the ethical implications of their data sources.
Systemic Implications: The decision highlights the need for systemic approaches to data acquisition, emphasising transparency and respect for intellectual property.
Future Frameworks: As AI continues to evolve, establishing clear guidelines and ethical frameworks will be crucial to navigate the complexities of data usage and copyright law.
Reflective Questions:
How can AI companies ensure their training data respects both legal and ethical standards?
What systems can be implemented to prevent the use of unauthorised content in AI development?
In what ways can the industry collaborate to establish best practices for data sourcing and usage?
Amazon CEO: AI Will Reduce Corporate Workforce
Topic: Organisational Design | Workforce Transformation

Quick Insight
Amazon CEO Andy Jassy has announced that the integration of generative AI will lead to a reduction in the company's corporate workforce over the next few years. This development highlights the broader trend of AI reshaping white-collar employment, prompting a reevaluation of workforce strategies in the tech industry.
Details
This isn’t just a skills gap—it’s a culture shift. How companies guide emotional resilience in the face of automation may define employee trust more than any model. In a recent memo to employees, Jassy stated that the company’s increasing reliance on generative AI and automation technologies will reduce the need for certain corporate roles. He emphasised that while some positions will become obsolete, new opportunities will arise that require different skill sets.
Amazon's investment in AI is substantial, with over 1,000 generative AI applications currently in development or deployment across various departments, including inventory management, customer service, and product listings.
Jassy encouraged employees to adapt by engaging in AI-related training and innovation efforts, highlighting that those who embrace AI will be well-positioned to contribute significantly to the company's evolution.
This announcement aligns with broader industry sentiments, as other tech leaders have similarly warned about AI-driven disruptions to white-collar jobs.
Why It Matters
The integration of AI into corporate operations presents both challenges and opportunities. For design and innovation leaders, this shift necessitates a focus on:
Reskilling and Upskilling: Developing programs to equip employees with the skills needed to work alongside AI technologies.
Organisational Redesign: Reevaluating job roles and workflows to align with AI capabilities and enhance efficiency.
Ethical Considerations: Ensuring that AI implementation supports employee well-being and maintains organisational values.
👉 Consider:
How can your organisation proactively prepare for AI-induced changes in the workforce?
What strategies can be employed to support employees through this transition?
In what ways can AI be integrated to complement human work rather than replace it?
Goldman Sachs Launches AI Assistant Firmwide: Augmentation or Automation?
Topic: AI Integration | Workplace Systems

Quick Insight
Goldman Sachs has rolled out its internal AI assistant across the firm—marking a bold move to integrate generative AI into knowledge work at scale. But behind the productivity promise lies a deeper question: is this AI designed to augment expertise, or automate it?
Explore how different sources are interpreting this rollout—from strategic promise to systemic risk.
Details
After piloting the GS AI Assistant with 10,000 employees, Goldman Sachs announced a full internal rollout this week. The tool helps staff summarise documents, draft emails, and analyse data—offering productivity boosts in roles ranging from Investment Banking to Wealth Management.
📰 Goldman Sachs launches AI Assistant firmwide – Reuters (June 23, 2025)
Chief Information Officer Marco Argenti says the goal is to create AI that “acts like a seasoned Goldman banker”—not just a tool, but an intelligent internal partner.
But not everyone is convinced. While Goldman talks augmentation, others see signs of quiet automation. As coverage from Futurism and Finextra shows, questions of job displacement, power dynamics, and systems ethics remain wide open.
🔗 Goldman wants AI to act like a seasoned banker – Finextra
🔗 Goldman Sachs quietly replacing roles with AI? – Futurism
Why It Matters
This story signals more than internal transformation—it marks a shift in how AI is entering professional culture.
For human-centred design and strategy leaders, this moment asks:
How do we design tools that enhance confidence, clarity, and capability—not erode them?
Are we building systems that value human judgment—or quietly replacing it?
What kind of design feedback loops should we build into AI deployments like this?
👉 Explore the coverage above and reflect:
Where are we using AI to support people—and where are we simply scaling the system faster?
What systems of work are you reinforcing through the tools you choose?
This is one of those signals worth pausing on. The future of AI at work won’t be decided by code—it will be shaped by the conversations we’re willing to have before the rollout.
🧭 Thanks for Starting FY26 with Me
If this issue got you rethinking the systems you’ve inherited—or the ones you’re quietly building—that’s the point.
Because this year won’t just be shaped by the tools we adopt.
It’ll be shaped by the trust we cultivate.
The stories we choose to protect.
And the human boundaries we decide to uphold, even when AI could blur them.
AI isn’t just learning from our data.
It’s learning from our decisions.
💬 I’d love to hear where this lands for you. What are you seeing shift—on your team, in your practice, or inside your product?
📥 Share this issue with someone who’s designing for the long game.
🧠 Or take one of these questions into your next retro, roadmapping session, or leadership offsite:
Where are we using AI to support courage, and where are we using it to avoid discomfort?
What assumptions about skill, culture, or creativity are we reinforcing through automation?
What signals are we ignoring—because they’re too slow to measure?
Let’s lead this year with clarity.
Not just faster systems.
But braver ones.
See you in the next issue,
Bron