Lifewide Education
'Learn to live well with AI — Don't drift with it'  Living & Learning with AI Inquiry
Noticing is an important first step in any inquiry. I wrote a short account of the apps I looked at when I first woke this morning, then asked Claude: 'What does this say about me - the way I am using AI and the way AI is influencing me?' As the conversation unfolded, what emerged was revealing.

Five Apps, Seven Minutes, One Human Morning
Living and Learning with AI — A Reflective Inquiry

What an early morning routine of AI interactions reveals about our relationship with artificial intelligence — and ourselves.

Norman Jackson  ·  Living and Learning with AI Inquiry  ·  Week 1

The post that started this reflection
LinkedIn  ·  Living and Learning with AI  ·  Week 1

What does your early morning app routine tell you about yourself — and about AI?

This morning I paid close attention to the first thing I did when I woke up at 7.10am.

Before I’d got out of bed I had opened five apps. WhatsApp first — checking for messages from my daughter in Dubai, with her family’s safety in the Middle East on my mind. Then BBC News — the same difficult story, still unfolding. Then BBC Sport — Manchester United away at Newcastle tonight. Then Google feed — geology, psychology, the cosmos, things that ground me and lift me when the news is heavy. Then LinkedIn — checking on an inquiry I’m running.

A simple morning routine. But look at what was actually happening.

Every app I opened was AI-powered. Every feed was curated — shaped by algorithms that have been learning my behaviour, my interests, and my anxieties for years. And yet the meaning of each interaction was entirely mine, rooted in contexts the algorithm knows nothing about: a war, a family, a football team, a lifelong curiosity about how the world works.

The BBC Sport notification told me the fixture. But it was me who felt the simultaneous flicker of apprehension, excitement, and possibility — because I know this team, this manager, what away games at Newcastle usually mean, and why this season feels different. One data point. A whole human interior.

The Google feed gave me geology and the cosmos. Partly entertainment, partly genuine learning, partly — I’ll be honest — an antidote to the miserable news. I scroll it knowing I have to be discerning. I stop when it starts to pull. That discipline is mine, not the algorithm’s.

And that’s what this small experiment revealed. AI doesn’t deliver emotions or meaning. It delivers inputs — and we bring everything else. Our histories, our attachments, our hopes, our capacity for discernment. The person is always where the meaning lives.

But here’s the question worth sitting with: how much of your early morning emotional landscape is yours — and how much has been quietly assembled for you, before you’ve even had a cup of tea?

Try noticing tomorrow. Just for five minutes. What you reach for first, and why. What it does to you. Whether that’s the relationship with AI you’d consciously choose.

That kind of noticing is where this inquiry begins.

What this moment reveals: a closer reading

The post above describes seven minutes of one person’s morning. But those seven minutes, examined carefully, open into something much larger: a set of questions about how AI is woven into our most ordinary routines, what it actually does there, and what our responses to it reveal about us.

The reflective conversation that followed the post produced three significant corrections and refinements — each of which points toward something important for anyone trying to understand their own relationship with AI.

1.  Context is everything — and AI knows almost none of it

An initial analysis of this morning routine identified an ‘emotional arc’ — a sequence moving from anxiety to grief to dread to curiosity to self-consciousness, with AI-driven curation shaping each step. It was a plausible-sounding reading. It was also substantially wrong.

The WhatsApp check was not ‘AI shaping anxiety.’ It was a person with family in a war zone doing what any caring human being would do: checking that the people they love are safe. The author’s wife’s parents and sisters live in Iran. His daughter and her family are in Dubai. These are not abstract concerns. The BBC News check was not passive consumption of a curated feed. It was someone with direct personal stakes in a conflict trying to understand what had happened overnight. The LinkedIn check was not social feedback-seeking. It was a researcher and facilitator monitoring the infrastructure of an inquiry he is actively leading.

Strip out that context and you get a technically accurate but humanly hollow analysis. The first and most important principle for understanding AI in everyday life is this:

“AI interacts with the whole person — their history, their relationships, their responsibilities, their fears. The algorithm knows your click patterns. It knows almost nothing else.”

This has a direct implication for how we talk about AI influence. The apps were not assembling an emotional landscape from nothing. They were delivering information into a life already richly structured by love, worry, commitment, and curiosity. The person came first. The AI served — and was interpreted through — that prior human reality.

2.  One data point, a whole human interior

The football fixture is the sharpest illustration in the morning’s account. A single piece of information — Manchester United away at Newcastle tonight — triggered what the author described as simultaneous apprehension, excitement, and genuine hope grounded in real evidence: the new manager, the team’s recent form, the particular character of this season. Not a sequence of feelings arriving one after another. A complex, concurrent emotional state, held together and resolved in the space of a few seconds.

The app delivered a fact. Everything else was already in the person reading it: the history of supporting this club, the memories of previous Newcastle away days, the specific weight of cautious optimism. Two people reading the same notification would have entirely different interior experiences depending on their attachment to the teams, their knowledge of the game, their investment in the outcome.

This matters for how we think about AI’s emotional influence

It is tempting to say that news feeds and recommendation engines ‘make us feel’ things. And in a limited sense they do — they select the inputs. But the emotional response is not produced by the algorithm. It is produced by the encounter between an input and a whole human life. The richer and more self-aware that life, the less the algorithm’s selection determines the outcome.

Human emotional landscapes are more nuanced than any external analysis can capture. It is the person who weighs the competing thoughts and feelings that a single stimulus generates, and allows what they feel to emerge. That weighing is not algorithmic. It is irreducibly human.

3.  The Google feed: curation, curiosity, and conscious resistance

The Google feed presented a more complex picture than it first appeared. Described initially as a potential ‘curation trap,’ it was more accurately characterised as something closer to an intellectual refuge — a deliberately sought counterweight to the weight of the news. Geology. Psychology. The cosmos. These are not trivial distractions. They are enduring domains of knowledge that provide perspective precisely because they operate on timescales that dwarf the urgent and the distressing. Turning to these things on a difficult morning is not drift. It is, in its way, a considered act of self-care.

Understanding how the Google feed is populated adds useful self-knowledge. It draws on search history, YouTube watch patterns, and content engagement across Google’s ecosystem, constructing over time a portrait of a person’s intellectual interests from their digital behaviour. The interests it surfaces are genuinely the person’s own. The mechanism that serves them is commercial, designed to maximise engagement rather than wellbeing.

“The interests are authentically yours. The selection and sequencing of content designed to hold your attention is the algorithm’s work. Both things are true simultaneously.”

The author’s response — taking content with a pinch of salt depending on the source, stopping when distraction starts to pull — is exactly the kind of active discernment that distinguishes deliberate engagement from passive drift. It does not eliminate the tension between personal benefit and platform design. But it is the right orientation toward it.
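How a feed comes to mirror its reader can be sketched in miniature. The toy ranker below is purely illustrative: the scoring rule, weights, and data are invented for this example, and the real systems behind a Google feed are proprietary and vastly more sophisticated. It shows only the core idea the text describes, that past engagement is turned into a profile which then decides what is surfaced next.

```python
# Toy illustration only: an engagement-weighted feed ranker.
# All names and data here are invented for the example; real feed
# algorithms are proprietary and far more complex than this.
from collections import Counter

def rank_feed(candidates, engagement_history):
    """Order candidate items by how often their topic appears in the
    user's past engagement -- 'more of what you clicked'."""
    topic_counts = Counter(engagement_history)
    return sorted(candidates,
                  key=lambda item: topic_counts[item["topic"]],
                  reverse=True)

# A reader whose history leans toward geology sees geology first.
history = ["geology", "cosmos", "geology", "psychology", "football"]
candidates = [
    {"title": "New exoplanet found", "topic": "cosmos"},
    {"title": "Plate tectonics explained", "topic": "geology"},
    {"title": "Match preview", "topic": "football"},
]

for item in rank_feed(candidates, history):
    print(item["title"])
```

Even this crude sketch makes the essay's point concrete: the topics in `history` are authentically the reader's own, but the ordering, what gets seen first and most often, is the algorithm's work.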

4.  Deliberate, drifting, or something more interesting?

Reading across the whole morning, what emerges is not a picture of someone being passively shaped by algorithms. It is the picture of a person using AI-powered tools with a fair degree of intentionality — for human connection, for information relevant to real concerns, for intellectual pleasure, for the maintenance of meaningful work — while remaining alert to the ways those tools could, if unexamined, begin to shape rather than serve.

That alertness is itself significant. The noticing exercise — paying deliberate attention to a routine that usually runs on autopilot — is exactly the kind of reflective practice this inquiry is designed to cultivate. And what it produced was not alarm but insight: a clearer picture of a relationship that is, on balance, purposeful and intentional, with specific areas worth continued vigilance.

Three principles emerging from this morning’s reflection
  • The person always comes before the device. Context is not background — it is the thing that makes sense of everything else. Any account of AI influence that strips out the human life surrounding the interaction is at best partial, at worst misleading.
  • AI delivers inputs. We bring the meaning. The richness of a human response to any AI-delivered information is not the algorithm’s achievement. It belongs to the person, their history, and everything they carry with them when they pick up their phone.
  • Discernment is a practice, not a setting. Knowing when to stop scrolling, which sources to trust, what to hold lightly — these are cultivated habits. They require attention and they require practice. They do not come for free.
An invitation to your own noticing

The question the post ends with is worth sitting with a little longer: how much of your early morning emotional landscape is yours — and how much has been quietly assembled for you, before you’ve even had a cup of tea?

The answer, for most of us, is: both. We bring our lives, our loves, our preoccupations to every interaction. The algorithms bring their models of our behaviour, their commercial incentives, their extraordinary capacity to surface things we are genuinely interested in. The relationship between those two forces is what this inquiry is trying to map.

The starting point is simple: notice. Tomorrow morning, pay attention to the first five minutes. What do you reach for, and why? What does it do to you? Is the sequence you follow one you would consciously choose — or one that has simply accumulated, app by app, habit by habit, without you ever quite deciding?

That kind of noticing is not a critique of technology. It is an act of self-knowledge. And self-knowledge, in the age of AI, is increasingly something we have to work at deliberately — because the systems around us are very good at making the unreflective feel entirely natural.

Living and Learning with AI Inquiry  ·  Norman Jackson
Full paper available on request
We advocate, encourage and support lifelong, lifewide and ecological approaches to learning, development, creativity and education for a sustainable, regenerative future.