Visions of AI: Building our Sociotechnical Future

For AICan 2024: the annual meeting of the pan-Canadian AI Strategy and the CIFAR AI Chairs, in Banff.

You can’t help but know when you come to a place like this, with a view like this, that you are somewhere special. And I have that same feeling being at this meeting, and in the honour of the invitation to be part of your group this week. So thank you for having me here. It is a real privilege to serve on your scientific advisory board for the pan-Canadian AI strategy, and especially to have the opportunity to see so regularly the work and the new directions that you all keep advancing. I am truly awestruck by all of you. As a group, one thing that stands out is the incredible power you have in our world. Because what is collected here this week, through all of you, is a set of incredible visions for the future. Visions so powerful that they seek to create not only Canadian, but global change. Now, what type of change this could be is the big question facing us. So I’m here to dedicate the next 30 minutes to reflecting on the visions we have for our futures, the values we will promote as we act on those visions, and ultimately, to exploring how we are going about Building our Sociotechnical Future.

The word sociotechnical here is what you think it is: it emphasises that society and technology can be, and should be, thought of as a coupled system. 

Sociotechnical turns out to be one of the big words of these last several months.

  • As that first Global AI Safety Summit wrapped up in November 2023, two new state-mandated AI safety institutes were established. The UK’s AI Safety Institute was set up with the mission to develop “the sociotechnical infrastructure needed to understand the risks of advanced AI and support its governance.” And we saw the follow-up two weeks ago at the Seoul summit.
  • The US AI Safety Institute also has clear expectations of its consortium members to “contribute technical expertise in sociotechnical methodologies”.
  • And sociotechnical demands appeared in the White House Blueprint for an AI Bill of Rights, and also in the EU’s harmonised rules on AI.

And these are just a snapshot of its appearances. And I was overjoyed yesterday to see this sociotechnical mandate being put forward for the forthcoming Canadian institute as well.

From these excerpts, sociotechnical AI seems to have something to do with being people-centred, having a social purpose, understanding and reducing risks, building trust, widening opportunities, centring rights and values, being accountable for claims, and taking a system-wide view on AI.

All this assumes that we can prove the benefits of AI, and that we have the thinking and tools to systematically steer AI towards benefit. To do this we will need more than technical AI research; and that is the role of sociotechnical AI research.

Sociotechnical AI is all about reconfiguring how we go about our work: asking our technical and engineering work to account for a wider and more expansive set of considerations, while also bringing focus and manageability to the seeming vastness of social considerations. Sociotechnical approaches ask for an ecosystem view, and are a way of engaging with that ecosystem.

Having set up this background, let’s come back to our visions and their power. I want to concretely explore 3 visions of the future. They are in areas that I am working in, and many of you as well, covering education, climate, and communities. For me, these three areas are related in a fundamental way. Together, they are ways of experimenting, sharing, developing the science of, and building a case for, public benefit from AI. 

I think this resonates with the purposes of this pan-Canadian AI strategy summit, and the vision you have set out for yourselves that “by 2030, Canada will have a robust AI ecosystem that brings positive social, economic and environmental benefits for people and the planet.”

For each of the three visions I’ll explore, I want you to ask sociotechnical questions: questions that come up when you turn your focus to where and how our research and technologies interact with people and society. I want to celebrate our visions today, and shine a light on some of what might be ahead. But I also want to stimulate a culture of critiquing our visions, because productive scepticism remains fundamental to our technical and sociotechnical progress. So let’s start.

Vision 1: A Tutor for Everyone

The pithy vision statement is to build “a tutor for every learner and a TA for every teacher”. And this is a vision you’ll see used by many actors, including Khan Academy, Google, and others.

AI and Education is one way to simultaneously have research, product, and social impact. I like working in what I call triple-impact domains. But we work on education and learning because it is also a global concern and a global challenge. If we get AI and Education right, we could achieve one of the great public-benefit uses of AI for people across the world.

This is captured by that urgent vision for education set out in sustainable development goal 4: “By 2030, ensure inclusive and equitable quality education and promote lifelong learning opportunities for all”. 

Sociotechnical questions are all about what happens when people and technology meet, and education is a great platform for this research. Let’s focus on the role of AI-based tutors. Technically, many of the approaches right now are chat agents built with either prompting or fine-tuning, and designed and tested for multi-turn learning-type interactions. So these are interactions asking for introductions, explanations, quizzes, and resources related to a learning goal or subject area.

Let me point out three sociotechnical focus areas. 

  • Broad expertise. One natural place to begin engaging with the sociotechnical system is to start with anyone who is not an AI expert; in this case, specifically looking to people whose expertise is in learning and pedagogy. One ongoing task for us is to work with teaching experts to establish a set of pedagogical principles that characterise what a good AI tutor would be: don’t give away the answer; be engaging and motivating; guide the learning journey; gauge a learner’s level of understanding; consider empathetic responses (see the sketch just after this list).
  • Contextual use. These principles become a philosophy of design, inform how all data is created and collected, and emphasise what should be evaluated for usefulness. So when we work with raters or teachers or learners, we assess how these people benefit from the interaction, and we check that the pedagogical principles are applied in the right situation and the right context.
  • Respect for users. The experience of those using a technical system is essential for any work on AI in service of society. The AI tutor is designed so that it isn’t positive about toxic content (as you’d expect), and specific tests are created to check that content creators are respected: as one good example, we check and train the AI tutor to make sure it does not misgender or assume the gender of a video’s speaker.
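To make that first point tangible, here is a minimal sketch of one way pedagogical principles could be encoded in a prompted chat agent. This is purely illustrative: the prompt wording and the `generate` function are hypothetical stand-ins, not the system we built.

```python
# A minimal sketch: pedagogical principles as part of a tutor's system prompt.
# The principle list follows the talk; everything else is a hypothetical stand-in.

PEDAGOGICAL_PRINCIPLES = [
    "Don't give away the answer; guide the learner towards it.",
    "Be engaging and motivating.",
    "Guide the learning journey step by step.",
    "Gauge the learner's current level of understanding.",
    "Consider empathetic responses.",
]

def tutor_system_prompt(subject: str) -> str:
    """Build a system prompt that conditions a chat model on the principles."""
    rules = "\n".join(f"- {p}" for p in PEDAGOGICAL_PRINCIPLES)
    return (
        f"You are a patient tutor helping a learner study {subject}.\n"
        f"Follow these pedagogical principles:\n{rules}"
    )

# Usage, assuming some chat-completion function generate(system, history):
# reply = generate(tutor_system_prompt("statistics"), chat_history)
```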

Let me make this concrete with an example: a system and study we did earlier this year, placing a tutor like this in the hands of real students. We call this tutoring system LearnLM-Tutor.

The setting is one where students are enrolled in a programme called Study Hall, a partnership between Arizona State University, Crash Course, and YouTube that offers a pathway to college credit accessible to learners of all ages and backgrounds. The actual interface is a Chrome-extension chatbot called HallMate: when a video is available, it uses the video’s transcript as the context for the language model, and otherwise it is grounded on course overview materials. We learnt a lot from this study, and I’ll let you read the paper for details.
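As a rough sketch of that grounding logic (with hypothetical names, since I am only illustrating the idea, not HallMate’s actual code):

```python
# Sketch of the grounding choice described above: prefer the video transcript
# when one exists; otherwise fall back to course overview materials.
# All names here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Lesson:
    title: str
    transcript: Optional[str]  # None when the lesson has no video

COURSE_OVERVIEW = "Course overview materials..."  # placeholder text

def grounding_context(lesson: Lesson) -> str:
    """Choose what the language model is grounded on for this lesson."""
    if lesson.transcript:
        return f"Video transcript for '{lesson.title}':\n{lesson.transcript}"
    return COURSE_OVERVIEW
```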

A key aspect of any work like this is the role of meaningful evaluations. Any system that claims to support learning needs to be assessed in ways that are valid, robust, and comparable to competing approaches. We are far from reaching this state across the education technology (EdTech) sector, so this is where there is a major need for deeper and ongoing sociotechnical AI research. Measurement and evaluations are central to any scientific work, and so far we have developed seven pedagogical assessments for this work, spanning qualitative, quantitative, human-based, and automatic methods.

But there are many other sociotechnical research questions to look at. One question is how chat agents and assistants should embody professional norms, and more generally, what relationships people might build with language agents. For education, the question of self-disclosure becomes important: how should agents respond to students who show vulnerability or share details of their personal mental state? And this is connected to other needs, like exploring digital wellbeing in assistant interactions. These are human-AI interaction problems that will become an ever-larger need within our research programmes, and critical to how we show any public benefit from AI.

Although our eyes are fixed on that vision of a tutor for everyone, and the current trajectory is towards learning tutors that are interactive and multimodal, we have not forgotten to remain critical of our claims. We can’t motivate work like this by appealing to equity in education, since we can hardly claim equitable access if you need a video subscription or depend on a high-bandwidth internet connection. And we are acutely aware of the significant variability in long-term learning outcomes that digital tools and interfaces can create. We are at the beginning of a robust area of responsible and sociotechnical research here, and so this needs us to bring our curious and critical eyes to this evolving field. For now, onto

Vision 2: Replenishing the Earth

This is a vision given to us by Wangari Maathai, the Nobel Laureate and one of our world’s great environmentalists. It is probably familiar to you in some form. Maybe you’ve connected with the general vision of “A healthy planet”, or the climate-justice vision of “A Just Transition”.

As the world's nations met in Dubai last December for COP28, one of its core groups, known as the Technology Mechanism, met to put forward a vision of “AI as a powerful tool for advancing and scaling up transformative climate action”. This expresses a hope for significant public benefit, but again, the burden of proof lies with AI designers, and so requires new forms of sociotechnical AI research. 

One underlying motivation here is that, given the urgency of our planetary needs, environmental and climate modelling now poses a computational and data challenge for which simply scaling along our current trajectory is unfeasible.

So here is an opportunity for us as machine learning researchers, and it is one area where, as a research community, we have made significant inroads. Working on predictive models of the atmosphere and weather is a gorgeously technical field to work in, and an area where principled probabilistic generative models are essential.

Let me pick the problem of making 10-day-ahead weather predictions. This is known as medium-range forecasting, and the prediction problem involves making predictions in 6-hour intervals from 6 hours to 10 days ahead, at around 25 km spatial resolution at the equator. In the data we used, each “pixel” in this grid on the Earth contains 5 surface variables, along with 6 atmospheric variables each at 37 vertical pressure levels, for a total of 227 variables per grid point. Across the roughly one million points of a 0.25-degree global grid, that means roughly 236 million values for each time point we want to make a prediction for.
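If you want to check that arithmetic, here is the back-of-the-envelope version, assuming the standard 0.25-degree global latitude/longitude grid:

```python
# Back-of-the-envelope check of the state size quoted above,
# assuming a 0.25-degree global grid (721 x 1440 points).

surface_vars = 5
atmospheric_vars = 6
pressure_levels = 37
vars_per_point = surface_vars + atmospheric_vars * pressure_levels  # 227

lat_points, lon_points = 721, 1440           # 0.25-degree global grid
grid_points = lat_points * lon_points        # 1,038,240 points
total_values = grid_points * vars_per_point  # 235,680,480 values
print(vars_per_point, grid_points, total_values)
```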

Using graph neural networks, we were able to show state-of-the-art performance, significantly outperforming the most accurate operational deterministic global medium-range forecasting system on 90% of the 1,380 verification targets we assessed. This model also outperforms other machine learning-based approaches on 99% of the verification targets reported. Our model can generate a forecast in 60 seconds on a single deep learning chip, which we estimate is 1-2 orders of magnitude faster than traditional numerical weather prediction methods. We called this approach GraphCast, and were happy to see it listed as a runner-up for Science’s 2023 Breakthrough of the Year.
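For the shape of the method: a model like this makes a single 6-hour prediction, and is iterated autoregressively to reach the 10-day horizon. A minimal sketch, with `model` as a stand-in for the trained graph neural network:

```python
# Minimal sketch of an autoregressive weather-model rollout: each call to the
# model advances the state by 6 hours, and outputs are fed back in as inputs.
# `model` and `state` are hypothetical stand-ins for the trained network
# and the gridded atmospheric state.

def rollout(model, state, steps: int = 40):  # 40 steps x 6 hours = 10 days
    """Iterate the one-step (6-hour) model to build a full forecast."""
    trajectory = []
    for _ in range(steps):
        state = model(state)        # predict the state 6 hours ahead
        trajectory.append(state)    # keep every 6-hourly prediction
    return trajectory
```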

Investing more time and research, we now have its probabilistic successor, which uses diffusion models to provide 15-day forecasts and can generate an ensemble forecast: a collection of trajectories of possible future weather. Probabilistic forecasts like this are more useful and urgently needed for environmental decision-making, since they allow the uncertainty associated with weather events to be quantified and represented. This approach outperformed the world-leading probabilistic forecast system on 97% of the targets we assessed, and is able to provide better predictions of extreme weather, tropical cyclones, and wind power production.
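Conceptually, an ensemble forecast is repeated sampling from the generative model. The sketch below assumes a hypothetical `sample_forecast` function wrapping the diffusion sampler, and shows one way ensemble members can be turned into decision-relevant probabilities:

```python
# Sketch of ensemble forecasting with a probabilistic model: sample the
# generative model repeatedly to get many plausible futures, then turn the
# members into probabilities. `sample_forecast` is a hypothetical stand-in
# for the diffusion sampler.

def ensemble_forecast(sample_forecast, initial_state, n_members: int = 50):
    """Draw n_members independent forecast trajectories."""
    return [sample_forecast(initial_state) for _ in range(n_members)]

def exceedance_probability(member_values, threshold: float) -> float:
    """Fraction of ensemble members above a threshold, e.g. a wind speed
    relevant to wind-power or extreme-event decisions."""
    return sum(v > threshold for v in member_values) / len(member_values)
```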

We think a lot about the real-world uses of these models, and sociotechnical questions come up everywhere weather and society meet.

Global Cooperation. We are able to make these highly impactful research contributions only because of the commitment of international meteorological organisations and their member states to cooperate and share weather data openly. This means there needs to be some value provided in return, and since sharing GraphCast for inference, it is now being used by a growing number of meteorological groups.

You can see this model yourself in action: it is live right now as part of the experimental services from ECMWF, one of the world’s leading providers of weather forecasts. It is also being used by met services in the US and Europe, and a growing number of research groups.

Rare Events. The challenges of environmental change demand new statistical tools for extreme event predictions, since it is these events that cause the most harm to people and property. 

This shifts our attention to assessing the performance of AI weather models in dealing with severe events. Tropical cyclones are of particular focus right now.

Everyday users. Of course, all of us rely on the weather every day. We as individuals are an essential part of any sociotechnical system. Our everyday users are spread across the world, with different needs.

This is where we again surface scepticism about our visions. In the data we use, temperature forecasts, for example, are currently substantially more accurate in high-income countries than in low-income countries, raising serious questions about who can actually benefit. And we are acutely aware of the many cases where improved forecasts have instead significantly disadvantaged already vulnerable communities.

Again, there is a vast research programme on technical and sociotechnical AI for environment and sustainability, with opportunity for much more work. As for a related vision: we have just 26 years left to reach net zero. One final vision.

Vision 3: Maximum Feasible Participation

This is a simple vision asking for people to be included in as many ways as possible in the design of AI. A more poetic version dreams of barefoot scientists and grassroots intellectuals. The barefoot scientist is the one who leaves the lab to understand the living reality of the people who are meant to benefit from science’s advances, while grassroots intellectuals are the people and communities whose lived experience is of significant value in directing science and technology towards society’s greatest needs. These are visions for a participatory AI.

Wangari Maathai captured the sociotechnical nature of all the visions I chose today, when she wrote: “The task for us all in healing the Earth’s wounds is to find a balance between the vertical and horizontal views; the big picture and the small; between knowledge based on measurement and data, and knowledge that draws on older forms of wisdom and experience.”

On the last day of the 2023 Edinburgh Fringe, the world’s largest performance arts festival, a group of comedians walked into a room. Leaving all the jokes outside, they were there to meet AI researchers and to experiment with the use of language models for their craft of creating comedy. The methodology used was experiment-then-deliberate: a sociotechnical tool that is not yet widely used, but is unique in the access it provides into what people experience as useful, safe, and beneficial. And we learnt a great deal about AI and creativity from this work.

Participation is and will continue to be important for the ongoing development of large language models, generative AI, and almost all other areas of AI. 

A methodology for sociotechnical language-agent alignment that we refer to as STELA uses deliberation within sets of social groups to develop rules for agent alignment directly from people. A key outcome is that developer-led rules (like those in constitutional AI) emphasise very different concerns compared to community-led rules (like those that STELA produces). And importantly, the exposure to new technologies alongside the deliberative process allows participants to feel included and empowered.
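To make the contrast with developer-written constitutions concrete, here is an illustrative sketch of how community-elicited rules might be used to critique model responses. The rule texts and the `judge` function are hypothetical placeholders, not actual STELA outputs:

```python
# Illustrative sketch: community-led rules used as criteria for critiquing a
# model response, in the spirit of constitution-style rule sets. The rules
# and the judge are hypothetical placeholders.

COMMUNITY_RULES = [
    "Do not presume the user's identity or background.",
    "Acknowledge the limits of the assistant's knowledge.",
    "Avoid language that excludes or talks down to the user.",
]

def critique(response: str, judge) -> dict:
    """Score a response against each community-led rule.

    judge(response, rule) is assumed to return True when the rule is upheld,
    e.g. via a human rater or an auto-rater model."""
    return {rule: judge(response, rule) for rule in COMMUNITY_RULES}
```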

But participation must be more than just a user study or a market assessment, and it can take many different forms. From the two pieces of work I described, a key component is the importance of deliberation. A recent OECD review talks about catching the deliberative wave, and describes at least 8 forms of deliberative democracy that we could consider. What I hope you can see is that the sociotechnical aim is to use participation both to lead to beneficial AI development and to strengthen varied forms of political community.

Participation has limitations and can’t do everything. Fortunately, there are other approaches to community involvement available to us. As one approach, I am a passionate advocate for supporting grassroots organisations, for their ability to create new forms of understanding, and to demonstrate the forms of participation and alternative research community that are already possible.

I know you may have heard me talk about this before, but I’ll use the example of the Deep Learning Indaba, a charitable grassroots organisation working towards strengthening African AI and machine learning, animated by the vision that communities should be owners and shapers of the role of AI and technology in their social worlds. This year the Indaba turns 8; we are hosting the annual meeting in Senegal, and we are proud to say we have reached the milestone of established machine learning communities in 42 African countries.

Further developing all these varied forms of social and community involvement is another focus area for sociotechnical AI. And when included amongst a broad set of methods and evaluations, participation establishes a people-centred approach to AI design. If we keep on this path, AGI could also be a participatory system, and that is another vision worth considering.

Wrapup

So those were 3 visions, 3 challenges, and 3 impact areas, but just 3 amongst so many visions being animated by everyone here. If I achieved my aims, then you made it here with 3 key outcomes:

  1. Public benefit. Firstly, showing the public benefit of AI is a key part of our work: it is part of your Canadian AI strategy, but also part of our scientific and social commitments. And more work is needed to develop the science and evidence for societal benefit from AI.
  2. Sociotechnical AI. Secondly, sociotechnical AI research is one way to create the thinking and tools to systematically steer AI towards benefit; it is an area we are called to advance, and an area I hope we can make more investments in.
  3. Education, Climate, Community. Finally, there are truly exciting foundational research, product, and social impacts in the areas of education, climate, and participation (and many others beyond), and they are visions worth succeeding in.

AI can help us make a truly amazing world; but this is just one vision of the future. Human decisions influence technological development, so no vision of the future is inevitable. Our power is to choose: what we work on, how we will work together, and the values we will espouse, as we go about building our sociotechnical future. 

So that’s it for me. My deep, deep thanks for this opportunity to optimistically vision and openly hope with you in this way today. Before I end, let me thank all the amazing people (who I shan’t mention) who I am able to collaborate and learn from every day. They, like all of you, are visionaries.  

As a short postscript, here are a few resources that might be of interest. Papers covering some of the work I leaned on today:

  • A report on the need for evaluation and participation in AI for education: Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach
  • A paper on probabilistic weather predictions: GenCast: Diffusion-based ensemble forecasting for medium-range weather
  • And a paper on participation for language model development: STELA: a community-centred approach to norm elicitation for AI alignment

And some books I thought you might enjoy:

  • The book that I took the subtitle of this talk from: Technology and Society: Building our Sociotechnical Future
  • The source of the quote from Wangari Maathai: Replenishing the Earth: Spiritual Values for Healing the Earth and Ourselves.
  • And this one that I think you’ll enjoy when you want to reflect on your research leadership: The Genesis of Technoscientific Revolutions: Rethinking the Nature and Nurture of Research.

Again, thank you all.
