Agent-Oriented Programming: A Brief Introduction


Agent-oriented programming (AOP, not to be confused with aspect-oriented programming) is a programming paradigm in the same way that object-oriented programming (OOP) is; it provides a set of concepts, and a way to think about the world in terms of those concepts. AOP is a more recent development, and still an area of considerable research and standardisation. Wikipedia traces OOP back to the 1960s, while AOP came about from research into artificial intelligence by one Yoav Shoham in the 1990s. As someone who is fascinated by new ways of thinking about software and its development, AOP is of great interest to me, and I’ve been quite fortunate to fall into a year-long university project centred around it (on the Android platform, appropriately enough).

So what is AOP? What is an agent (more precisely, a “software agent”), and how does one orient one’s programming around them? AOP sits one level of abstraction above OOP, such that agents are effectively abstractions of objects. Intuitively, agents can be thought of as software entities with intelligence, much like the AI robots of science fiction. They are machines, or in this case computer constructs, but at runtime we give them instructions as if they were people, like everyday colleagues. In particular, we try to avoid micromanaging the communication between agents; ideally, we would like to give them a task to perform and let them talk with each other to work out how best to accomplish it.
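To make that contrast concrete, here is a minimal sketch (the names and interfaces are entirely hypothetical, not taken from any real agent framework): with an object we invoke each step ourselves, whereas with an agent we state a goal and leave the steps to it.

```java
// OOP: the caller decides and invokes every step directly.
class VacuumRobot {
    void moveTo(int x, int y) { /* drive to (x, y) */ }
    void pickUpRubbish()      { /* grab whatever is underneath */ }
    void emptyIntoBin()       { /* tip the load into the bin */ }
}

// AOP: the caller states *what* it wants; the agent works out *how*,
// typically by exchanging messages with other agents.
interface Agent {
    void tell(String goal);   // e.g. "keep the floor free of rubbish"
}
```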

As if computers weren’t already making humans redundant in enough ways, taken to its logical extreme you might think of AOP as a blueprint for the obsolescence of human labour. Agent-oriented software engineering (AOSE) goes through the typical software engineering phases of requirements and design, but it does so in, again, a more human fashion. It’s more like designing a business process workflow than a set of modules: the focus is on the dynamic communications, the flow of information through the system, rather than on static structure, and communication takes the form of messages passed between ‘roles’. Roles are more or less what you would expect: responsibility for an area of functionality, with the separation-of-concerns principle applied to keep individual roles coherent and independent. These roles are eventually adopted by agents. Each agent then has its capabilities specified hierarchically (multiple levels of sub-capabilities to achieve each capability), and the abstractions are peeled away layer by layer until each agent is composed of a set of concise, almost atomic, functions. It’s ultimately quite similar to OOP, but we’re more concerned with what an agent can do, and with which other agents might be interested in that service.
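As a rough sketch of how that decomposition might look in code (every name here is hypothetical; real AOSE methodologies and agent platforms define their own abstractions), roles group responsibilities and capabilities, capabilities break down into sub-capabilities, and an agent is little more than a container that adopts roles and routes messages to them:

```java
import java.util.List;

// A capability is refined into sub-capabilities until the leaves are
// small, almost atomic actions.
interface Capability {
    String name();
    List<Capability> subCapabilities();  // empty for an atomic action
}

// A role owns an area of functionality and communicates by message passing.
interface Role {
    String responsibility();             // e.g. "waste collection"
    List<Capability> capabilities();
    void handle(Message message);
}

record Message(String performative, String content, String sender) { }

// An agent simply adopts one or more roles and routes incoming messages to them.
class SimpleAgent {
    private final List<Role> roles;
    SimpleAgent(List<Role> roles) { this.roles = roles; }

    void receive(Message m) {
        roles.forEach(r -> r.handle(m));
    }
}
```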

The main application of AOP that comes to mind is artificial intelligence for autonomous robots, and the canonical example is a robot that inhabits some arbitrary environment, scans it for rubbish, and upon detecting some, picks it up and drops it off in a rubbish bin. Even in such a simple case (well, it’s no driverless car), it’s interesting to consider what kinds of components are required. One common class of agents has beliefs, desires and intentions (hence BDI agents), just like humans, and can operate autonomously with very few ‘moving parts’ (a toy sketch follows the list below):

  • Beliefs represent what the agent knows to be true or false about its environment (e.g. there is some rubbish at location (X, Y)). Beliefs are updated when the agent perceives some unexpected change in the environment (much like an event-driven programming model);
  • Desires describe the agent’s goals, or the state it would like the environment to be in (e.g. no rubbish at any location (X, Y));
  • Intentions are the actions which the agent can take to affect its environment, based on its beliefs, in order to change the state of the environment to achieve its desires (e.g. if some rubbish exists, the agent will intend to pick it up; having picked it up the agent will intend to move to the rubbish bin, and then drop it off, and so on).
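Here is a toy, single-agent rendering of that sense–deliberate–act cycle. Everything in it (the class, the grid coordinates, the fixed bin location) is invented for illustration; real BDI platforms such as Jason or JACK provide proper plan languages and event handling, but the skeleton is the same: perceptions update beliefs, deliberation turns beliefs and the desire into intentions, and intentions are executed one action at a time.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class RubbishAgent {

    record Location(int x, int y) { }

    // Beliefs: what the agent currently takes to be true about the world.
    private final Set<Location> rubbishBelieved = new HashSet<>();
    private Location position = new Location(0, 0);
    private final Location bin = new Location(5, 5);

    // Intentions: the concrete actions the agent has committed to.
    private final Deque<Runnable> intentions = new ArrayDeque<>();

    // Perception updates beliefs, event-style, when the environment changes.
    void perceiveRubbish(Location where) {
        rubbishBelieved.add(where);
    }

    // The desire is implicit and fixed: no rubbish at any location.
    private boolean desireSatisfied() {
        return rubbishBelieved.isEmpty();
    }

    // Deliberation: turn beliefs + desire into a queue of intended actions.
    private void deliberate() {
        if (desireSatisfied() || !intentions.isEmpty()) return;
        Location target = rubbishBelieved.iterator().next();
        intentions.add(() -> moveTo(target));
        intentions.add(() -> pickUp(target));
        intentions.add(() -> moveTo(bin));
        intentions.add(this::dropInBin);
    }

    // One turn of the sense–deliberate–act loop.
    void step() {
        deliberate();
        Runnable next = intentions.poll();
        if (next != null) next.run();
    }

    private void moveTo(Location l) { position = l; System.out.println("moved to " + l); }
    private void pickUp(Location l) { rubbishBelieved.remove(l); System.out.println("picked up rubbish at " + l); }
    private void dropInBin()        { System.out.println("dropped rubbish in the bin at " + bin); }

    public static void main(String[] args) {
        RubbishAgent agent = new RubbishAgent();
        agent.perceiveRubbish(new Location(2, 3));   // an environment event
        for (int i = 0; i < 4; i++) agent.step();    // the agent works towards its desire
    }
}
```

Running it, the single perceiveRubbish event is enough for the agent to move to the rubbish, pick it up, carry it to the bin and drop it off, with no further direction from the caller.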

First, it’s interesting to consider that these basic constructs alone can, in theory, describe much of the field of physical human endeavour (where the environment is the physical space-time continuum we inhabit). However, it’s also interesting to consider which human activities cannot be modelled in this way. For example, I can’t quite picture how machines will emulate the mental creativity of humans (though I’m sure they eventually will); it is difficult to model because there is no tangible environment and often no precise desires. Empathy, the natural understanding and sharing of emotion, would also be difficult to emulate for similar reasons, but also because an emotionless machine wouldn’t have that individual experience to fall back on. It comes down to the idea of understanding, which raises the question of what we really mean by the phrase “machine learning”. I consider it roughly equivalent to becoming statistically more accurate at modelling some logical process, but whether that amounts to any kind of understanding on the machine’s part really depends on your definition.

Of course, there are many less fluffy applications of agent-oriented programming, and I look forward to exploring some of them in future posts. It’s a genuinely thought-provoking topic, and I hope I’ve got your mind ticking over too. It’s a field that appears easy to understand (they’re just like people, right?) yet is very difficult to really get your head around, but one that I think might just be worth the time.