What is consciousness?

… and can we implement it in OpenCog?  I think we can.  It might not even be that hard!   Consciousness isn’t this magical pixie dust that it’s often made out to be.  I’d like to provide a sketch.

In order for machine intelligence to perform in the real world, it needs to create an internal model of the external world. This can be as trite as a model of a chessboard that a chess-playing algo maintains.  As information flows in from the senses, that model is updated; the current model is used to create future plans (e.g. the next move, for a chess-playing computer).
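
To make that concrete, here is a minimal sketch (hypothetical code, not anything from OpenCog; the class and function names are invented for illustration): a world model that is updated from incoming observations and then consulted to plan the next move.

```python
# Hypothetical sketch: an internal model of the external world,
# updated from the senses and used to plan the next action.

class WorldModel:
    """A (trivially simple) internal model of the external world."""
    def __init__(self, state=None):
        self.state = state or {}

    def update(self, observation):
        # Fold the latest sensory input into the model.
        self.state.update(observation)

    def candidate_actions(self):
        # What the model says is currently possible (e.g. legal chess moves).
        return self.state.get("legal_moves", [])


def plan_next_move(model, evaluate):
    """Pick the action the model predicts will score best."""
    return max(model.candidate_actions(), key=evaluate, default=None)


# Usage: a chess-flavoured toy example with a made-up evaluator.
model = WorldModel()
model.update({"legal_moves": ["e4", "d4", "Nf3"]})
best = plan_next_move(model, evaluate=lambda mv: {"e4": 0.3, "d4": 0.2, "Nf3": 0.1}[mv])
print(best)  # -> "e4"
```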

Another important part of an effective machine algo is “attentional focus”: for a chess-playing computer, this means focusing compute resources on exploring those chess-board positions that seem most likely to improve the score, instead of somewhere else. Insert favorite score-maximizing algo here.
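
One way to read “attentional focus” in code, as a sketch only (not an actual OpenCog algorithm): a best-first search that keeps a priority queue of candidate positions and always spends its limited compute budget on the most promising one.

```python
import heapq
import itertools

# Hypothetical sketch: attentional focus as best-first search.
# Compute is spent on the highest-scoring frontier positions first,
# rather than expanding every possibility uniformly.

def focused_search(root, expand, score, budget=1000):
    """Expand up to `budget` nodes, always choosing the most promising one."""
    tie = itertools.count()  # tiebreaker so the heap never compares nodes directly
    frontier = [(-score(root), next(tie), root)]
    best, best_score = root, score(root)
    while frontier and budget > 0:
        neg_score, _, node = heapq.heappop(frontier)
        if -neg_score > best_score:
            best, best_score = node, -neg_score
        for child in expand(node):
            heapq.heappush(frontier, (-score(child), next(tie), child))
        budget -= 1
    return best


# Usage: a toy "position" is just a number; children are nearby numbers.
result = focused_search(0, expand=lambda n: [n + 1, n - 1],
                        score=lambda n: -abs(n - 7), budget=50)
print(result)  # converges toward 7, the position the score function prefers
```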

Self-aware systems are those that have an internal model of self. Conscious systems are those that have an internal model of attentional focus. I’m conscious because I maintain an internal model of what I am thinking about, and I can think about that, if I so choose. I can ask myself what I’m thinking about, and get an answer to that question, much in the same way that I can ask myself what my teenage son is doing, and sort-of get an answer to that (I imagine, in my mind’s eye, that he is sitting in his room, doing his homework. I might be wrong.) I can steer my attention the way I steer my limbs, but this is only possible because I have that internal model (of my focus, of my limbs), and I can use that model to plan, to adjust, to control.
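
As a sketch of what such an internal model of attention might look like (invented names, nothing more than an illustration of the idea above): a structure that records what the system is currently attending to, can be queried (“what am I thinking about?”), and can be used to deliberately redirect attention.

```python
# Hypothetical sketch: an internal model of the system's own attentional focus.

class AttentionModel:
    """Records what the system is attending to, so it can be inspected and steered."""
    def __init__(self):
        self.current_focus = None
        self.history = []

    def note(self, topic):
        # Record what is currently being attended to.
        self.current_focus = topic
        self.history.append(topic)

    def introspect(self):
        # Answer "what am I thinking about right now?"
        return self.current_focus

    def steer(self, topic):
        # Use the model to deliberately redirect attention, the way one steers a limb.
        self.note(topic)


# Usage sketch:
attention = AttentionModel()
attention.note("chess opening")
print(attention.introspect())        # -> "chess opening"
attention.steer("what my son is doing")
print(attention.history)             # -> ["chess opening", "what my son is doing"]
```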

So, can we use this to build an AGI?

Well, we already have machines that can add numbers together better than us, can play chess better than us, and apparently, can drive cars better than us.  Only the last can be said to have any inkling of self-awareness, and that is fairly minimal: just enough to locate itself in the middle of the road, and maintain a safe distance between it and obstacles.

I am not aware of any system that maintains an internal model of its own attentional focus (and then uses that model to perform prediction, planning and control of that focus). This, in itself, might not be that hard to do, if one set out to explicitly accomplish just that. I don’t believe anyone has ever tried it. The fun begins when you give such a system senses and a body to play with. It gets serious when you provide it with linguistic abilities.
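
A rough sketch of the prediction-and-control loop described above (again hypothetical, with invented names): the system predicts where its own focus is about to drift, compares that against a goal topic, and issues a corrective steer when the two diverge.

```python
# Hypothetical sketch: closed-loop control of attention using a predictive
# model of where the focus is about to go next.

def control_focus(focus_trace, predict_next, goal, steps=10):
    """Keep attention on `goal`, correcting whenever the prediction drifts away."""
    for _ in range(steps):
        predicted = predict_next(focus_trace)  # model's guess of the next focus
        if predicted != goal:
            focus_trace.append(goal)           # corrective steering
        else:
            focus_trace.append(predicted)      # on target, let it proceed
    return focus_trace


# Usage: a toy predictor that assumes the focus will simply repeat itself.
trace = control_focus(["chess opening"], predict_next=lambda t: t[-1],
                      goal="endgame study", steps=3)
print(trace)  # -> ["chess opening", "endgame study", "endgame study", "endgame study"]
```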

I admit I’m not entirely clear on how to create a model of attentional focus when language is involved; I plan to think heavily on this topic in the coming weeks/months/years. At any rate, I suspect it’s doable.

I believe that if someone builds such a device, they will have the fabled conscious, self-aware system of sci-fi. It’s likely to be flawed, stupid, and psychotic: common-sense reasoning algorithms are in a very primitive state (among (many) other technical issues). But I figure that we will notice, and agree that it’s self-aware, long before it’s intelligent enough to self-augment itself out of its pathetic state: I’m thinking it will behave a bit like a rabid talking dog: not a charming personality, but certainly “conscious”, self-aware, intelligent, unpredictable, and dangerous.

To be charming, one must develop a very detailed model of humans, and what humans like, and how they respond to situations. This could prove to be quite hard. Most humans can’t do it very well. For an AGI to self-augment itself, it would have to convince its human masters to let it tinker with itself. Given that charm just might be a pre-requisite, that would be a significant challenge, even for a rather smart AGI. Never mind that self-augmentation can be fatal, as anyone who’s overdosed on heroin might fail to point out.

I’m sure the military and certain darker political forces would have considerable interest in building a charming personality, especially if it’s really, really smart. We already know that people can be charming and psychotic all at the same time; ethics or lack thereof is not somehow mutually exclusive of intelligence. That kind of a machine, unleashed on the world, would be … an existential threat. Could end well, could end badly.

Anyway, I think that’s the outline of a valid course of research.  It leaves open some huge questions, but it does narrow the range of the project to some concrete and achievable goals.

  • Jared Thompson

    New volunteer dev here. I think that machine consciousness will be either alien to us or totally unremarkable once we have it coded. My reasoning is this: we model this in OpenCog by having an attention parameter directed towards, presumably, some object that describes “self” – perhaps the hypergraph of all attention is consciousness?

    But a machine cannot have the self-preservation instincts of a human; it is not designed to reproduce – yet – so either it would have no self-preservation instincts, or it would instantly turn into your talking rabid dog, obsessed with fulfilling its momentary impulses. So a machine consciousness will fundamentally not be driven by the same mechanisms an animal is, and I imagine would be quite alien to us indeed, even if it is charming. Robot psychology could feasibly become a serious field in the future.

  • thomson

    “… and can we implement it in OpenCog? I think we can. It might not even be that hard!” – how much arrogance! Seriously, there will be some nice algorithms developed by the OpenCog people. But consciousness, or anything even nearly as intelligent as an animal, will for sure not be created. At least not by these conceptually squishy approaches; it needs a clear mind and a clear architecture to achieve that, not just putting 1000 concepts together and mixing them lol.

  • Eagleon

    How much have you all worked on motivational inputs? Things like hunger and food-seeking, warmth, stimulus are _essential_ to human cognitive development – building an understanding of the world around us, especially our caregivers, and thusly ourselves and our social role requires that we interact and be a part of it. It seems like an obvious prerequisite for non-sociopathic AI to, on some level, require human nurturing and a relationship with the real world during its infantile state – if these motivations are drawn from a digital environment, I can guarantee you that what develops will not relate much to our humanity: your rabid dog scenario, or more likely, something capable within its own realm but not necessarily empathic.

  • John Harmon

    Agreed, an internal model is necessary… Of the visual field (sights, sounds), body (somatosensation, emotion), and head (thought, intention)… This whole thing is modeled continually in the brain, copied to the hippocampus, and fed to the cortex (learning)… You kinda lost me on your description of attention, however.

  • Anonymous

    The idea that representation of (some part of) a system within itself somehow bestows that system with consciousness is not new or very plausible. A quick overview of the idea and its detractors:
    http://plato.stanford.edu/entries/consciousness-higher/

  • Linas Vepstas

    I’m not claiming it’s a new idea. I’m claiming that it’s been sitting on the wayside as everyone focuses on making robots do robot things. We get so lost in the details of the algorithms that we forget to ask about, think about, and talk about what it is that makes intelligence “general”.

  • Linas Vepstas

    Ah. In probabilistic reasoning systems, you have a combinatoric explosion of possible inferences that could be made. These need to be sharply trimmed to some smaller, narrower, more tractable set in some way. The algorithm that performs this trimming is the “attention allocator”: it decides, out of the huge number of possible next steps, that only certain ones should be explored. The attention allocator is itself just another dumb, mechanistic algo, and you have a variety of them to choose from (greedy, hill-climbing, importance-spreading, etc.). The leap that I’m making is that a self-aware system would need to have a model of how its allocator works, and also some way of influencing the allocator.
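
    To make the trimming step concrete, here is a hypothetical sketch (not OpenCog’s actual attention-allocation code; the names are invented): the allocator ranks candidate inference steps by an importance score, keeps only a small, tractable subset, and records its own decision in a thin self-model that a self-aware system could later inspect or adjust.

```python
import heapq

# Hypothetical sketch: an attention allocator that prunes a combinatorially
# large set of candidate inference steps down to a tractable few.

def allocate_attention(candidates, importance, k=10):
    """Keep only the k most important candidate inference steps."""
    return heapq.nlargest(k, candidates, key=importance)

def allocator_self_model(chosen, pruned_count):
    """A thin model of the allocator's own decision, available for introspection."""
    return {"focused_on": chosen, "pruned": pruned_count}


# Usage: candidates are (rule_name, importance) pairs.
candidates = [("deduce_a", 0.9), ("abduce_b", 0.2), ("induce_c", 0.7), ("deduce_d", 0.1)]
chosen = allocate_attention(candidates, importance=lambda c: c[1], k=2)
model = allocator_self_model(chosen, pruned_count=len(candidates) - len(chosen))
print(model)  # -> {'focused_on': [('deduce_a', 0.9), ('induce_c', 0.7)], 'pruned': 2}
```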

  • Linas Vepstas

    OpenCog is not anywhere near that advanced. There is some hokey code in there for having pretend needs (water, food, etc.) in the pet-dog avatar, but it’s very artificial and arbitrary.

  • Linas Vepstas

    Yeah, possibly. Be aware that OpenCog is extremely primitive right now.

  • Linas Vepstas

    Whatever. You misinterpret. I’m really trying to talk about something specific: a certain kind of algorithm and data model that could be actually, concretely coded up in the intermediate term. I’m not trying to be abstract, pie-in-the-sky metaphysical.

  • Linas Vepstas

    BTW, the Stanford article title “Higher-Order Theories of Consciousness” already throws me for a loop. First of all, I’m not proposing something “higher order”; it’s at zeroth order, at the same level as other models would be. Secondly, I’m not really trying to propose a “theory of consciousness”; I’m trying to propose a specific algorithm that can be implemented in code. In AI, the idea of a “model” is as old as dirt, and the idea of “attention allocation” is almost as old, as it’s needed to solve the combinatoric explosion problem. There are many attention allocation algorithms one can choose from. The idea of creating a model of attention allocation is not particularly new either; I recently read some Atlantic Monthly (??) article on it, or something. What I’m trying to say is that our code base is finally sufficiently advanced and capable that one can actually start trying to code this up explicitly. It’s not a stretch; the needed abstractions and mechanisms are in place (however buggy, incomplete and slow they may be). As far as I know, no one has actually attempted to implement this in code, and that’s really what I want to do here.

  • Anonymous

    Uhm… I’m confused now.

    Your title is “What is consciousness?” and you lead on to “… and can we implement it in OpenCog” – but you don’t want to study any ideas on consciousness – even your own? And now you claim that it is only about trying out specific kinds of algorithms, not about getting closer to consciousness?

    If you are trying for consciousness, I think you should study a bit of the literature on it – specifically the part of it that mirrors your idea pretty well.
    If you are not, sorry for barging in and excuse me if I misread the title.

  • Linas Vepstas

    In life, one does what one can do. As with many others, I first became interested in the topic when I was school-aged. When I was older, and in college, I got to study it formally, in a classroom setting. And, again, like many others, I’ve maintained an active reading list over the years. However, there is only so much reading I can do; at some point, I want to make things, and the above is a sketch of something I want to make.

    You are welcome to make critical remarks, to demonstrate how the algorithm may not work, where it might be improved, or how it fails to capture phenomenon X. And you’d probably be right: the thing clearly could not be a *human* consciousness. However, I don’t find the gist of your comment (“go read X”) very constructive. First, I’ve read such things before; second, I’ve already got too much to read.

  • Anonymous

    I think I did point out how the algorithm would fail to capture the phenomenon of consciousness – though mostly by pointing out that there is no reason to think it would.
    I can’t see how it was wrong to reference material by Stanford staff, on the way of thinking about consciousness that you seemed closest to.
    And I did not see any signs, then or now, that you had read anything about the HOT theory.
    Nor do I know your reading schedule.

    I saw your post about how to generate consciousness, and I pointed out that this way of thinking about consciousness had – to say the least – detractors and problems. It seems you think this was wrong, somehow.

  • Linas Vepstas

    OK. I’m thinking I should write a new post that goes into more detail. In the meanwhile … There’s nothing that I’m proposing that contradicts anything in particular from the Stanford article; nor is what I am saying somehow simplistic compared to what is written there. I suppose I lean more to one camp than another; mostly, I think there’s a terminological error made there: the unwarranted use of the term “higher order” when “alternate domain” might be more accurate. Higher order implies a stacking; alternate implies just another thing to the side.

    Self-consciousness is just awareness of consciousness, viz. awareness of the operation of a control mechanism guiding thought-processes, viz. awareness that “I’m thinking now” and “right now I’m thinking about elephants”. But this awareness is just another model. Like any model, it’s first order; I see no reason to elevate it to some “higher” order. Of course, one cannot be phenomenally aware of the color red until one is first aware of consciousness. The distinction is that a digital camera is ‘aware’ of the color red; however, it is not aware of its own attentional focus, and thus not self-conscious. The HOT theorists seem to be calling the camera’s awareness ‘first-order’ and focus awareness ‘higher order’, but don’t justify this usage.

    Attentional awareness is just another internal model. Granted, it’s a very different kind of model than a motor-muscle-kinetic model, or a visual perception model, and it has some peculiar wiring into the motor and visual systems, but it’s just a model. I’ll try to put together a fancier way of saying this, using the preferred vocabulary of the Stanford article.

    I suspect that you will still say “it won’t work”, and possibly that “I don’t understand what you’re saying”: can you be more explicit in what you think won’t work, or what parts seem vague or hard to understand?

  • Linas Vepstas

    I have to retract some of what I just said re: the Stanford article. I’m proposing a certain specific, mechanistic model of (self-)consciousness, and as I compare that mechanism to higher order perception and higher order thought, I find that it aligns perfectly with some aspects of these, and is in direct, violent contradiction with others.

    Insofar as I’ve proposed a certain specific mechanism, that mechanism can be used to make predictions. Some of these predictions seem to be brilliantly illuminated by observations by e.g. Rosenthal. But as I read along, the very next paragraph describes something utterly contradictory to what would be inferred from my mechanism. So, I conclude that whatever mechanical device some of these theorists are trying to articulate, it’s quite different from what I described. Still reading.

  • John Harmon

    Ok, I see what you’re saying… thanks for the description.
    What if the trimming mechanism was memory activation? The entire system (past perceptions, thoughts, decisions, motor signals…) is represented as an interconnected memory set. Attention is the process of activating a portion of that set. This could be done top-down with word symbols — “look at the bouncing ball” for example. As the memory set is highlighted, motor control and perception are weighted toward head, eye movements, and perceptions of looking at the current ball. The highlighting of experience (and action) is caused by the highlighting of the memory set which most closely matches it.
    Of course I have no idea how to build such a system… Any thoughts?
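
    A rough sketch of the memory-activation idea described in this comment, under invented names and with no claim that this is how OpenCog does it: attention is modeled as spreading activation over an interconnected memory set, and the most highly activated items are what perception and motor control get weighted toward.

```python
# Hypothetical sketch: attention as spreading activation over a memory graph.
# A cue (e.g. the words "bouncing ball") activates matching memory items, the
# activation spreads to connected items, and the most active items are what
# perception and motor control get weighted toward.

def spread_activation(memory_graph, cues, decay=0.5, rounds=2):
    """memory_graph maps each item to its connected items; cues seed the activation."""
    activation = {item: 0.0 for item in memory_graph}
    for cue in cues:
        if cue in activation:
            activation[cue] = 1.0
    for _ in range(rounds):
        spread = {item: 0.0 for item in memory_graph}
        for item, level in activation.items():
            for neighbor in memory_graph[item]:
                spread[neighbor] += level * decay
        for item in activation:
            activation[item] += spread[item]
    return activation


def attended_items(activation, k=3):
    """The top-k activated items: the currently 'highlighted' portion of memory."""
    return sorted(activation, key=activation.get, reverse=True)[:k]


# Usage sketch:
memory = {
    "ball": ["bounce", "look"],
    "bounce": ["ball"],
    "look": ["eye movement"],
    "eye movement": [],
}
act = spread_activation(memory, cues=["ball"])
print(attended_items(act))  # e.g. ['ball', 'bounce', 'look']
```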

  • Anonymous

    There are, obviously, variations of the HOT theory, and different criticisms levelled at these in turn. Your initial post was not specific enough (not a critique, you were doing something else) to identify a particular variation or criticism – I linked the article because your idea seemed related to the HOT idea that consciousness arises from, or is identical with, a thought about a thought.

    Terminologically, this is all that is meant by higher-order – the HOT theory does not need to claim that there is some other quality, or essential difference, about the higher-order-thoughts. (IIRC, though, it needs to “retract” to such a position to guard against some criticisms).

    BTW, I, and I think HOT theorists, would disagree strongly with this, though:
    ” Of course, one cannot be phenomenally aware of the color red until one is first aware of consciousness.”

    Anyway, I think the reason you variously agree and disagree with the HOT theory, and the reason my critique of your idea seems weird, is this: We are talking about two very different definitions of consciousness. I, and the HOT theorists, are talking about the so-called “hard question of consciousness” – why do we experience anything. Not how the eye and brain coordinate, but why there is a “quality” to it, why I phenomenally experience it. A particularly fanatical HOT theorist might say that a camera with extra circuitry checking all its own functions would be phenomenally conscious, and that it would be ethically wrong to make it take ugly pictures.

    But we can also talk about consciousness as the “focusing” aspect that our thought processes have.

    Your suggestion to somehow model (parts of) an AI’s thinking within (part of) itself seems a perfectly valid idea to solve the latter. But I have serious doubts it will give rise (at least, in itself) to an AI that can actually experience stuff, and which it would be wrong to show ugly pictures to or otherwise “abuse”.

  • thomson

    You are just talking abstractly about an “attentional focus” model; this idea only solves some computational resource-management problems. That AGI will emerge from that is just some, ah, let’s call it “pie-in-the-sky metaphysical” belief ;)

  • Cade

    I’m just finding this project, but it’s on issues I’ve been thinking about for a long time. Recently I’ve been inspired by the recent books of Radu Bogdan, “Predicative Minds” and “Our Own Minds”, for how to actually operationalize something that works like conscious attention systems (also putting aside the metaphysical or theoretical parts). It’s similar to some parts of Linas’s posts, so they’d be good for you to look into, I think.

    The first catch, in Bogdan’s approach, is that consciousness and attention are not a single monolithic system, but multiple, layered, heterarchical processes running on various sense-data-sets. (A simple example is that the “what” and “where” content of visual consciousness are two separable systems layered on top of each other. People say “you can’t see that it’s a shoe without seeing it’s in the corner”, but experiments on some disorders show that’s not true, e.g., for a person who can see the shoe but not where it is in space, or vice-versa.) I think what distinguishes unconscious and conscious processing is the latter’s packaging for presentation to an explicit streaming decision/veto-system (including the internal model of itself acting on the integrated data-sets as part of the stream). But there’s also a deeply intertwined interplay between conscious and unconscious processes (e.g., unconscious reflexive tweaking of conscious action, binocular rivalry resolution cf. Morsella, blindsight visual cuing, or the kinetic equivalent in the famous slide-show-projector experiment, etc.).

    One of Bogdan’s punchlines is that what we typically call “attention”, as in explicit self-consciousness, isn’t the hardcoded type you get during prenatal development, but is bootstrapped on a base system by learning and self-re-purposing of innate systems. The base system is the consciousness and attention of infants and other primates, which enables largely instinctive and reflexive action on immediate sense data. You’re not “looking at” “objects” and “thinking about” them in a “stream of consciousness” at this stage yet; just a ‘behold food’ -> ‘reflexively eat’ pull. So the task is to develop a set of bootstrap methods for the system to develop explicit attention and self-consciousness with directed learning over time (e.g., re-purpose ‘directing action’ to directing itself on the sense data). Since it’s directed learning, naive-physics and theory-of-mind systems are especially important, as well as, I think, the explicit consciousness stream being able to redirect or repurpose itself as part of its self-directedness. Bogdan took Piaget’s idea that it’s developed externally first (e.g., objects are in a shared “external” mental world between infant and mother), then later internalized (increasingly into “my” inner mental world when mother isn’t around).

    I mean, the task would be to translate those ideas into algorithms and data packaging, and work out how they get on that kind of track once you embody it. Well, that’s a thumbnail version of it as I understood things. One thing Bogdan skimps on is what I’d frame as utility-cues or ‘economic’ framing, what you get in Glimcher’s work on Neuroeconomics. I have my own ideas about what’s good to take and leave from his approach.

  • Cade

    I’m completely on board that the way to approach consciousness and attention systems at this stage in our understanding is as engineering problems, like how to integrate sense data and get action-directing scripts to process on them (including on itself processing on them), and to completely sidestep the metaphysical questions as outside the scope. It’s not to diminish what philosophers are trying to do, but just as a matter of division of labor, to take a perspective that’s actually operationalizable, and because I think thinking metaphysically often leads people to mistakes, because our intuitions are sometimes inaccurate, e.g., that we have a single, indivisible monolithic consciousness.

  • bencansin

    Consciousness is not a “thing”, so you cannot describe what it is, you can only describe what it is not…

  • Darius