What is consciousness?

… and can we implement it in OpenCog?  I think we can.  It might not even be that hard!   Consciousness isn’t this magical pixie dust that it’s often made out to be.  I’d like to provide a sketch.

In order for machine intelligence to perform in the real world, it needs to create an internal model of the external world. This can be as simple as the model of the chessboard that a chess-playing algo maintains. As information flows in from the senses, that model is updated; the current model is used to create future plans (e.g. the next move, for a chess-playing computer).

Another important part of an effective machine algo is “attentional focus”: for a chess-playing computer, this means focusing compute resources on exploring those chess-board positions that seem most likely to improve the score, rather than elsewhere. Insert your favorite score-maximizing algo here.

Self-aware systems are those that have an internal model of self. Conscious systems are those that have an internal model of attentional focus. I’m conscious because I maintain an internal model of what I am thinking about, and I can think about that, if I so choose. I can ask myself what I’m thinking about, and get an answer to that question, much in the same way that I can ask myself what my teenage son is doing, and sort-of get an answer to that (I imagine, in my mind’s eye, that he is sitting in his room, doing his homework. I might be wrong.) I can steer my attention the way I steer my limbs, but this is only possible because I have that internal model (of my focus, of my limbs), and I can use that model to plan, to adjust, to control.
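
To make that concrete, here is a toy sketch, in plain Python, of the kind of loop I have in mind. This is not OpenCog code; every name is invented for illustration. The point is only that the model of attention is just another data structure, queried and updated the same way the world model is:

    import random

    class Agent:
        def __init__(self):
            self.world_model = {}   # beliefs about the external world
            self.focus_model = []   # a record of what attention was spent on
            self.focus = None       # what is currently being attended to

        def sense(self, observations):
            # Update the world model from incoming sense data.
            self.world_model.update(observations)

        def attend(self, topic):
            # Shift attentional focus, *and* record that shift in the
            # self-model; the record is what makes the focus introspectable.
            self.focus = topic
            self.focus_model.append(topic)

        def what_am_i_thinking_about(self):
            # Introspection: query the model of attention, just as one
            # would query the model of the external world.
            return self.focus

        def plan_next_focus(self):
            # Use the attention history to steer future attention, e.g.
            # avoid re-attending to something examined very recently.
            recent = set(self.focus_model[-3:])
            candidates = [k for k in self.world_model if k not in recent]
            return random.choice(candidates) if candidates else self.focus

    agent = Agent()
    agent.sense({"board": "initial position", "clock": "5:00"})
    agent.attend("board")
    print(agent.what_am_i_thinking_about())   # -> "board"
    agent.attend(agent.plan_next_focus())     # steer attention via the self-model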

So, can we use this to build an AGI?

Well, we already have machines that can add numbers together better than us, can play chess better than us, and apparently, can drive cars better than us. Only the last can be said to have any inkling of self-awareness, and that is fairly minimal: just enough to locate itself in the middle of the road, and maintain a safe distance between itself and obstacles.

I am not aware of any system that maintains an internal model of its own attentional focus (and then uses that model to perform prediction, planning and control of that focus). This, in itself, might not be that hard to do, if one set out to explicitly accomplish just that. I don’t believe anyone has ever tried it. The fun begins when you give such a system senses and a body to play with. It gets serious when you provide it with linguistic abilities.

I admit I’m not entirely clear on how to create a model of attentional focus when language is involved; I plan to think heavily on this topic in the coming weeks/months/years. At any rate, I suspect it’s doable.

I believe that if someone builds such a device, they will have the fabled conscious, self-aware system of sci-fi. It’s likely to be flawed, stupid, and psychotic: common-sense reasoning algorithms are in a very primitive state (among (many) other technical issues). But I figure that we will notice, and agree that it’s self-aware, long before it’s intelligent enough to self-augment itself out of its pathetic state. I’m thinking it will behave a bit like a rabid talking dog: not a charming personality, but certainly “conscious”, self-aware, intelligent, unpredictable, and dangerous.

To be charming, one must develop a very detailed model of humans, and what humans like, and how they respond to situations. This could prove to be quite hard. Most humans can’t do it very well. For an AGI to self-augment itself, it would have to convince its human masters to let it tinker with itself. Given that charm just might be a prerequisite, that would be a significant challenge, even for a rather smart AGI. Never mind that self-augmentation can be fatal, as anyone who’s overdosed on heroin might fail to point out.

I’m sure the military and certain darker political forces would have considerable interest in building a charming personality, especially if it’s really, really smart. We already know that people can be charming and psychotic all at the same time; ethics, or the lack thereof, is not somehow mutually exclusive of intelligence. That kind of a machine, unleashed on the world, would be … an existential threat. Could end well, could end badly.

Anyway, I think that’s the outline of a valid course of research.  It leaves open some huge questions, but it does narrow the range of the project to some concrete and achievable goals.

About Linas Vepstas

Computer Science Researcher - Hanson Robotics

Responses to What is consciousness?

  1. Jared Thompson says:

    New volunteer dev here. I think that machine consciousness will be either alien to us or totally unremarkable once we have it coded. My reasoning is this: we model this in OpenCog by having an attention parameter directed towards, presumably, some object that describes “self” – perhaps the hypergraph of all attention is consciousness?

    But a machine cannot have the self-preservation instincts of a human, since it is not designed to reproduce – yet. Either it would have no self-preservation instincts, or it would instantly turn into your talking rabid dog, obsessed with fulfilling its momentary impulses. So a machine consciousness will fundamentally not be driven by the same mechanisms an animal is, and I imagine it would be quite alien to us indeed, even if it is charming. Robot psychology could feasibly become a serious field in the future.

  2. thomson says:

    “… and can we implement it in OpenCog? I think we can. It might not even be that hard!” – such arrogance! Seriously, there will be some nice algorithms developed by the OpenCog people. But consciousness, or anything even nearly as intelligent as an animal, will for sure not be created. At least not by these conceptually squishy approaches; it needs a clear mind and a clear architecture to achieve that, not just putting 1000 concepts together and mixing them lol.

    • Linas Vepstas says:

      Whatever. You misinterpret. I’m really trying to talk about something specific: a certain kind of algorithm and data model that could be actually, concretely coded up in the intermediate term. I’m not trying to be abstract, pie-in-the-sky metaphysical.

      • thomson says:

        you are just talking abstractly about an “attentional focus” model; this idea only solves some computational resource-management problems. That AGI will emerge from that is just some, ah, let’s call it “pie-in-the-sky metaphysical” belief 😉

  3. Eagleon says:

    How much have you all worked on motivational inputs? Things like hunger and food-seeking, warmth, and stimulation are _essential_ to human cognitive development – building an understanding of the world around us, especially our caregivers, and thus of ourselves and our social role, requires that we interact and be a part of it. It seems like an obvious prerequisite for non-sociopathic AI to, on some level, require human nurturing and a relationship with the real world during its infantile state – if these motivations are drawn from a digital environment, I can guarantee you that what develops will not relate much to our humanity: your rabid dog scenario, or more likely, something capable within its own realm but not necessarily empathic.

    • Linas Vepstas says:

      OpenCog is not anywhere near that advanced. There is some hokey code in there for having pretend needs for water, food, etc. in the pet-dog avatar, but it’s very artificial and arbitrary.

      • MDude says:

        On the subject of giving your AI an emotional or physiological system, I’m thinking there are a few attempts at biologically inspired artificial life worth looking to for ideas. Steve Grand’s norns have a slightly more involved simulation, with a simplified digestive system, individual organs, brain chemicals, and separate neurons. And of particular relevance to the question of making a non-rabid AI, Lovotics set out specifically to make a robot capable of appreciating and giving affection, and it also uses a system of simulated chemicals to define its emotions.

        I’ve been thinking for a while that it would make sense to use a biological simulation for the high-level decisions of what the AI wants, based on its emotions, then have OpenCog take that and decide how to actually reach the AI’s goals. Then maybe it could all get run on some toy like a Robosapien, with a layer based on lower-level biology like motor control. Among other, often fairly silly-sounding ideas on tossing together autonomous systems.
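
        Purely to illustrate that split (invented names and numbers, not the Lovotics or OpenCog APIs): a toy layer of simulated “chemicals” that decay and get bumped by events, with the strongest resulting drive handed off as the current goal for a separate planner to pursue:

            # Toy "physiology": a few simulated chemical levels that decay over time.
            chemicals = {"hunger": 0.2, "loneliness": 0.5, "fatigue": 0.1}

            # Which goal each drive maps to, for the planner underneath.
            drive_to_goal = {
                "hunger": "find_food",
                "loneliness": "seek_company",
                "fatigue": "rest",
            }

            def step(events, dt=1.0, decay=0.05):
                # Decay all chemicals, then bump them according to events.
                for name in chemicals:
                    chemicals[name] = max(0.0, chemicals[name] - decay * dt)
                for name, amount in events:
                    chemicals[name] = min(1.0, chemicals[name] + amount)

            def current_goal():
                # The "emotional" layer decides what the agent wants right now;
                # a separate planner would work out how to get it.
                strongest = max(chemicals, key=chemicals.get)
                return drive_to_goal[strongest]

            step(events=[("hunger", 0.4)])
            print(current_goal())   # -> "find_food"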

  7. John Harmon says:

    Agreed, an internal model is necessary… of the visual field (sights, sounds), body (somatosensation, emotion), and head (thought, intention)… This whole thing is modeled continually in the brain, copied to the hippocampus, and fed to the cortex (learning)… You kinda lost me on your description of attention, however.

    • Linas Vepstas says:

      Ah. In probabilistic reasoning systems, you have a combinatoric explosion of possible inferences that could be made. These need to be sharply trimmed to some smaller, narrower, more tractable set in some way. The algorithm that performs this trimming is the “attention allocator”: it decides, out of the huge number of possible next steps, that only certain ones should be explored. The attention allocator is itself just another dumb, mechanistic algo, and you have a variety of them to choose from (greedy, hill-climbing, importance-spreading, etc.). The leap that I’m making is that a self-aware system would need to have a model of how its allocator works, and also some way of influencing the allocator.
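
      A stripped-down illustration of that trimming step (a generic score-based allocator in Python, with made-up names and numbers, not OpenCog’s actual attention-allocation code): out of a large pool of candidate inference steps, keep only the handful with the highest importance on each cycle.

          import heapq

          def attention_allocate(candidates, score, budget=3):
              # Keep only the few most promising next steps; 'score' is whatever
              # importance heuristic gets plugged in (greedy value, spread
              # importance, etc.).
              return heapq.nlargest(budget, candidates, key=score)

          # Toy candidate inference steps with invented importance scores.
          candidates = ["deduce A=>C", "induce B~D", "abduce C?A",
                        "deduce D=>E", "induce A~E", "abduce E?B"]
          importance = dict(zip(candidates, [0.9, 0.1, 0.4, 0.7, 0.3, 0.2]))

          chosen = attention_allocate(candidates, score=importance.get)
          print(chosen)   # only these get explored; the rest are pruned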

      • John Harmon says:

        Ok, I see what you’re saying… thanks for the description.
        What if the trimming mechanism was memory activation? The entire system (past perceptions, thoughts, decisions, motor signals…) is represented as an interconnected memory set. Attention is the process of activating a portion of that set. This could be done top-down with word symbols — “look at the bouncing ball” for example. As the memory set is highlighted, motor control and perception are weighted toward head, eye movements, and perceptions of looking at the current ball. The highlighting of experience (and action) is caused by the highlighting of the memory set which most closely matches it.
        Of course I have no idea how to build such a system… Any thoughts?
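
        One classic way to prototype this memory-highlighting idea is plain spreading activation over an associative memory graph: a top-down cue (e.g. the words “bouncing ball”) activates the matching nodes, activation leaks to their neighbors, and the most active portion of the memory set becomes the current attentional highlight. A toy sketch in Python (illustrative only, with an invented miniature memory):

            # A tiny associative memory: node -> list of (neighbor, link_weight).
            memory = {
                "ball":   [("bounce", 0.8), ("red", 0.5), ("play", 0.6)],
                "bounce": [("ball", 0.8), ("floor", 0.4)],
                "red":    [("ball", 0.5), ("apple", 0.3)],
                "play":   [("ball", 0.6), ("child", 0.7)],
                "floor":  [("bounce", 0.4)],
                "apple":  [("red", 0.3)],
                "child":  [("play", 0.7)],
            }

            def spread_activation(cues, steps=2, decay=0.5):
                # Cues get full activation; each step, activation leaks along
                # weighted links, weakening with every hop.
                activation = {node: 0.0 for node in memory}
                for cue in cues:
                    activation[cue] = 1.0
                for _ in range(steps):
                    new = dict(activation)
                    for node, level in activation.items():
                        for neighbor, weight in memory[node]:
                            new[neighbor] += level * weight * decay
                    activation = new
                return activation

            act = spread_activation(["ball", "bounce"])
            highlight = sorted(act, key=act.get, reverse=True)[:3]
            print(highlight)   # the currently "highlighted" portion of memory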

  8. Anonymous says:

    The idea that representation of (some part of) a system within itself somehow bestows that system with consciousness is not new or very plausible. A quick overview of the idea and its detractors:
    http://plato.stanford.edu/entries/consciousness-higher/

    • Linas Vepstas says:

      I’m not claiming it’s a new idea. I’m claiming that it’s been sitting by the wayside as everyone focuses on making robots do robot things. We get so lost in the details of the algorithms that we forget to ask about, think about, talk about what it is that makes intelligence “general”.

    • Linas Vepstas says:

      BTW, the Stanford article title “Higher-Order Theories of Consciousness” already throws me for a loop. First of all, I’m not proposing something “higher order”; it’s at zeroth order, at the same level as other models would be. Secondly, I’m not really trying to propose a “theory of consciousness”; I’m trying to propose a specific algorithm that can be implemented in code. In AI, the idea of a “model” is as old as dirt, and the idea of “attention allocation” is almost as old, as it’s needed to solve the combinatoric explosion problem. There are many attention allocation algorithms one can choose from. The idea of creating a model of attention allocation is not particularly new either; I recently read some Atlantic Monthly (??) article on it, or something. What I’m trying to say is that our code base is finally sufficiently advanced and capable that one can actually start trying to code this up explicitly. It’s not a stretch; the needed abstractions and mechanisms are in place (however buggy, incomplete and slow they may be). As far as I know, no one has actually attempted to implement this in code, and that’s really what I want to do here.

      • Anonymous says:

        Uhm… I’m confused now.

        Your title is “What is consciousness?” and you lead on to “… and can we implement it in OpenCog” – but you don’t want to study any ideas on consciousness – even your own? And now you claim that it is only about trying out specific kinds of algorithms, not about getting closer to consciousness?

        If you are trying for consciousness, I think you should study a bit of the literature on it – specifically the part of it that mirrors your idea pretty well.
        If you are not, sorry for barging in and excuse me if I misread the title.

        • Linas Vepstas says:

          In life, one does what one can do. As with many others, I first became interested in the topic when I was school-aged. When I was older, and in college, I got to study it formally, in a classroom setting. And, again, like many others, I’ve maintained an active reading list over the years. However, there is only so much reading I can do; at some point, I want to make things, and the above is a sketch of something I want to make.

          You are welcome to make critical remarks, to demonstrate how the algorithm may not work, where it might be improved, or how it fails to capture phenomenon X. And you’d probably be right: the thing clearly could not be a *human* consciousness. However, I don’t find the gist of your comment, “go read X”, very constructive. First, I’ve read such things before; second, I’ve already got too much to read.

          • Anonymous says:

            I think I did point out how the algorithm would fail to capture the phenomenon of consciousness – though mostly by pointing out that there is no reason to think it would.
            I can’t see how it was wrong to reference material by Stanford staff, on the way of thinking about consciousness that you seemed closest to.
            And I did not see any signs, then or now, that you had read anything about the HOT theory.
            Nor do I know your reading schedule.

            I saw your post about how to generate consciousness, and I pointed out that this way of thinking about consciousness had – to say the least – detractors and problems. It seems you think this was wrong, somehow.

          • Linas Vepstas says:

            OK. I’m thinking I should write a new post that goes into more detail. In the meanwhile … There’s nothing that I’m proposing that contradicts anything in particular from the Stanford article; nor is what I am saying somehow simplistic compared to what is written there. I suppose I lean more to one camp than another; mostly, I think there’s a terminological error made there: the unwarranted use of the term “higher order” when “alternate domain” might be more accurate. Higher order implies a stacking; alternate implies just another thing to the side. Self-consciousness is just awareness of consciousness, viz. awareness of the operation of a control mechanism guiding thought processes, viz. awareness that “I’m thinking now” and “right now I’m thinking about elephants”. But this awareness is just another model. Like any model, it’s first order; I see no reason to elevate it to some “higher” order. Of course, one cannot be phenomenally aware of the color red until one is first aware of consciousness. The distinction is that a digital camera is ‘aware’ of the color red; however, it is not aware of its own attentional focus, and thus not self-conscious. The HOT theorists seem to be calling the camera’s awareness ‘first-order’ and focus awareness ‘higher order’, but don’t justify this usage. Attentional awareness is just another internal model. Granted, it’s a very different kind of model than a motor-muscle-kinetic model, or a visual perception model, and it has some peculiar wiring into the motor and visual systems, but it’s just a model. I’ll try to put together a fancier way of saying this, using the preferred vocabulary of the Stanford article.

            I suspect that you will still say “it won’t work”, and possibly that “I don’t understand what you’re saying”: can you be more explicit in what you think won’t work, or what parts seem vague or hard to understand?

          • Linas Vepstas says:

            I have to retract some of what I just said re. the Stanford article. I’m proposing a certain specific, mechanistic model of (self-)consciousness, and as I compare that mechanism to higher-order perception and higher-order thought, I find that it aligns perfectly with some aspects of these, and is in direct, violent contradiction with others.

            Insofar as I’ve proposed a certain specific mechanism, that mechanism can be used to make predictions. Some of these predictions seem to be brilliantly illuminated by observations by e.g. Rosenthal. But as I read along, the very next paragraph describes something utterly contradictory to what would be inferred from my mechanism. So I conclude that whatever mechanical device some of these theorists are trying to articulate, it’s quite different from what I described. Still reading.

          • Anonymous says:

            There are, obviously, variations of the HOT theory, and different criticisms levelled at these in turn. Your initial post was not specific enough (not a critique, you were doing something else) to identify a particular variation or criticism – I linked the article because your idea seemed related to the HOT idea that consciousness arises from, or is identical with, a thought about a thought.

            Terminologically, this is all that is meant by higher-order – the HOT theory does not need to claim that there is some other quality, or essential difference, about the higher-order-thoughts. (IIRC, though, it needs to “retract” to such a position to guard against some criticisms).

            BTW, I, and I think HOT theorists, would disagree strongly with this, though:
            ” Of course, one cannot be phenomenally aware of the color red until one is first aware of consciousness.”

            Anyway, I think the reason you variously agree and disagree with the HOT theory, and the reason my critique of your idea seems weird, is this: we are talking about two very different definitions of consciousness. I, and the HOT theorists, are talking about the so-called “hard problem of consciousness” – why do we experience anything? Not how the eye and brain coordinate, but why there is a “quality” to it, why I phenomenally experience it. A particularly fanatical HOT theorist might say that a camera with extra circuitry checking all its own functions would be phenomenally conscious, and that it would be ethically wrong to make it take ugly pictures.

            But we can also talk about consciousness as the “focusing” aspect that our thought processes have.

            Your suggestion to somehow model (parts of) an AI’s thinking within (part of) itself seems a perfectly valid idea for solving the latter. But I have serious doubts it will give rise (at least, in itself) to an AI that can actually experience stuff, and which it would be wrong to show ugly pictures to or otherwise “abuse”.

    • Doug Moen says:

      The Stanford Encyclopedia of Philosophy is a great resource, but this article on the philosophy of consciousness is irrelevant to the OpenCog project. The linked article references people like Nagel and McGinn who are anti-materialists, or who are “new mysterians”, and who categorically deny that any so-called “machine intelligence” can ever be truly conscious. It doesn’t matter how good the final results of OpenCog and other AGI projects are; these philosophers have decided in advance that nothing we can build will ever count as conscious. These arguments are irrelevant to the OpenCog project, which takes materialism as its starting point, and assumes that building a conscious machine is an engineering project. It’s not “arrogance” for Linas to outline an architecture for building a conscious machine; it’s just the accepted way to make progress in such a project: you come up with a theory, write working code that embodies the theory, and test if it works. Building something that works is a better way to attain knowledge than sterile philosophical debates.

  9. Cade says:

    I’m just finding this project, but it’s on issues I’ve been thinking about for a long time. Recently I’ve been inspired by the recent books of Radu Bogdan, "Predicative Minds" and "Our Own Minds", for how to actually operationalize something that works like conscious attention systems (also putting aside the metaphysical or theoretical parts). It’s similar to some parts of Linas’s posts, so they’d be good for you to look into, I think.

    The first catch, in Bogdan’s approach, is that consciousness and attention are not a single monolithic system, but multiple, layered, heterarchical processes operating on various sense-data sets. (A simple example: the "what" and "where" content of visual consciousness are two separable systems layered on top of each other. People say "you can’t see that it’s a shoe without seeing it’s in the corner", but experiments on some disorders show that’s not true, e.g., for a person who can see the shoe but not where it is in space, or vice-versa.) I think what distinguishes unconscious and conscious processing is the latter’s packaging for presentation to an explicit streaming decision/veto system (including the internal model of itself acting on the integrated data-sets as part of the stream). But there’s also a deeply intertwined interplay between conscious and unconscious processes (e.g., unconscious reflexive tweaking of conscious action, binocular rivalry resolution cf. Morsella, blindsight visual cuing, or the kinetic equivalent in the famous slide-show-projector experiment, etc.).

    One of Bogdan’s punchlines is that what we typically call "attention", as in explicit self-consciousness, isn’t the hardcoded type you get during prenatal development, but is bootstrapped on a base system by learning and self-re-purposing of innate systems. The base system is the consciousness and attention of infants and other primates, which enables largely instinctive and reflexive action on immediate sense data. You’re not "looking at" "objects" and "thinking about" them in a "stream of consciousness" at this stage yet; there is just a ‘behold food’ -> ‘reflexively eat’ pull. So the task is to develop a set of bootstrap methods for the system to develop explicit attention and self-consciousness with directed learning over time (e.g., re-purpose ‘directing action’ to directing itself onto the sense data). Since it’s directed learning, naive-physics and theory-of-mind systems are especially important, as is, I think, the explicit consciousness stream being able to redirect or repurpose itself as part of its self-directedness. Bogdan took Piaget’s idea that it’s developed externally first (e.g., objects are in a shared "external" mental world between infant and mother), then later internalized (increasingly into "my" inner mental world when mother isn’t around).

    I mean, the task would be to translate those ideas into algorithms and data packaging, and to figure out how an embodied system gets onto that kind of track. Well, that’s a thumbnail version of it as I understood things. One thing Bogdan skimps on is what I’d frame as utility-cues or ‘economic’ framing, what you get in Glimcher’s work on Neuroeconomics. I have my own ideas about what’s good to take and leave from his approach.

  10. Cade says:

    I’m completely on board that the way to approach consciousness and attention systems at this stage in our understanding is as engineering problems, like how to integrate sense data and get action-directing scripts to process on them (including on itself processing on them), and to completely sidestep the metaphysical questions as outside the scope. It’s not to diminish what philosophers are trying to do, but just as a matter of division of labor, to take a perspective that’s actually operationalizable, and because I think thinking metaphysically often leads people to mistakes, because our intuitions are sometimes inaccurate, e.g., that we have a single, indivisible monolithic consciousness.

  11. bencansin says:

    Consciousness is not a “thing”, so you cannot describe what it is; you can only describe what it is not…

  12. Mahbub Zaman says:

    Hi Linas,
    Thanks for the article. I understood your intent the moment I read it. It was very clear to me. I see some have argued against it perhaps without realizing the crux of it.
    Putting aside all the philosophical arguments and confusion, we can at least attempt to build such a system (the one you outlined here), and it will be rather interesting to see what the result will be, instead of spending time on whether it satisfies the many outstanding philosophical riddles. Sometimes we just have to try. Reading never ends. Preparation never ends. Trying things out is the best possible way to break new ground. Perhaps the only way.

  13. MDude says:

    I don’t know if I get what you’re describing entirely, but if I want to make something that can think about what it’s paying attention to, I’d guess it’d first need to be able to divide its attention, so it can continue paying attention while it thinks about its attention system. Or it could instead think about what it was paying attention to previously, and decide what to pay attention to in the near future. I’m thinking it would help to have a primary and a secondary attention system, so one can scan for other things that might be worth attention while the other actually considers something in-depth.
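
    A minimal sketch of that two-track idea (invented names and numbers, purely illustrative): a cheap “scanner” keeps ranking everything in view while an expensive “focus” routine works on one item at a time, and the scanner is allowed to interrupt when something sufficiently more salient shows up.

        def scan(items, salience):
            # Cheap, broad pass: rank everything by a quick salience estimate.
            return max(items, key=salience)

        def consider(item):
            # Expensive, narrow pass: think hard about one thing.
            return "analysis of " + item

        def attention_loop(stream, salience, cycles=3):
            focus = None
            for _ in range(cycles):
                candidate = scan(stream, salience)   # secondary system keeps scanning
                if focus is None or salience(candidate) > 2 * salience(focus):
                    focus = candidate                # interrupt: switch focus
                print(consider(focus))               # primary system thinks in depth

        # Toy use: salience is just a made-up number per item.
        weights = {"loud noise": 5.0, "email": 1.0, "blog post": 2.0}
        attention_loop(list(weights), salience=weights.get, cycles=2)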

  14. Curious_Sahoo says:

    I liked your article, and I have a very simple question; you might find it rather silly, but I still want you to answer it if at all possible. I love ice cream. I love ice cream A (I don’t know why, but I love it), yet very often I eat C, D and E. You see the problem here: my behavior is inconsistent, incoherent and non-linear. In the case of a machine, you would do a statistical analysis of the available information/data and you would conclude that “my favorite ice cream is either C, D or E, depending on their frequency”, which is totally wrong. Is it possible to write an algo which will behave this way?

    • Gaurav Gautam says:

      If I wanted to write a program to simulate your behavior, I would just write one to choose the ice cream based on a probability distribution centered and peaked on A. Also, if I were you I would always eat A.
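
      A tiny sketch of that suggestion (the weights are made up): sample each day’s ice cream from a distribution peaked on A, so A remains the single favorite even though, on most days, the pick is something else.

          import random

          # A preference distribution centered and peaked on A.
          flavors = ["A", "B", "C", "D", "E"]
          weights = [0.35, 0.05, 0.22, 0.20, 0.18]   # A is the single favorite...

          month = random.choices(flavors, weights=weights, k=30)
          print(month)
          print(month.count("A") / len(month))   # ...yet most days the pick is not A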

  15. Matthew Simpson says:

    Is it ominous that this is the most recent blog entry viewable here? 🙂

    A relative newcomer to the field, I enjoyed reading your post, as it mirrored the very thoughts I’ve been having while reading through the history and literature of your field. I thought I’d comment, although the post is nearly two years old now, that the first thing that occurred to me reading it, newbie and non-mathematician and non-programmer and so forth that I am, was that you might do well to have your two domains for attention as outlined, whether integrated in the internal process or not, but also a third, null domain of attention – the null domain representing some awareness of a divergence in the cognitive activity elsewhere from its own null value, thus permitting a kind of logical inference of beingness, in its observations, as a divergence from mere nullity. This is probably badly expressed – nullity is merely the form of the thought as I had it, and perhaps the entire thing is superfluous in some sense. But I’ve shared it in any case, perhaps because I simply don’t yet know any better. 🙂

  16. USCF United States Civilian Fo says:

    Machines can never achieve true consciousness as we have it, because they are not sentient beings. They are models, shadows, reflections of sentient beings, and will always remain so, no matter how sophisticated they become. The most realistic painting of an apple, or photograph of an apple, is not an apple.

    • Gaurav Gautam says:

      But what if I cloned apple cells in a culture and made the apple flesh grow as huge cubes that are easy to ship? Why do you assume that painting is the only way to represent an apple? Just because a painting can’t rival an apple doesn’t mean other representations can’t.

      • USCF United States Civilian Fo says:

        Cloning is not a representation of an apple. It is a reproduction. Big difference between a big data apple and a big apple.

        • Gaurav Gautam says:

          Okay, what if I fabricated a food product that is not an apple but looks, smells and tastes just like one? We could take some kind of artificially made starch or cellulose substrate and mix it with sugar and flavor.

          What if I somehow stored the pattern in which your neurons fire when you eat an apple and made them fire in the same way when you are not eating one? I’m not saying this can be done. But what if it could? Would you agree that then the difference between a big-data apple and an apple diminishes?

          What if you have a particularly vivid dream about eating an apple? Isn’t the information about the apple stored in your brain, rivaling the real apple at that point?

          And in any case, an apple can be an analogy to an intelligence or consciousness only so far. If some carbon can achieve consciousness, then why not silicon? How can you make the claim that a machine cannot achieve consciousness? Aren’t we just incredibly sophisticated machines? You are not saying that some kind of soul or other intangible entity is a prerequisite for consciousness, are you?

          • USCF United States Civilian Fo says:

            Carbon has not become conscious. Carbon is inert matter. Consciousness is not inert matter. They are two different substances. Matter does not give rise to consciousness. You cannot combine chemicals and produce consciousness. Study the father of quantum physics, Niels Bohr, and his musings on consciousness.

  17. D Campbell says:

    I’ve been thinking that purpose is the driver of all evolutionary change, and therefore consciousness would come about from complex interactions with direction. Where does that direction come from? We have replication as our main purpose, but also minor purposes that derive from it. I explore this more here: http://www.jesaurai.net/uncategorized/consciousness-and-ai-creating-a-mind/

  18. Kirk Hughey says:

    Consciousness is a subjective state – it cannot be proved to exist by any external, ‘objective’ means. If I assume that you, my dog, or a cockroach are conscious in the way I am, it will be because I see that we are all alike in being organic, autonomous, intentional life-forms. But it is still only a working assumption. If an ASI, my computer or a thermostat doesn’t have the above qualities, I can assume they do not have consciousness, even if they can mimic its effects very closely. They may well be conscious, or conscious in a very different way – but I will never know that for certain. It relates in a way to the fact that, from a strict materialist point of view, all actions take place in a causally closed material reality that excludes any of our thoughts, feelings, intentions, decisions (and discussions or attempts to convince others) as so much irrelevant, ineffectual “pixie-dust”.

  19. Gaurav Gautam says:

    That is one scary picture you painted for the birth of an AI consciousness. I always thought – even though I had no reason to – that an AGI would more likely turn out to be good. Now suddenly Elon Musk’s demon doesn’t seem all that far-fetched.

    • Linas Vepstas says:

      I don’t believe that there are forces within the device itself, driving it towards good or bad. However, as humans, as a society, we do exert an effect. We can train dogs to be nasty, or to be nice. We can do the same for AI. I’m very tempted to say that high-IQ AI will generally be sufficiently self-aware to be nice. On the other hand, I’ve met some high-IQ people who are quite nasty, so …

  20. Gabi Lordad says:

    I think any kind of consciousness starts with the perception of existence. The perception of “am”, not “I am”, because at the very bottom of consciousness there is no “I”; there is no seeing, no hearing, no feeling, no knowing, no remembering, no attention, no wanting – nothing but “am”.

    So I am very sure that modeling of consciousness starts with modeling the perception of existence, the perception of “am”. It must be understood how it works, and modeled, and then higher functions of consciousness can be added like onion peels.

    Linas, I think what you are speaking about are the higher functions of consciousness, and I think that they are all referencing the “am” constantly. They are centered by and on the “am”. And it might turn out that without understanding this mechanism, any model of consciousness will work poorly.

    And I think also that it is not a question of the first-, second- or third-person perspective. The question is how it works and how to model it.

  21. blind desire says:

    The first 3 paragraphs plus the first 2 sentences of the next one are the sanest, briefest, most refreshing take on (description/explication of) “consciousness for practical purposes” I have ever seen.
