• May 15, 2020, 7:56 a.m.

    Substrate independence is the idea that consciousness can reside on different kinds of physical or digital substrates and isn't exclusive to a biological brain. How do you falsify substrate independence if you cannot directly observe consciousness? I can see taking free will as a given, but should we take it as a given that our uploaded mind is a conscious mind? I'd like to know if my potential new neighbors are P-zombies before I move into the Titan supercomputer.

  • June 6, 2020, 4:29 p.m.

    I think it is really hard, because we don't really even know what 'consciousness' is yet. But I have some thoughts on this.

    So 'I think therefore I am' comes to mind - let's say my brain is damaged in some way - does this affect my consciousness? Or if a drug changes the way my brain thinks - is my consciousness altered? Maybe then we can organise the different levels of intervention into a spectrum, where we can judge the critical point where consciousness 'dies':

    1. My normal functioning awake biological human brain.
    2. Normally functioning asleep biological human brain.
    3. Temporary drug-induced impairment which makes my brain sloppy (alcohol, for instance)
    4. Drug-induced brain sleep (anaesthetics, for instance)
    5. Damaged brain - causing distraction and loss of concentration
    6. Very damaged brain - causing loss of motor functions, loss of speech, loss of observable function
    7. Permanent brain sleep with electrical activity in the brain and ongoing body functions, like a coma
    8. Loss of electrical activity in the brain
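    A toy way to make that spectrum concrete is to treat it as an ordered scale with a debatable cut-off. This is purely illustrative - the level names and the threshold are just assumptions lifted from the list above, not a claim about where consciousness actually ends:

```python
from enum import IntEnum

class BrainState(IntEnum):
    """Levels of intervention from the spectrum above (1 = fully awake)."""
    AWAKE = 1
    ASLEEP = 2
    INTOXICATED = 3
    ANAESTHETISED = 4
    DAMAGED = 5
    SEVERELY_DAMAGED = 6
    COMA = 7
    NO_ELECTRICAL_ACTIVITY = 8

def is_conscious(state: BrainState,
                 threshold: BrainState = BrainState.DAMAGED) -> bool:
    """Judge consciousness by a (debatable!) cut-off point on the scale."""
    return state <= threshold

print(is_conscious(BrainState.INTOXICATED))  # drunk but still conscious
print(is_conscious(BrainState.COMA))         # beyond the chosen cut-off
```

    The interesting argument in the thread is, of course, exactly where that `threshold` parameter should sit - or whether a single cut-off exists at all.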

    So, where does death occur, or loss of 'consciousness'? No one really argues that you are not 'conscious' while drunk, but what about asleep? Are you conscious then? Do you suddenly regain 'consciousness' when you wake? Initially I was thinking consciousness lasted up to 5 on the scale, but perhaps not 6.

    And what of distortions by disease or drugs - such as Alzheimer's disease or even mind-altering drugs - are they conscious? Perhaps the threshold is a bit more blurry than we think here, and perhaps we are all conscious up to 8.

    A scientific way forward could be defining consciousness as electrical activity in the brain, which appears to match observation at point 8 on the scale - it is generally accepted that the complex signals that occur in the brain are very difficult to replicate or restart after the brain dies, and are irrecoverable. This raises the question, though: if we could restart the same patterns, does your consciousness suddenly reappear? Maybe then even '8' on the scale is not low enough to determine consciousness, and death is not permanent.

    If this is true then consciousness is about configuration: a fleeting moment in time of electrons and photons buzzing from one part of our brain to another. We experience this place now at this moment, and yet if this is replicated somewhere else (no matter when or where) then we experience it again, over there, in that moment. Indeed, maybe the Buddhists had it right all along - we are resurrected, just as everyone else, or in everybody, or everything.

    So - I suppose my conclusion is then a qualified 'yes', you could experience the same pattern in another 'substrate'. However, it is not you 'now' - you're currently you and will continue to be so until you reach '8' on the scale. If someone replicates the same pattern as you 'now', then you will also experience that pattern - not at the same time or place, but at that time and place. It gets complicated, but I don't think our consciousness is then defined solely by our 'substrate'.

    It also makes me think that we live in an amazing and unique world - how wonderful it is to know that your configuration of electrons and photons in your brain is moving through the world and experiencing it in all its glory and at this unique place and time. It is a remarkable privilege.

  • June 11, 2020, 8:47 p.m.

    Imagine being on board a ship orbiting Titan. Anchored to the floor via spin gravity, you attend to some plants you grow, which provide you with fresh veggies in addition to your stored food supplies. Your nine-year-old daughter is with you and you are explaining photosynthesis to her. She peers out the window, sees the giant pyramid-shaped mega computer on the moon below and asks you, "Daddy, does the sun shine on plants in the virtual worlds of the Titan mega computer? Do they have photosynthesis there?" You answer, "From where we stand now, it's not really photosynthesis, it's just a simulation run on silicon... electrons moving around in a particular pattern within the processors... but to the people who live in that computer the photosynthesis is just as real to them as real photosynthesis is to us." Your daughter looks up at you and asks, "Daddy, if the photosynthesis in the mega computer isn't real photosynthesis, how can the people be real people?"

    Instead of thinking about how to answer her last question, you begin to think that now is probably not the best time to break the news to her that you and she will soon be moving into the Titan Mega computer.

    Photosynthesis isn't a computation. It is a quantum process operating on specific kinds of matter organized in a very specific way. Why can't consciousness be like photosynthesis?

  • June 26, 2020, 4:24 p.m.

    My son bought me a VR headset for Christmas. I can use it to interact with people all over the world. When using that headset, sometimes people I am interacting with will, for all appearances, cease to be conscious. Maybe they lagged out, or maybe they took off their headset to attend to a real-world need. They look deadish to me, but their consciousness, I assume, hasn't ceased to be. It has simply been pulled from the reality we once shared. What does that suggest? Observations of consciousness, or the lack thereof, are not necessarily reliable indicators of its existence.

  • Jan. 7, 2021, 7:52 p.m.

    Hi there. I'm new to the forum.
    These are some really insightful ideas. I have a few thoughts to add to the pile; not sure if they will help us design a good experiment, or just muddy the waters further, but I hope they will help a little.

    Firstly, it occurs to me that the Teleportation Problem actually tells us something very important about the nature of consciousness, if it exists. If there's a transporter accident on the Enterprise, and the transporter makes a copy of me down to every subatomic particle, I know the copy is not me, because I will not be experiencing life from their perspective. I will not suddenly be seeing through their eyes as well as my own, because there is (presumably) no communication between our brains, separated by space. We will not be processing the same information, and our thoughts and behaviors will subsequently diverge just from having different experiences.

    What this tells us about consciousness (and possibly even information in general) is that it possesses spatial locality, i.e. it has a position or coordinates in space that you can point to. It may have no volume, like a point particle, or be distributed in space, like an electromagnetic or acoustic wave, but it is "somewhere" and cannot occupy the same space at the same time as another consciousness, at least not without some interference (presumably). I find this exciting because we know a thing or two about space, and can manipulate it to some degree, so that would give us a place to start in the design of an experiment to test its existence and other properties, though I personally don't know where to go from here.
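    The divergence argument can be sketched as a toy simulation (the 'mind update' function and the two experience strings are invented for illustration - the point is only that identical states fed different inputs stop being identical immediately):

```python
import hashlib

def think(state: str, experience: str) -> str:
    """Toy 'mind update': the next state is a function of the current
    state plus new sensory input (here modelled as a hash)."""
    return hashlib.sha256((state + experience).encode()).hexdigest()

# Perfect duplication: both minds start in exactly the same state.
original = copy = "identical-state-after-transport"

# The two bodies now stand in different places and receive different input.
original = think(original, "materialised on the planet surface")
copy     = think(copy,     "still standing on the transporter pad")

print(original == copy)  # False: perfect copies diverge at once
```

    Note that no communication between the two states is modelled, matching the "separated by space" assumption above.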

    Secondly, a lot of the mind-uploading or transhumanism schemes I've heard of that involve the gradual replacement of neurons with computer chips, or interception of synaptic signals between neurons, base their reasoning on the Ship of Theseus thought experiment: if you replace all the parts on a boat over time as they wear out, is it still the same boat? (Let me hasten to point out that no definitive answer to this question was given with it.) The problem with trying to apply this reasoning to the human brain is that, while neurons do repair and replace their molecular components over time, to the best of our knowledge at this time, the neurons themselves are NOT replaced! My understanding is that there's still debate within the scientific community about whether, or how much, neurons are grown or replaced in the brain, but with the exception of some adult neurogenesis observed in the human limbic system and nowhere else, the neurons you have now are the same ones you had when you were born, and the same ones you'll have when you die.
    I personally consider this a big blow to the idea of substrate independence, because if we think of a brain as a machine to support continuous consciousness or cognition, animal or human, then why wouldn't the brain be able to regenerate itself like every other tissue in the body? Granted, the brain is remarkably resilient to injury already, but it's believed that the reason a person can survive a piece of rebar or pipe going through their head is that most of your brain isn't actually brain cell bodies, but the axons and dendrites branching between them, and those are degenerating and regrowing all the time - every time you remember something, in fact - so it's not hard for the brain cells to grow connections around an obstruction, which just pushes the nuclei out of the way instead of killing them.
    I imagine trying to replace neurons in a brain is like trying to replace transistors on a computer chip while it's running, or boards on a boat while it's in the water: you can probably pull it off, but data is probably going to get corrupted while you replace the transistor, or water is going to get in the boat while you replace the board, potentially crashing the program or sinking the boat, respectively. Replacing those parts during operation is going to interfere with the function of the device. You can replace one atom or protein of a transistor or neuron and not affect its function significantly, because it has enough of them for that one atom to be redundant, and the Ship of Theseus is still The Ship of Theseus. But if you try to replace the whole thing at once, or replace one with another, it's not; even though they both float, a speed boat with the same title is not The Ship of Theseus, nor is the US Navy still appreciably the US Navy if you replace one or all of its ships with Roman triremes, because its components no longer perform the same functions or processes. Likewise, a neuron is more than the sum of its parts. And if you remove a simulated neuron from a digital neural network, all of a sudden it can no longer tell the difference between a picture of a Koala and a picture of a Dixie Cup, and while you can get that functionality back if you replace the neuron, it will take time to re-learn the information that old neuron had stored in the form of its connections with other neurons; why should we assume a biological neural network is any different, except in being a lot more fragile?
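    The "remove a simulated neuron" point is essentially what machine-learning people call ablation. A minimal sketch, with arbitrary made-up weights rather than a trained Koala-vs-Dixie-Cup classifier: zeroing one hidden unit changes the network's output, because each unit carries part of the overall function.

```python
# A tiny fixed two-layer network (weights are arbitrary, not trained).
W1 = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]   # 3 hidden units, 2 inputs each
W2 = [0.7, -0.4, 0.5]                          # output weights

def forward(x, ablate=None):
    """Run the network; optionally 'ablate' (zero out) one hidden unit."""
    hidden = [max(0.0, w[0] * x[0] + w[1] * x[1]) for w in W1]  # ReLU
    if ablate is not None:
        hidden[ablate] = 0.0   # the removed neuron contributes nothing
    return sum(h * w for h, w in zip(hidden, W2))

x = [1.0, 2.0]
print(forward(x))             # intact network
print(forward(x, ablate=2))   # same input, one neuron removed
```

    Retraining could recover the lost function, but only by re-learning it into the remaining (or replacement) units - which mirrors the point above about the time cost of replacing a whole neuron at once.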

    This leads into my third thought on the subject, which is the definition of "non-consciousness" or death. If we define it as a cessation of electrical activity in the brain, like Hub suggested, how exactly do we define "electrical activity" and over what timelines? Technically chemical reactions are electrical activity in that electrons are moved and shared between atoms during bonding; is there still "electrical activity" in the brain when it is in the process of chemical decomposition? I don't think anyone would agree with that, so we need to sharpen our definition of "electrical activity in the brain" to the very specific electro-chemical processes that occur within neurons and between synapses, ranging from ion pumping through the cell membranes to the dendrite branching that is essential for memory formation and recollection.
    Furthermore, how do we clearly define "cessation" of this activity? Consider the slowed down black hole super computers discussed in "Civilizations at the End of Time," where it may take 100 years to flip a bit. If we watch that thing for a few decades and determine there's no measurable electrical activity, does that make the super computer "dead" by this criterion? We would have to say "yes," so there's probably a problem with this criterion too. I think it would be safe to sharpen the definition of brain-death in this context to mean not an absence of measurable activity, but the neurons being in such a state that they cannot perform the previously described processing functions anymore without extensive repair or replacement, and even if you do that, the information they contained might be lost forever, like accidentally stepping on a thumb-drive; you can repair it one transistor at a time, atom by atom, if you really want to, but you can't get back the information they held unless you already have a copy, in which case why do you need or want to repair the original?

    This begs another question which I alluded to with the Teleportation Problem, and actually didn't realize until I started writing this. Does information itself have identity? If we assume every atom in the universe is unique from every other atom in the universe, in that they can't be in the same place, at the same time, occupying the same states and are thus differentiable from each other, it kind of makes sense that any information about one atom is going to be unique to it, even if it is only in the respect that a measured state, like the up or down spin of one of its electrons, was measured from that atom or electron, and not the one next to it. If you have a group of atoms together in a grid, as though representing a matrix, that group has information unique from another otherwise identical grid of atoms; even if each atom in each spot has the exact same state as its counterpart, the information in each matrix is unique just by virtue of the atoms storing it.
    If we assume this is true, then in actual fact, information cannot be transferred from one physical system to another, because it is unique to the object or system it is measured from; it can only be copied and encoded from one substrate to another, and as we established, a copy of a thing is not the thing itself. (This gets even more complicated when you factor in the "no-cloning" theorem of quantum mechanics.) The information on this screen therefore would not be going into my eyes and brain; rather, the information in the form of the states of the atoms on the screen gets encoded onto photons, which then get encoded into chemical states on my retina, which then get encoded into voltages or electron positions in my optic nerves, which then get encoded into the physical geometry of the neurons in my occipital lobe. There is no "information" being transferred from one particle to another, any more than atoms in a transverse mechanical wave move in the direction of that wave; they move perpendicular to it. We may therefore merely interpret this as the "movement" of information, in that the laws of physics allow the state of one object to change the state of another in a particular way that lets us infer the state of the first.
    However, the Ship of Theseus question muddies the waters again, because if information is unique to the object holding it, then the act of neurons replacing their atoms still removes and adds information unique to you, but we would say your consciousness is still continuous. And if that is the case, consciousness and information are still being copied and encoded to new atoms all the time, in which case why couldn't they be encoded to a new substrate without problems? And if such transfer is constantly occurring between particles and is non-problematic for continuity of consciousness, why stop at transferring consciousness from neurons to computer chips? Why couldn't that "wave" of state differentials be transferred to a stone, or water, or to the vacuum of space itself? This is kind of a problem I've always had with people suggesting that consciousness can be transferred between substrates, because if it can, what makes that much different from the idea of a soul or spirit? And if it can encode to any substrate, might we be denying ourselves some other kind of afterlife by downloading ourselves into computers? It starts smelling a bit like various religions, and I understand discussion of such topics is discouraged on this forum, at least in so far as they invite vitriol.

    My personal opinion on the subject of substrate independence is that it's safer to assume consciousness is substrate dependent, because if I sign up for mind-uploading, worst case scenario: I die; best case scenario in the current state of the art: I get cloned. Therefore I'd prefer to invest in life-extension, particularly brain-extension, until mind-uploading is the only option left, and I've got nothing else to lose.

    Hope that helps, or at least gives you something interesting to think about. Sorry I talk so much.

  • Jan. 11, 2021, 2:47 p.m.

    You don't even need science fiction for the teleportation question.

    In a patient who has had the two hemispheres of their brain cut apart from each other, when they are tested, it seems each side of the person's brain believes itself to be the original, complete person. This in spite of the fact that each hemisphere doesn't seem to know what the other is thinking or even doing.

    But even more extreme are hemispherectomy patients:

    In these people an entire half of the forebrain has been amputated, usually for untreatable epilepsy. But after surgery, family members say that the patient's personality and sense of humor remain entirely intact, with maybe some paralysis on one side.

    This paints the picture that our hemispheres are basically 2 copies of one personality. Not exact copies, but close enough to fool our loved ones and even ourselves. Neither one is the 'real' or 'fake' person.

    The uploading of one side to another probably happens continuously throughout our lives. A computer analogy would be two servers which mirror each other by exchanging all of their most recent changes every day.
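    The mirrored-servers analogy can be sketched as two key-value stores that exchange all their recent changes and converge (the 'memories' and the merge rule are invented for illustration; real hemispheres are of course not exact mirrors, as noted above):

```python
# Two 'hemispheres' as key-value stores, synced like mirrored servers.
left  = {"memory_a": "coffee this morning"}
right = {"memory_b": "saw a red car"}

def sync(a: dict, b: dict) -> None:
    """Exchange all changes so both copies converge to the same contents."""
    merged = {**a, **b}   # union of both sides' entries
    a.update(merged)
    b.update(merged)

sync(left, right)
print(left == right)  # True: after syncing, neither copy is the 'original'
```

    After the sync, asking which store is "the real one" has no answer - which is the point of the hemisphere analogy.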

    So it's kind of amusing that all the pop fiction imagines uploading ourselves to computers (probably the hardest way possible to do such a thing), when it is a lot more plausible to upload ourselves to real, living brain tissue.


    JPG, 196.3 KB, uploaded by MultiTool on Jan. 11, 2021.

  • Jan. 12, 2021, 6:28 p.m.

    I'd be interested to know how such experiments are conducted. Do you have reference sources I could look at?

    ... no reason.


  • Jan. 13, 2021, 4:06 p.m.

    I had a debate with my brother about this when I was young. It's a wonderful debate. I remember a point that if the Enterprise transporters managed to copy the entirety of my brain, including all the memories, then from the point of view of the materialised 'new me', my historical memory would tell me I'm the same person.

    I would remember walking up to the transporter pad, saying 'energise', and materialising at the destination. My memory is clear, and I am utterly convinced that that person before was the same me.

    Then again, I look back now at some of my dim early memories, and realise how fallible memories are. I distinctly remember everyone saying 'Kindergarden', and read it as such, and only recently did I discover it was in fact 'Kindergarten'. How could I get it wrong, in particular a word so frequently used? I struggle to remember memories from early childhood - my grandmother's house, dim cloudy days and mist, a tornado? (I don't think it was, but at the time I thought I saw one). I know it is logical to think that my body was there experiencing things, but to be honest, 'I' wasn't there - I simply can't be where I can't remember being.

    Add to this that if I took a prescription drug that alters the way I think, could you say that that was me? I suppose my conclusion is our experience is subjective, and paradoxically also not all of who we are. Evidence tells me that I'm more than my memories, and thus I agree with you @gwolffe356 - I wouldn't step on that transporter pad either.

    About life extension though - that is also a scary thought. Imagine all the grumpy old men we know, and imagine them living for another 10,000 years. They would be very, very, very grumpy by then.

  • Jan. 14, 2021, 4:19 p.m.

    That is the perfect movie scene for this post.


    Rereading this, the split-brain outcomes are not exactly as I had thought they were. The hemispheres are definitely not identical, but both do act like independent people, one who can speak and one who cannot.

    You know, this could lead to a sci fi/horror story of someone who achieves immortality by splitting their brain and putting each half into 2 unfortunate victims every generation.

    Every time the mad doctor installs his/her half-brain in a victim, they also install a 'blank' hemisphere (3d printed? cloned baby brain?), which will absorb all of the thoughts and experiences of the older mad-doctor hemisphere, becoming a complete, imperfect 2-hemisphere copy of the doctor's personality.

    Eventually you would have hundreds of slightly-different copies of the same doctor running around, possibly getting into fights with each other. The protagonists have to hunt them all down and stop them.

    Since it is uploading without computers, maybe it could be set around the 1950s, or even 1800s, Frankenstein's century.

  • Jan. 19, 2021, 7:31 a.m.

    If over a hundred years of service all the parts on a particular ship are replaced such that none of it is original, can you say the ship you see today is the same one commissioned out of the shipyard a century before? That is a curious question, but I don't think it useful. A ship existing for 100 years is what we observe. 'Original' and 'different' are just adjectives which have more emotional utility than objective utility.

    If you take ship parts and arrange them in particular manner what emerges from that process is a ship. Does replacing a part of that ship cause the ship to de-emerge? That depends on the part. Replace a deck stanchion and the ship doesn't de-emerge. Replace the entire hull in one repair and I think de-emergence happens. Replacing one biological neuron with a synthetic one is probably like replacing a deck stanchion.

  • Jan. 23, 2021, 7:54 a.m.

    Here's another thing to consider: the big problem with testing if human consciousness can be transferred from one substrate to another is that, if the result is merely a copy, that copy will have all of the original’s memories, and believe itself to be the original; meanwhile the original consciousness may be cut off during the transfer, however gradual, if it is indeed substrate dependent, and will not be able to tell the scientists that the experiment failed. In not so many words: you cannot prove substrate transfer of consciousness, because no one can be sure if the result is the original or a copy.

    What occurred to me though was that this thought experiment relies on memory, namely the copy’s memory of self; so what happens if we turn that situation on its head? If a person got amnesia or had a dissociative fugue and lost their memories, ranging from those necessary to maintain a sense of self to even basic motor and speech functions, is that person no longer the same consciousness but a new one? If they regain their memories at the end of the fugue, is the consciousness they were in the interim period now dead? While there is no clear answer, I think most people would argue that, whether you lose your memories or not, you are still the same consciousness, the same person, because you maintain continuity. If true, then it means continuity of the thinking process is more important to consciousness and identity than the memories or data being processed; e.g. having all of Abraham Lincoln’s memories does not make you Abraham Lincoln and never will, no matter how firmly you may believe it. That does not directly tell us whether consciousness is dependent on its substrate or not, but what it does tell us is that any transfer or replacement of that substrate must not interfere with the continuity of that conscious process.