Hi there. I'm new to the forum.
These are some really insightful ideas. I have a few thoughts to add to the pile; not sure if they will help us design a good experiment, or just muddy the waters further, but I hope they will help a little.
Firstly, it occurs to me that the Teleportation Problem actually tells us something very important about the nature of consciousness, if it exists. If there's a transporter accident on the Enterprise, and the transporter makes a copy of me down to every subatomic particle, I know the copy is not me, because I will not be experiencing life from their perspective. I will not suddenly be seeing through their eyes as well as my own, because there is (presumably) no communication between our brains, separated by space. We will not be processing the same information, and our thoughts and behaviors will subsequently diverge just from having different experiences.

What this tells us about consciousness (and possibly even information in general) is that it possesses spatial locality, i.e. it has a position or coordinates in space that you can point to. It may have no volume, like a point particle, or be distributed in space, like an electromagnetic or acoustic wave, but it is "somewhere" and cannot occupy the same space at the same time as another consciousness, at least not without some interference (presumably). I find this exciting because we know a thing or two about space, and can manipulate it to some degree, so that would give us a place to start in designing an experiment to test consciousness's existence and other properties, though I personally don't know where to go from here.
Secondly, a lot of the mind-uploading or transhumanism schemes I've heard of that involve the gradual replacement of neurons with computer chips, or the interception of synaptic signals between neurons, base their reasoning on the Ship of Theseus thought experiment: if you replace all the parts of a boat over time as they wear out, is it still the same boat? (Let me hasten to point out that the thought experiment never came with a definitive answer.) The problem with applying this reasoning to the human brain is that, while neurons do repair and replace their molecular components over time, to the best of our current knowledge the neurons themselves are NOT replaced. My understanding is that there's still debate within the scientific community about whether or how much new neurons are grown in the adult brain, but with the exception of some adult neurogenesis observed in the human limbic system and nowhere else, the neurons you have now are the same ones you had when you were born, and the same ones you'll have when you die.
I personally consider this a big blow to the idea of substrate independence, because if we think of the brain as a machine for supporting continuous consciousness or cognition, animal or human, then why wouldn't it be able to regenerate itself like every other tissue in the body? Granted, the brain is already remarkably resilient to injury, but it's believed that the reason a person can survive a piece of rebar or pipe going through their head is that most of the brain's volume isn't actually cell bodies but the axons and dendrites branching between them, and those degenerate and regrow all the time, every time you remember something in fact. So it's not hard for the neurons to grow connections around an obstruction, which just pushes the cell bodies out of the way instead of killing them.
I imagine trying to replace neurons in a brain is like trying to replace transistors on a computer chip while it's running, or boards on a boat while it's in the water: you can probably pull it off, but data is likely to get corrupted while you swap the transistor, or water is going to get into the boat while you swap the board, potentially crashing the program or sinking the boat. Replacing parts during operation interferes with the function of the device. You can replace one atom or protein of a transistor or neuron without significantly affecting its function, because it has enough of them for any one to be redundant, and the Ship of Theseus is still the Ship of Theseus. But if you try to replace the whole thing at once, or replace one with another, it isn't: even though they both float, a speedboat with the same title is not the Ship of Theseus, nor is the US Navy still appreciably the US Navy if you replace one or all of its ships with Roman triremes, because its components no longer perform the same functions or processes. Likewise, a neuron is more than the sum of its parts. If you remove a simulated neuron from a digital neural network, all of a sudden it can no longer tell the difference between a picture of a koala and a picture of a Dixie cup, and while you can get that functionality back by replacing the neuron, it takes time to re-learn the information the old neuron stored in the form of its connections with other neurons. Why should we assume a biological neural network is any different, except in being a lot more fragile?
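To make that last analogy concrete, here's a toy sketch in Python (just numpy and a made-up XOR-style task; nothing to do with real koalas, Dixie cups, or actual neuroscience): train a tiny network, zero out one hidden "neuron," then replace it with a fresh one and retrain. The point is only that the replacement unit has to re-learn from the data; it doesn't inherit what the old unit stored in its connections.

```python
# Toy sketch: ablate one hidden unit in a tiny neural network, then
# replace it and retrain. The task and network size are made up purely
# for illustration; only numpy is assumed.
import numpy as np

rng = np.random.default_rng(0)

# A simple nonlinear task: classify points by the XOR of their signs.
X = rng.uniform(-1, 1, size=(400, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)        # hidden layer activations
    p = sigmoid(h @ W2 + b2)        # output probability
    return np.mean((p > 0.5) == y)

def train(W1, b1, W2, b2, steps=5000, lr=1.0):
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Cross-entropy gradients, backpropagated by hand.
        d_out = (p - y) / len(y)
        d_h = np.outer(d_out, W2) * (1 - h**2)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum()
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

# Train a network with 8 hidden "neurons".
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, 8);      b2 = 0.0
W1, b1, W2, b2 = train(W1, b1, W2, b2)
print("trained accuracy:              ", accuracy(W1, b1, W2, b2))

# "Kill" one hidden unit: its learned connections are gone, and
# accuracy usually suffers to some degree.
W1[:, 0] = 0.0; b1[0] = 0.0; W2[0] = 0.0
print("after ablation:                ", accuracy(W1, b1, W2, b2))

# Replace the dead unit with a fresh, randomly initialized one and
# retrain: the function can come back, but only by re-learning from the
# data, not by recovering what the old unit stored.
W1[:, 0] = rng.normal(0, 0.1, 2); b1[0] = 0.0; W2[0] = rng.normal(0, 0.1)
W1, b1, W2, b2 = train(W1, b1, W2, b2)
print("after replacement + retraining:", accuracy(W1, b1, W2, b2))
```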
This leads into my third thought on the subject, which is the definition of "non-consciousness," or death. If we define it as a cessation of electrical activity in the brain, as Hub suggested, how exactly do we define "electrical activity," and over what timescales? Technically, chemical reactions are electrical activity, in that electrons are moved and shared between atoms during bonding; is there still "electrical activity" in the brain while it is chemically decomposing? I don't think anyone would agree with that, so we need to sharpen our definition of "electrical activity in the brain" to the very specific electro-chemical processes that occur within neurons and across synapses, ranging from ion pumping through the cell membranes to the dendrite branching that is essential for memory formation and recall.
Furthermore, how do we clearly define "cessation" of this activity? Consider the slowed-down black hole supercomputers discussed in "Civilizations at the End of Time," where it may take a hundred years to flip a bit. If we watch that thing for a few decades and detect no measurable electrical activity, does that make the supercomputer "dead" by this criterion? We would have to say "yes," so there's probably a problem with this criterion too. I think it would be safer to sharpen the definition of brain-death in this context to mean not an absence of measurable activity, but the neurons being in such a state that they cannot perform the processing functions described above without extensive repair or replacement. And even if you do that, the information they contained might be lost forever, like a thumb-drive you accidentally stepped on: you can repair it one transistor at a time, atom by atom, if you really want to, but you can't get back the information it held unless you already have a copy, in which case why do you need or want to repair the original?
This raises another question, which I alluded to with the Teleportation Problem and didn't actually notice until I started writing this: does information itself have identity? If we assume every atom in the universe is unique from every other atom, in that no two can be in the same place at the same time occupying the same states, and they are thus distinguishable from each other, then it makes sense that any information about one atom is unique to it, even if only in the respect that a measured state, like the up or down spin of one of its electrons, was measured from that atom or electron and not the one next to it. If you arrange a group of atoms in a grid, as though representing a matrix, that group holds information distinct from another, otherwise identical grid of atoms; even if each atom has exactly the same state as its counterpart, the information in each matrix is unique just by virtue of the atoms storing it.
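In programming terms (a loose analogy only, not physics), this is the difference between two objects holding equal values and being the same object. The little grids below are just made-up stand-ins for the "grids of atoms" above:

```python
# Two data structures can hold byte-for-byte identical states and still
# be distinct objects with their own identities.
grid_a = [[0, 1, 0], [1, 1, 0], [0, 0, 1]]
grid_b = [[0, 1, 0], [1, 1, 0], [0, 0, 1]]

print(grid_a == grid_b)        # True  -> the *states* are indistinguishable
print(grid_a is grid_b)        # False -> they are still two different objects
print(id(grid_a), id(grid_b))  # distinct identities ("where" each one lives)

grid_a[0][0] = 9               # touching one does not touch the other
print(grid_b[0][0])            # still 0
```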
If we assume this is true, then information cannot actually be transferred from one physical system to another, because it is unique to the object or system it is measured from; it can only be copied and re-encoded from one substrate to another, and as we established, a copy of a thing is not the thing itself. (This gets even more complicated when you factor in the no-cloning theorem of quantum mechanics.) The information on this screen, then, is not going into my eyes and brain; rather, the information in the states of the atoms on the screen gets encoded onto photons, which gets encoded into chemical states on my retina, which gets encoded as voltages or electron positions in my optic nerves, which gets encoded into the physical geometry of the neurons in my occipital lobe. No "information" is transferred from one particle to another, any more than atoms in a transverse mechanical wave move in the direction of that wave; they move perpendicular to it. We merely interpret this as the "movement" of information, in that the laws of physics allow the state of one object to change the state of another in a particular way that lets us infer the state of the first.
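Here's a minimal toy sketch of that wave analogy (nothing to do with real optics or neurons): a "pulse" of state travels down a row of cells, but no cell ever moves or hands anything physical to its neighbor; each step, every cell just takes on a copy of its neighbor's previous state.

```python
# A pulse propagates along the row even though no cell moves;
# only copies of state move from neighbor to neighbor.
cells = [0] * 12
cells[0] = 1                      # the initial "excited" state

for step in range(12):
    print("".join("#" if c else "." for c in cells))
    # next state: each cell copies its left neighbor's previous state
    cells = [0] + cells[:-1]
```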
However, the Ship of Theseus question muddies the waters again, because if information is unique to the object holding it, then neurons replacing their atoms are still removing and adding information unique to you, yet we would say your consciousness is still continuous. If that is the case, consciousness and information are being copied and re-encoded onto new atoms all the time, in which case why couldn't they be encoded onto a new substrate without problems? And if such transfer is constantly occurring between particles and doesn't threaten continuity of consciousness, why stop at transferring consciousness from neurons to computer chips? Why couldn't that "wave" of state differentials be transferred to a stone, or water, or the vacuum of space itself? This is a problem I've always had with the suggestion that consciousness can be transferred between substrates, because if it can, how is that much different from the idea of a soul or spirit? And if it can be encoded onto any substrate, might we be denying ourselves some other kind of afterlife by downloading ourselves into computers? It starts smelling a bit like various religions, and I understand discussion of such topics is discouraged on this forum, at least insofar as they invite vitriol.
My personal opinion on the subject of substrate independence is that it's safer to assume consciousness is substrate dependent, because if I sign up for mind-uploading, worst case scenario: I die; best case scenario in the current state of the art: I get cloned. Therefore I'd prefer to invest in life-extension, particularly brain-extension, until mind-uploading is the only option left, and I've got nothing else to lose.
Hope that helps, or at least gives you something interesting to think about. Sorry I talk so much.