The foremost assumption made in this book is that we can understand how the human brain works if we crack the neural code: how the brain encodes the sensory information it receives and moves information around to perform cognitive tasks such as thinking, learning, problem solving, internal visualization, and internal dialogue. A second key assumption is that such an understanding will give us the knowledge to create a human-level artificial intelligence (HLAI): a machine that can perform cognitive tasks at the level the best humans can achieve. There’s a lot wrapped up in these statements, for example:
- Is a computer simulation of a brain sufficient to make it intelligent?
- Do you need consciousness to have intelligence?
- Do you need to be alive to have consciousness?
If your answer to the first of these questions is no, then I hope this book will open you to counterarguments; if your answer is yes, then I trust this book will provide material for your research or feed your curiosity. I will return to these questions, including whether a computer simulation can possess HLAI, in Part Three.
To speculate, an HLAI machine will open new dimensions of capability. Our first tentative steps towards HLAI may well be primitive, possibly as slow as, or slower than (compared to modern computers), the human brain or the brains of simpler animals with fewer neurons. But once we crack the neural code we will engineer superior brains with greater capacity, speed, and supporting technology that will surpass the human brain. This is even before transcendence, or the so-called singularity (Kurzweil, 2005), is reached, where AI machines create the next generation of intelligent machines that attain even higher levels of intelligence.
I have opted not to discuss whether these prospective developments in AI are good for humanity, except to say that I think developments in HLAI can offer huge benefits for mankind. However, society must also act to control this technology and prevent its misuse. Knowledge is value-free; it is up to society to ensure it is used morally and ethically. These are deep and complex topics that are beyond the scope of this work.
There is a general principle assumed here that all life forms on Earth share common features in their makeup: evolution reuses what it has created, typically incrementally in more evolved forms. This assumption is borne out in neuroscientific research. It is, for example, what earned Eric Kandel the Nobel Prize for his work on memory, which was conducted on a sea snail, yet the principles learned in that research are relevant to the human brain. Hence research on simpler life forms is a useful approach to understanding the workings of the human brain. Take the honeybee: with some one million neurons, it is capable of a surprising degree of higher cognitive function, including the ability to cope with the concept of ‘sameness’, recognize human faces, use top-down visual processing, solve complex maze-type problems, and show context-dependent learning (Rogers et al., 2013).
There is a huge amount of scientific research on the brain conducted across many disciplines: the many branches of neuroscience, psychology, and AI research. But the scientists and engineers work largely within their silos, writing in their house journals and attending their disciplines’ favored conferences. There isn’t enough crossover. It is a kind of scientific Tower of Babel, where scientists rarely come up to the surface to communicate with colleagues from other disciplines. This will have to change, because I’m convinced that only a multi-disciplinary approach will crack the neural code.
While some AI researchers have the goal of achieving HLAI, neuroscientists working on understanding the human brain have more varied aims, and some can be dismissive of such AI research because of its primitiveness compared with the sheer complexity of biology, even of the simplest single-cell lifeform. But neuroscientists (cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists …) are not trained in computation and may not appreciate the clues their work offers to AI researchers. Of course, there are disciplines such as computational neuroscience (computational neurophysiology, evolutionary cognitive neuroscience, computational cognitive science) that may speak the language of AI researchers (i.e., mathematics and simulation) but have radically different research aims. And then cognitive psychologists form another research group altogether. The proliferation of specialisms creates barriers to the flow of useful information between them.
There is another challenge, nicely put by Terrence Sejnowski (Anderson and Rosenfeld, 2000), who moved across disciplines, from relativistic physics to neuroscience and biology to neural modelling: once you start experimenting in biology, the complexity and detail are so deep that you don’t get a chance to rise above them to find system-level answers. The challenge becomes understanding which details are important to carry forward into an engineering model. A multi-disciplinary approach is essential, and the role of AI modelling is to tie the necessary details together into a simulation of the brain that emulates intelligence.
This is the motivation for the book: to bring together the clues found across all the sciences concerned with the brain and gather them as a whole to inspire the next steps in AI research. This book does not crack the neural code, but I would like to inspire the next generation of scientists and engineers: to a) make them aware of the challenge, b) provide a basis for how this challenge can be solved, and c) collect the relevant clues scattered across multiple papers and books, reducing the barriers of the scientific Tower of Babel. I believe we are at a critical point in the history of AI: the AI community is growing at a fast pace, largely preoccupied with narrow AI applications, while neuroscientific data is accumulating rapidly thanks to ever-evolving non-destructive measuring technology (see Appendix). The research cited in this book from across multiple disciplines should provide a convenient starting point for AI researchers building HLAI.
This book is aimed at anyone interested in where AI goes next: AI researchers, neuroscientists, students, research budget holders, venture capitalists, and the interested general reader. It is divided into four parts:
- Part Zero provides level setting for definitions of what I mean by AI, machine learning, deep learning, etc. It also provides a little history, to put our current state of knowledge in AI, and the challenges ahead, into historical context.
- Part One pulls from the neuroscience research literature useful pieces of information that should inform future AI models. It is all about facts and is evidence-based.
- Part Two covers the theories (perhaps more accurately called hypotheses) on how the brain works, pursued first by neuroscientists and then by AI researchers.
- Part Three is speculative in nature, pulling threads from earlier chapters and trying to make sense of them holistically, in a logical manner, to guide research towards HLAI. For AI researchers building HLAI systems I offer a series of test questions to compare against your model. This test is designed to follow more closely our knowledge of the human brain and what is likely to be needed in an HLAI model.
Finally, note that there may be relevant research I have missed or statements that later prove incorrect; do write to me at the email below in either case. Some material may become dated over time, but that is normal; it is how science progresses. My aim is to draw a line in the sand and say: this is where we are at the time of writing; as our understanding progresses, we will redraw the line closer to our goal.
Note, there are essentially three purposes to the use of references here:
- To describe, or quote directly from a relevant published work.
- To refer the reader to a paper, typically a review, or a book, that can help further explain the topic under discussion.
- In some cases, to attribute historical priority. However, this is not a work of history and is not meant to be comprehensive or exhaustive in respect of prior art.
Finally, please leave a review on Amazon and tell me what you liked or did not like in this book.
Dr E Michael Azoff
Newark, UK
email: ema@hmnlvl.ai
book web site: www.hmnlvl.ai