Saturday, January 28, 2006

Belief -- Why We Know What We Know

I had a fascinating conversation recently regarding belief. It got me to thinking about the nature of belief and faith and knowledge.

I am very interested in knowledge: what we know, how we know it, and how we model it. But knowledge has, for humans, an emotional component. That is, while you know what you know (to an infinite level of recursion, by the way), you also have an emotional response to that knowledge, which we call belief.

Why do I classify belief as an emotional response? Because belief is the emotional response that defends, in a way, our model of the universe. Those things we know very well we come to believe.

Of course belief can be mistaken and it can be changed. We may believe something very strongly, but cease to believe it when presented with strong evidence to the contrary. So, for many years people believed there were only 8 planets in the solar system. Then in 1930 people had to change their belief to accommodate a 9th planet. Many took quite some time to make that adjustment.

We are actually watching a similar belief transition now as people come to terms with the new knowledge that there is a 10th planet beyond Pluto.

But the conversation I was involved in centered on belief in God. The person I was in discourse with wanted to know why anyone would not simply choose to believe. He referred to Pascal's Wager, an argument attributed to Pascal justifying belief in the Christian God on the basis of probability. Essentially, the argument states that one should believe because if one is correct there is an infinite reward to follow, whereas if one is wrong there is no harm in having believed during life.
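
To make the arithmetic behind the Wager concrete, here is a toy expected-value sketch in Python. The probability and the finite payoffs are invented for illustration; the Wager only needs the reward to be infinite:

    # Toy expected-value reading of Pascal's Wager.
    # The probability p and the finite payoffs are illustrative assumptions.
    p = 1e-6                      # any nonzero chance that God exists
    INFINITE = float("inf")

    def expected_value(believe):
        if believe:
            # infinite reward if right, small finite cost if wrong
            return p * INFINITE + (1 - p) * (-1)
        # small finite gain if right about disbelief, infinite loss if wrong
        return (1 - p) * 1 + p * (-INFINITE)

    print(expected_value(True))   # inf  -> the Wager says: believe
    print(expected_value(False))  # -inf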

The core problem with this argument, of course, is that we do not CHOOSE to believe. We are drawn to belief, we are coerced to belief by evidence. If I do not find sufficient evidence to believe in a god, I cannot choose to believe in one anyway. In contrast, if I do find sufficient evidence to believe, I cannot choose to NOT believe. Belief is not a choice, it is a coercion.

Tim Holt's website http://www.philosophyofreligion.info/pascalswager.html has a good analysis of the objections to Pascal's Wager of which this is the third.

Belief may be coercive, but it also IS plastic. As evidence accumulates in one direction or another, a belief can shift. For complex issues and decisions this is typically not a binary switch but rather a continuum that takes the person from a strong belief into the area of doubt, then ultimately to an acceptance of the new belief.

But, again, belief is coercive. The accumulation of evidence will ultimately drive one to a conclusion and a belief, even if that is in opposition to a previous belief. How many parents honestly believe their child cannot have done THAT (whatever THAT is), only to finally have to acknowledge that the child DID do THAT in the face of more and more incontrovertible evidence?

However, there IS that issue of evidence. While one is coerced to belief by an accumulation of evidence, one's INTERPRETATION of evidence IS a matter of choice. Well, to some degree, anyway. For some things, the evidence is fairly incontrovertible. Or, as Holmes said (borrowing from Thoreau), "circumstantial evidence may be virtually convincing, as when a trout is found in the milk."

But bigger issues typically have much less clear evidence. So where one person may see evidence of a god in a grain of sand, another may see evidence of complex forces driven by random events. And there is an emotional component to this interpretation as well. We often WANT to believe in particular positions which DOES color our interpretations of evidence even when we try to minimize this.

So where does this leave us? Well, it probably means there is very little chance that people with completely polarized beliefs will be able to convince each other based on current bodies of evidence. And since interpretation IS colored by other emotional aspects, such as a need or a desire to believe a particular outcome, it is very difficult to move from one point of view to another based solely on improved interpretations. Rather, more and more dramatic evidence is needed. And it is seldom forthcoming.

Comments?
Regards
Bill

Sunday, January 22, 2006

Medical Costs of the Future...Lower and Lower

The following is an essay from a book I co-authored in 2000 on the impact of technology on future life (Critical Mass, MC2 Publishers, 2000).
This particular essay is on how my writing partner and I came to feel that medical costs would peak, then fall, during the 21st century.


Computers and Medicine: Hippocratic Art becomes Marketing Artifice


People of our generation (so called Baby Boomers) were raised to believe several things about doctors and medicine. First, doctors were members of some sort of priesthood. They knew things we could never know. They had education above and beyond that of normal people. We were encouraged to believe that doctors knew what was best for us and knew things about us we could never know.

Second, medical knowledge was the hidden knowledge of the inner sanctum, the runes of health often cast in terms the average person could not even begin to decipher. Body parts, diseases, conditions, bacteria, and viruses were named in Latin or Greek and often seemed deliberately obscure. We were not meant to read the prescriptions scrawled by our doctors, or to ask too many questions.

Third, they gave you a sucker if you were a good patient.

The attitude is changing, though. Rising medical costs, coupled with a new understanding of disease and health, are changing perceptions about the world of medicine. The cost of medical treatment rises at about 3 to 5 times the rate of inflation. Nothing seems to reduce it. Hospitals have tried to rein in costs. Their ideas, from reducing staff and shortening stays to timesharing equipment among facilities, do not keep pace. In desperation, hospitals form conglomerates and try to become managed care associations, which tends to reduce the service to their patients while failing to contain costs. Managed care tries to rein in doctor fees, but the cost in care quality is high. They even stop giving away the suckers.
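
A quick compounding sketch shows why a growth rate several times inflation is so corrosive. The 3% and 12% rates below are assumptions for illustration, not measured figures:

    # Medical costs compounding at ~4x an assumed 3% general inflation rate.
    medical_cost, general_price = 100.0, 100.0
    for year in range(20):
        medical_cost *= 1.12     # assumed 12% annual medical inflation
        general_price *= 1.03    # assumed 3% general inflation
    print(round(medical_cost), round(general_price))  # ~965 vs ~181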

Finally, they try marketing. It begins to work, but in unforeseen ways.

We have a national (perhaps cultural) taboo against marketing medicine. We don’t like our doctors hawking cures to people so desperate they’ll try anything. We used to see it; we called it Patent Medicine and it earned a well-deserved reputation for deception and danger. When anyone could set up shop on a street corner and sing the praises of assorted elixirs to cure everything from smallpox to social diseases, the population was at risk and had no recourse when the so-called cures failed. As a nation, we stepped in and began to regulate medicine and drugs as a way to protect ourselves from the charlatans along the road.

Yet, we have found ways to market prescription medicine and even medical care again. We see commercials about new medicines (carefully avoiding any statement of what they treat) and we see billboards from hospitals regarding the level of obstetric and neo-natal and cardiac care they provide (again, carefully couched to avoid any claims or any discussion of costs). Every sports magazine, women’s magazine, and health magazine on the newsstand carries at least a few full page advertisements for prescription medicines, complete with information about uses and side effects that used to be available only to doctors.

We see it because our perception of medicine has changed. The only effective and ethical way to mass-market medicine was to make it more approachable, more understandable. It became necessary to involve patients in their own care to keep costs down. To do that some of the mysticism had to move aside.

Concomitant with the public’s increasing awareness of health and medical issues, computer technology made it possible to consumerize many aspects of medicine. Blood pressure machines began appearing in stores. Electronic thermometers and blood sugar monitors emerged as over the counter devices. Even digital stethoscopes are available to the casual buyer.

While medicine is becoming more approachable and understandable, we still want more. So, what other role does high technology, particularly information technology, play in this move to consumerize medicine?

Start with the hugely expensive MRI and CAT machines. We see amazing images produced by these non-invasive scanners. We can see the Visible Human Project, which is available for anyone to view via the Internet, but only possible with computed tomography. Otherwise you would just have thousands of flat photos looking like sliced liver. Because of computed tomography, you can see a three dimensional image that can be rotated in all three axes and zoomed or probed with virtual views. We see TV shows like The Operation with incredibly high tech medical gear. Microsurgery, aided by motion control computers like those used in cinema, allows surgeons to perform absolute miracles.
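
The underlying data structure is simple to sketch: stack the 2D slices into a 3D array and any viewing plane becomes an array slice. Here is a minimal NumPy illustration with made-up dimensions (real scanners reconstruct the volume from projections, not from finished photos):

    import numpy as np

    slices = [np.random.rand(256, 256) for _ in range(180)]  # 180 axial slices
    volume = np.stack(slices, axis=0)                        # (180, 256, 256) voxel grid

    axial    = volume[90, :, :]   # one of the flat "sliced liver" images
    coronal  = volume[:, 128, :]  # a plane no single photo ever captured
    sagittal = volume[:, :, 128]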

These devices, while deeply dependent on computer technology, are only the beginning. They are large, complex, and may require a new priesthood of computer-savvy technicians. In addition, they require trained analysts to read the results. What if that were not so?

For centuries, medicine has been viewed as more of an art than a science. Hippocrates’ famous oath, in fact, describes medicine as an art, not a science. The original oath (circa 300 BCE) required young doctors to care for their teachers and teachers’ families and to teach other doctors at no cost. That was probably the first clause to go. It also prohibited surgery (leaving that to the barbers of the time as they had the blades) and abortion. The AMA has changed the original oath just a bit.

Despite the changes in the oath, medicine remains as much art as science. Or, at best, a science in the service of an art.

For example:
Diagnosticians follow complex and intuitive chains of reasoning, chains they are often at a loss to explain. Arriving at the correct diagnosis in the shortest possible series of steps is still considered one of medicine’s finest skills, and students are tested on it constantly. It is obviously important to diagnose the correct problem in the minimum of time, since a failure to do so can leave someone very dead. Some doctors have an almost mystical skill at this, and they command very high respect in their profession.

Surgeons constantly talk about the delicacy of their operations. “The hands of a surgeon” is a phrase that captures the shamanistic nature of the awe in which surgeons are often held.

And, of course, the old joke goes that you and I, when confronted with a perplexing but solvable problem in our area of expertise, say that, after all, “it’s not rocket science!”

Rocket scientists say, “it’s not brain surgery!”

Brain surgeons say, “actually, it is brain surgery!”

Theirs, it is believed, is the most complicated and delicate of the surgeon’s art. Typical neurosurgeons know this too. They know it all too well. Very big heads in neurosurgery.

Understandable, really. But, perhaps, on the edge of changing.

The advent of the computer has begun a change in this view of medicine. With tremendous amounts of computing power available it is possible to better image the interior of humans and to better simulate the reactions taking place there. Art implies a certain lack of certainty and precision; science implies the opposite. The art of medicine is finally becoming a true science of medicine as our understanding of biochemistry matures. It matures because we can visualize molecules and simulate reactions using computers.

Let us look at a few examples.

Pharmaceutical companies have always talked about the discovery of drugs, a reflection of the brute force approach used throughout that industry. The process is changing, however, and now they talk about a drug and its design. Pharmaceutical companies now seek to design molecules, not discover drugs, because we now understand health and disease as macro-level manifestations of molecular processes. To get to this point they have employed information technology at its highest level.

Visualization on computers now allows researchers to see biological processes in simulation. Incredibly complex mathematics used to derive and predict the chemical forces that bind, shape, attract, and repel one organic molecule from another are available as 3D models. Virtual reality with force feedback allows designers to feel those forces as they pick up, twist, bend, and shape chemical compounds into novel and useful forms.

Animal testing moves into the past as our drugs become so tailored to the human condition that the only way to accurately assess their efficacy, other than with human trials, is to simulate their reactions in a human body. Animal models are still great for many purposes, but we can already see the writing on the wall. With the focus now on a genetic basis for disease it will no longer be as useful to test a drug on a rat, a rabbit, or even a chimp. The new class of pharmaceuticals that will emerge in the next few decades will be computer generated and so tailored to humans that animal testing will be useless.

Classic biology has left the realm of the taxonomic and descended to the garage level of the mechanic. Over the last two decades, aided by information technology, biology has begun to finally flourish as a predictive and engineered science. More and more high school biology classes are dissecting frogs virtually rather than using real frogs. Technology created for the film industry to show the subtle changes of a body as it disappears has been re-targeted for use in medical schools to replace the dissection of human cadavers.

Practical medicine is becoming more mechanistic as we unlock the molecular basis of disease, reproduction, and life. As it becomes more mechanistic, it becomes more amenable to being reflected in cyberspace. That is, as our understanding of our physiology drills down toward the lowest molecular nature of life, the information becomes more amenable to digitization. Once digitized, the information that describes the processes that make us ill or make us well can be recognized, manipulated, and administered by computers in far more precise ways than possible today.

Medicine will continue, for a while, to increase in cost. Pharmaceuticals will be costly to develop and the intellectual property represented by them will be hotly protected. But, not for long.

When the computer simulations become accurate enough and when the processes are understood well enough, you will begin to see the movement of medical treatment out of the hospital, outpatient surgery center, and doctor’s office to Wal-Mart. You will begin to see computer kiosks that use expert systems to diagnose symptoms described by the customer. Note we said customer, not patient. At this level, you cease being a patient (that has always seemed an interesting term) and become a consumer and customer.
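
A rule-based diagnostic kiosk is easy to caricature in code. The rules and symptoms below are invented toys; a real system would need thousands of validated rules:

    # Minimal rule-based "kiosk" sketch (illustrative rules only).
    RULES = {
        frozenset(["fever", "cough", "aches"]): "influenza-like illness",
        frozenset(["sneezing", "runny nose"]): "common cold",
        frozenset(["rash", "itching"]): "contact dermatitis",
    }

    def diagnose(symptoms):
        # Pick the rule whose symptom set best overlaps the customer's report.
        best = max(RULES, key=lambda rule: len(rule & symptoms))
        if not (best & symptoms):
            return "no match; see a doctor"
        return RULES[best]

    print(diagnose({"fever", "cough"}))   # influenza-like illness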

The psychology of taking your medical advice from a machine is considerably different from that of taking such advice from a human practitioner. The writer Larry Niven has referred to autodocs in many of his stories and novels. These are machines you slide into, which perform examinations, diagnosis, and finally treatment. Not unlike the booth at the Levi’s shop that measures you for custom made jeans.

It may be a while before we have whole-body autodocs, but you can expect that the computerization of medicine will migrate treatment for many ailments into kiosks, into your home, and into the mass market. Who needs a pharmacist when a computer can take the prescription, synthesize the molecules needed (the medicine), and dispense directly to you? Who needs a doctor when a computer can take the history, perform the tests, diagnose the ailment, and write the prescription?

Who needs either of them when this can be performed at home? For example, a toilet which will perform many chemical tests on your urine and feces is being developed. The analysis of our eliminations has a long and proud history in medicine (short of surgery, how else could a doctor get anything out of you that had been through the loop, so to speak? Former food was convenient.). Such a toilet will be able to analyze and diagnose a wide variety of conditions. It will recommend treatment, which may be automatically included in your next grocery order since your refrigerator will talk to your toilet.

There is little reason to think that automated treatment is out of the question. Antibiotics could be administered by your bed linens while you sleep or by your clothing the next day. Antibacterial fabric is already a reality. Antibiotic fabric is not too far behind. Nicotine and arthritis patches have made it quite acceptable to have medication dispensed to you through skin absorption.

Our understanding of the effects of particular molecules is increasing steadily. With appropriate analysis of the customer (at the DNA level) and with sufficient computing power it will be possible to have a pharmaceutical synthesizer in your home to catalyze and synthesize molecules tailored to you and your condition. Security is a consideration, but it will be much harder to get an autodoc to dispense unneeded barbiturates or amphetamines than it is to corrupt a human doctor.

A while back, Bill had Lasik surgery to improve his eyes. The procedure was painless, quick, and almost completely controlled by computer. It is not too hard to imagine that, in a few years, a machine at Sears would be available to perform a similar surgery sans ophthalmologist. The computer system necessary to measure, calculate, track, and focus a laser to perform such an operation is not beyond imagination at all.

It is not hard to imagine how other forms of simple surgery can be computerized and consumerized. Almost any skin surgery could be done with special lasers and software. Wart, mole, and cyst removal, melanoma diagnosis, even some liposuction and vein stripping could be made fully safe, cheap, and computerized. Non-invasive surgical techniques will continue to improve with the growing sophistication of computer targeted and focused ultrasound, x-ray, microwave, and other forms of energy.

An elegant solution to bacterial infections is the possibility of using bacteriophages instead of antibiotics. These are viruses which are parasites of bacteria. Specific phages attack specific bacteria. Once the bacteria are dead, the phages die off. The two organisms evolved in tandem with each other, and bacteria are not likely to develop immunities to phages; they have already adapted as much as necessary. The phage for a particular strain of bacteria is typically found with the bacteria. In the human being, this is usually in the feces.

A computerized toilet could recognize particular disease bacteria, then isolate and amplify the phage associated with that bacteria. The toilet could then insert, inject, or transduce the phage back into the human through injections or patches.

This would form an extremely elegant solution to certain common bacterial infections where today we use broad spectrum antibiotics that prompt many bacteria to form resistant strains. The bacteria die and the phages then die off as well. The intelligent toilet is needed to make it work.
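
In software terms the toilet's job is a detection step followed by a lookup, since each phage is matched to its bacterium. A hypothetical sketch (the organism names and the table are invented for illustration):

    # Hypothetical bacterium -> phage lookup for the smart toilet.
    PHAGE_FOR = {
        "Shigella dysenteriae": "Shigella-specific phage isolate",
        "E. coli O157": "coliphage isolate",
    }

    def prescribe(detected):
        # Amplify only the phages matched to the bacteria actually found.
        return [PHAGE_FOR[b] for b in detected if b in PHAGE_FOR]

    print(prescribe(["E. coli O157"]))  # ['coliphage isolate']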

The result is that, in the not too distant future, medicine will become cheap. Sure, we will need controls to make it safe and effective. There will be failures and there will be quackery, but there is a lot of that out there now. We have to move forward to improve medical care, and we can do this if the specialized knowledge of doctors is captured and digitized, the specialized facilities of pharmaceutical companies are miniaturized and digitized, and there is a growing demand for cheaper care.

The driving forces are technological innovation, the miniaturization of synthesis, and computer control of those processes. The computer control issue is well in hand. Much of the synthesis technology can be taken from the robotic analysis technology NASA uses on Mars probes. Nanotechnological advances will make chemical factories small and cheap. Research into all of these is in full swing now and we can expect dramatic developments in the future. We can also expect strong resistance.

Pharmaceutical companies are entrenched. Doctors are entrenched. Pharmacists are entrenched. They will all resist, as will the public at first. Until the bills roll in.

Where will autodocs catch on first? Probably in third world countries that can’t afford medical care today, much less in the future if the costs keep rising. Provide a village with a device that can accurately diagnose and treat a variety of ailments and injuries for virtually free and third world governments will adopt it with a vengeance. Automated medicine could save millions in developing nations.

Second world nations that are struggling under the effects of cold war brain drains will probably adopt it next (they may even be a major part of the development). China, North Korea, and Russia are prime candidates: they have, or can have, the technological skill to develop automated medicine, and the lack of skilled practitioners of traditional medicine to prompt a demand.

Ultimately, as our own groaning medical system reaches the limit of its ability to cope with an aging boomer population (aging into its hundred and twenties because of advances in traditional medicine), we will adopt it here as well. We’ll demand it.

What of the concern that pharmaceutical companies will still be needed to manufacture and distribute the medicines, and might still charge inflated prices for them? Pharmaceutical companies rely on intellectual property rights. The molecules they produce and the processes they use to produce them are the secrets they hold dear. But the ability of computer-controlled nanotechnology to manipulate individual atoms into molecular combinations on demand will make those secrets very fleeting.

Once the processes are automated at the atomic or molecular level (rather than the macro reagent level at which they operate now), there will be almost nothing to stop any particular drug from being taken apart and then re-assembled. Slight, irrelevant changes in the molecule may be the way small firms get around patent issues. Some, in remote, seriously ill corners of the globe, will just copy the drug and licenses be damned.

Think of it. Even now, if any village in South Africa (with 10% of its population HIV positive) could synthesize as much AZT as it needed on demand, would patent rights stop them? Not likely.

Sound outrageous? Consider the following:
Cloning of Dolly the sheep (and now many other animals) was accomplished by using delicate but well understood and very replicable techniques. The key turned out to be the application of a minute amount of electricity at just the right point in the process. It is all documented and the equipment and chemicals needed are available and not expensive. Farmers are already looking into it.

DNA strands can now be analyzed by computer circuitry using a special chip that has DNA molecules attached to silicon transistors on the chip. Such a device will soon be used routinely to analyze samples for infections. A home model is already planned.

Scientific American recently published a method by which amateur scientists could invoke the Polymerase Chain Reaction (PCR) in their homes. PCR is a process used in genetic research for rapidly reproducing DNA segments into quantities sufficient for analysis and use. Its invention in the 1980s made DNA research tremendously more effective by reducing the wait times for DNA reactions by factors of thousands. Now, you can do this in your kitchen.
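
The power of PCR is just the arithmetic of doubling. An idealized sketch (real per-cycle efficiency runs somewhat below a perfect 2x):

    # Idealized PCR amplification: the target segment doubles every cycle.
    copies = 1
    for cycle in range(30):       # a typical run is roughly 25-35 cycles
        copies *= 2
    print(copies)                 # 1,073,741,824 copies from one template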

We talk about high-tech medicine in this country all the time, but we’ve seen nothing yet. Most medicine is still the purview of doctors who listen to patients, make an educated guess at the problem, perform a test or two to confirm the hypothesis, and write a prescription. The “perform a test or two” step is so seriously discouraged by HMO’s and Managed Care insurance (and by National Health in other countries) that it is often skipped. None of that is difficult to automate.

When it comes to hospital care, we need to be careful about assuming too much regarding its importance. Most people do not want medical care in hospitals. They tolerate it when nothing else will do. Much hospital care is devoted to easing discomfort in lieu of anything else to be done. People accept that because the discomfort is distracting and depressing.

What people want is medical repair. They want the problem to be corrected and to get on with their lives. Yes, we do want the quick fix. But, if the quick fix actually is a quick fix and the problem actually is corrected, what’s the problem? Influenza used to send millions to their beds and hundreds of thousands to their graves. Now it is a minor annoyance for most in this country.

So, while hospice care for terminal patients will continue in those situations we cannot correct (and there will be many), do not expect people to complain too much about not having hospitals to take care of them when a quick trip to KMart for their cancer cure or to Target for their cardiac repair kit will get them back on track with their life. Automated medicine will offer that.
Medicine and medical care will become very cheap in the future because both are highly amenable to technological innovation and computerization. It is at exactly this intimate level that cyberspace’s digital reflection of our society will be at its sharpest.

This essay is copyright Bill McDaniel and Pat McGrew...used with permission

Thursday, January 19, 2006

Pattern Recognition - A New Approach

Pattern recognition is re-emerging as one of the most important aspects of Artificial Intelligence and neurological research. What has recently been determined is that a significant portion of neurological processing is actually pattern matching. Even what we think of as deductive reasoning is beginning to be seen as a process involving a tremendous amount of pattern matching in its initial phase.

This re-raises the question of machine learning algorithms and computational structures such as neural nets. Currently, neural nets have been getting a bad rap...everyone seems to think that Support Vector Machines, AdaBoost, and other more recently derived algorithms are superior.

However, the simplicity of construction coupled with the complexity of ability that classic neural nets provide strikes me as a powerful place to step off from in search of computational models of effective neurological processes. Or, as I have said before, I do not solve differential equations when I catch a ball (nor even quadratic equations).

Classic neural nets learn to recognize and segregate patterns through altered strengths of the connections between simplistic computing elements (neurons that perform a simple sigmoid transfer or a discrete threshold transform).
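
For readers who have not seen one, such a neuron is only a few lines of Python. The weights below are arbitrary examples; a real net learns them from data:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias, discrete=False):
        # Weighted sum of inputs, then a sigmoid transfer or a hard threshold.
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        if discrete:
            return 1.0 if activation > 0 else 0.0
        return sigmoid(activation)

    print(neuron([0.5, -1.0], [0.8, 0.3], bias=0.1))                 # ~0.55
    print(neuron([0.5, -1.0], [0.8, 0.3], bias=0.1, discrete=True))  # 1.0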

But while neural net based pattern matching relies on strengthening and weakening connection potentials between these artificial neurons, there are other emergent effects which are not fed back into the pattern matching process.

Essentially, what I am proposing here is that, as in continuous equations, patterns of data have multiple derivatives, slopes if you will, that reflect overlying patterns which more advanced techniques can take into account.

Neural nets do this to some degree as it is, but multi-layer nets do it better at the pure data level. The hidden layer of modern neural nets effectively captures the first derivative of the data pattern in an ordered set of connection weights. The connections encode the rate of change of the incoming feature data for different inputs, with different variations in those rates of change encoding different patterns.

Please note, this is all somewhat metaphorical. Discussing an encoding of data patterns as derivatives or rates of change is not precisely accurate. However, Fourier transforms of image data do a similar reduction, treating a collection of data as a distribution of frequencies.

If we carry the analogy a step further, then, we could talk about the second derivative of the data pattern which would be the first derivative of the neural net's resulting pattern. From THAT pattern we could begin to derive deeper recognition of internal structures to our original data.

So, if we had one neural net stacked above another (metaphorically speaking) we could have it watch for patterns in the lower level net. These patterns would arise from the patterns it detected in the data. Allowing the second level net to classify states in the first net provides a deeper, more refined set of nuances to the classification.
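
Here is a minimal NumPy sketch of that arrangement: net B never sees the raw data, only net A's hidden activations. The layer sizes and random weights are placeholders, and training (backprop for A, then for B on A's states) is omitted for brevity:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w):
        return 1.0 / (1.0 + np.exp(-(x @ w)))   # sigmoid layer

    # Net A: raw features -> hidden pattern -> first-level classification
    wa1, wa2 = rng.normal(size=(10, 6)), rng.normal(size=(6, 3))
    # Net B: reads net A's hidden state -> second-order classification
    wb1, wb2 = rng.normal(size=(6, 4)), rng.normal(size=(4, 2))

    x = rng.normal(size=(1, 10))                 # one raw input vector
    hidden_a = layer(x, wa1)                     # net A's internal pattern
    class_a = layer(hidden_a, wa2)               # net A's own classification
    class_b = layer(layer(hidden_a, wb1), wb2)   # net B classifies net A's state

    print(class_a, class_b)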

My recent experiments with this idea show a great deal of promise. However, the key remains to identify specific features of the first net to pass on to the second. The math associated with this effort is still somewhat obscure.

But the idea of stacked nets watching each other's patterns is similar to the way our brain's networks watch each other. By feeding back patterns of connections and classifications about the primary networks, the secondary (and perhaps tertiary) networks can provide non-linear effects that act as perturbing noise in the process of pattern recognition.

Note that I am not talking about extra hidden layers in a network. It has been largely shown that extra hidden layers beyond about 2 do not add any benefit to the processing of the net. I suspect this is actually because we do not construct the nets with sufficient complexity.

However, I am talking about discontiguous nets, one being driven by extracted features from the other, which is itself driven by extracted features from a document or text corpus. The benefit of recognizing deeper patterns would be nuanced results, such as recognition of trend data and subtleties within the structure of the original data.

This is a bit different from the other, more common use of the term stacked neural nets. In that usage, the same data (text for example) is passed to multiple nets that are trained to seek specific types of first level patterns. While the nets feed some information to the next net to receive the data, they are essentially parallel and are all generating this first derivative I spoke of. In my model, the higher order network is completely unaware of the actual original text or data; rather, it examines patterns of neuronal connections that it sees in the lower level net, without any knowledge of how they came to be.

Comments?
Bill

Monday, January 16, 2006

A Rant -- When Things are Too Hard

This is a rant

And, I hate to say it, but it is a rant about how things are done here in the bay area.

In particular, it is a rant about shopping for food

WHAT is it about grocery stores here in San Jose?

The shelves are NEVER stocked well. They are not fronted. My WIFE had to get on the floor and reach deep into the back of the shelves to find peanut butter, corn, pork and beans, and many other canned goods.

This is not just about the Safeway on San Carlos where we shopped today...Albertson's, Zanotto's, even Trader Joe's looks like this a lot.

It wasn't just today...although a manager told me that a bunch of people just did not show up to work last night to stock and front shelves. But I have seen all these stores look like this many times over the two years I have lived here.

I mentioned that I saw a lot of people wandering around...why couldn't some of them be facing shelves, making shopping more convenient for consumers? He just said they were busy with other things.

I suggested to him that, since I had been approached by two people asking for money for a school just at the front door, he should perhaps pay them to face the shelves. He explained that he couldn't just have "day laborers" do it. Then he walked away.

Well, perhaps he couldn't. Perhaps it really is more complicated than that to keep shelves stocked and faced. Perhaps it is just too hard a job for this manager to make shopping easy and convenient for the customers.

As someone from another place let me make a definitive statement. The customer service in this town sucks! Stores are poorly stocked, have few varieties and are absolutely not interested in making shopping better for the consumer.

Infrastructure is terrible...carts that wobble, tile floors that are cracked, shelf tags that are missing, goods that are mislabelled. From a retail standpoint, this town needs an enema!

I don't understand it. Why is it acceptable here to have poor shopping, poor selection, poor equipment, and poor infrastructure? The staff are friendly enough. They seem willing to help, but the management and owners, the rule setters and decision makers, act as if customers are their last concern.

I am not just speaking of food stores either...clothing stores, music stores, hardware stores, furniture stores, restaurants...all seem to consider customers an interruption and a problem.

Well, enough of that. Managers, Store Owners and Operators, clean up your acts and clean up your stores. Stop making excuses and serve your customers.

Bill

Friday, January 13, 2006

Churchill Club Visit and Tech trends

I joined the Churchill Club here recently and attended my first meeting last night, which covered the top tech trends for 2006. This was an interesting panel discussion/debate (not formal) with some audience voting on agreement with the panelists.

I was struck by two specific things:
1) Each panel member (VC's of course) had specific vertical interests they wanted to mention. That makes sense, of course. The other panelists would agree or disagree, but everyone had a specific area of technological interest.

2) They did not actually integrate well. At one point Panelist A says X will happen. Panelist B says Y will happen, but A disagrees...even though Y's occurrence is what will make X possible.

There was a lot of talk about new bio science trends as the place to make tech plays. There was also discussion about the overall plateauing of software and computing as an industry.

There was quite a lot of discussion and surprise at the idea that leaders in the computing field should recently have become interested in the biotech field.

None of them seemed to get that the next big COMPUTING revolution will be in BIOLOGICAL computation...The leveraging of the new, mechanistic understanding of biological processes at the molecular level to provide a surge in technological innovation in the computing arena.

While the presentation was very interesting, I sensed that the panel and even many of the attendees still didn't get it. The truly amazing thing that is beginning to happen in this century is NOT the advent of whole new technologies like genomics and proteomics, or the incredible advances in speed, performance, and miniaturization of electronics.

The thing to watch is the collapse of barriers between what have been disparate disciplines. Information Theory, as applied to Biological Engineering which feeds back into Materials Science to drive new Information technologies which will expand the biological horizons etc, etc, etc.

The synthesis of all these disparate fields...biology, materials, electronics, photonics, chemistry, and radiology...the synthesis of these is where the next huge leap in our technological civilization will be coming from.

Watch this space! You think things are strange now, with kids getting pierced and tattooed, clothes that are starting to phone home for cleaning instructions, and locale-based technologies allowing the tracking of goods, people, and information?

Wait until the tattoos are wirelessly linked to the net to provide virtual services such as data and voice communication via clothes that reshape themselves based on the contextual environment of the wearer.

It gets weird from here!

And the VC's and Technologists of the world need to look at the synthesis, not the thesis.

My joining the Churchill Club was a good idea though, and I intend to take in many of their meetings...but I urge companies, economists, predictors, and VCs trying to find the next big trend to look further out, in a sense. The speed of change is about to be so great that 'further out' will still be only 36-48 months, as it has been for these folks for decades. However, the AMOUNT of change in that time period will be far greater than ever before.

So a good idea today will be an expired idea in 6 months...having already made its mark and brought in its revenue. To get ahead of THAT curve, people need to look at the emergent consequences of the exponentially increasing rate of change and figure out how what appear to be diverse technological trends will converge into products, services, and business models in a very short time.

Comments?
Bill

Monday, January 09, 2006

Singularities and Growth Curves

I am in the middle of reading Ray Kurzweil's The Singularity is Near. This book fairly obviously inspired the previous one I mentioned, Accelerando.

The recognition of accelerating returns is certainly the most important concept in the book. As far back as 1996 I began talking, in my presentations to electronic document professionals, about how change was coming quickly, but that the second derivative, the acceleration of that change, was far more important than the speed of change itself. It is a hard concept for people to understand.
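
A toy simulation makes the distinction plain: under accelerating returns, the growth rate itself grows. All of the numbers here are invented for illustration:

    # First derivative: capability grows. Second derivative: the rate grows too.
    capability, rate = 1.0, 0.05         # start at an assumed 5% growth per year
    for year in range(50):
        capability *= (1 + rate)         # the change itself
        rate *= 1.03                     # the pace of change accelerating
    print(f"capability x{capability:.0f}, final growth rate {rate:.1%}")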

Change always seems to be coming too quickly. The very idea that it may be coming ever MORE quickly is frightening to many. However, Kurzweil does an excellent job of explaining how accelerating returns really means that change and evolutionary shift IS occurring at an ever increasing pace.

The conclusion he draws, that people will merge with their technology to produce a new evolutionary step in the track of humanity, is debatable. It is extremely difficult for most to conceive of such radical changes while still having the result be 'human'.

His arguments, however, are very cogent. The nature of humanity is certainly not tied up in our limbs, our physical form, or even our ways of interacting with the world. Is this not precisely what most major religions are teaching? That these physical forms do not matter? In many respects I see the same concepts in Singularity, but driven by human mediated forces, not supernatural ones.

That makes a great deal of sense to me. We have, as a species, always attempted to transcend ourselves as we are now. Suddenly, Singularity is saying, we are going to be able to do that in ways that will be so self evident as to be undeniable. Which leaves me with a thought.

There is a debate going on between Evolutionists and Intelligent Design supporters. The gist of ID's argument is that life as we know it, particularly human life, is so complex it MUST have been designed. It could not have emerged through even billions of years of random change and evolutionary pressure.

I do not agree...I see evolution working all the time and I believe it has sufficiently strong scientific evidence to be completely convincing as the driving architect of what we see.

BUT, if Kurzweil is correct, then perhaps the NEXT thing we see is complex, human, intelligent life that IS designed by an intelligence...US!

The ID folks may just have the characters in their passion play mixed up!

Scary
Bill