A wonderful conference on integrative health was held last week in the state of Utah, but it laid bare an important finding: knowledge of our health needs is very thin on the ground. This is not to say that people do not know there is a problem with regard to health. On this point there is unanimity; change is clearly warranted and in the wind.
There are two major problems with regard to knowledge, or the lack thereof, in this case. First, no one knows specifically what needs to be done, scientifically speaking. The relationships between health and the contributors to and detractors from general well-being are not widely understood. More about that later. The second issue was considered in more detail in the sessions. Something needs to be done socially, politically, and economically, but at what level? With what specific objectives? With what level of reform in mind?
Given my MBA background, experience in venture capital and entrepreneurship, and generally conservative background, it may come as some surprise that I have made a lifelong study of revolution. The nature and requirements of revolution have always interested me, dating to the time when, as a child, I would visit my local library with my little red wagon, loading up books to read, if not devour in their entirety. "Squanto and the Pilgrims" as an essay on revolution and upheaval? His fellow natives probably would have thought so. I have always been attracted to the stories of revolution -- economic, social, political, religious, cultural. This is one factor, to be sure, in my original choice to go into venture capital early in my career. I was fortunate to have studied such developments at UCSD under the oversight of the politicians and economists of the International Relations and Pacific Studies program there.
Revolution represents the clash of interests in the raw. In a revolution, a new order takes hold, ushering out the authority structures and the economic fabric of the old order and replacing them with new ones. We Americans understand such a development well, at least in our early history as a state, given that we served to open a revolutionary, worldwide Pandora's box in that regard.
Reform was an underlying theme of the integrative health conference at the University of Utah, but there was little confidence about how such a development should take place -- whether it should happen within current relationships or whether a new order was in order. There were voices for either extreme and for various scenarios in between.
This sets up a problematic couplet: the scientific realities of the problem must be better understood even while political and economic alignments are in flux. Participants in the program often referred to the repetitive nature of their meetings. An underlying theme, in the estimation of participants and leaders alike, was that prior meetings had reiterated themes of reform several times over with no resolution and negligible progress. The question presented over and over was how to break such a cycle -- how and where to exert some form of leadership that would make a difference, particularly with regard to scientific possibilities.
How to address two moving targets? We must reduce them to one, an intrinsically stable one. Based on my understanding of successful reform efforts, even revolutions, there is a clear path for the movement, though a potentially painful one. It is this: as nature is an intractable force, the source of our bounty and our woes, we should make an intractable commitment to follow what nature tells us needs to be done. We have oh, so many examples of nature and scientific reality being ignored in the interests of political and economic prerogatives that are harmful and short-sighted. As nature expresses itself to us in various forms of data, the acquisition and use of data in its various forms should be paramount in our efforts for reform and change.
The point here is that we must be willing to follow the path laid out by nature, scientific findings, and ongoing collections of data wherever it leads. Only such a commitment will generate longstanding results. Only such a commitment will bring health in its various forms and economic and ecological stability. All of our educational, cultural, and civic resources need to be brought to the fore in support of such a commitment, which will surely have far-reaching economic and political implications. Some commercial opportunities will present themselves and others will wither and die. Organizations, public and private, that have come to support the underlying conditions for disease and imbalance will lose influence and will need to convert their missions to ones better aligned with the health-oriented needs of the people.
Thus there will be a need for entrepreneurship and leadership in both public and private sectors. In Utah, we have a stellar history in this regard, most particularly in the work of David Eccles as continued by his son, Marriner. This history is directly applicable to our situation in Utah in our time. For a time, Marriner stood alone. In the end, he changed the world that we now enjoy in many ways and on many levels. I will record a presentation later today that outlines this history from memory, particularly as it applies to leadership in public/private reform, even bordering on the revolutionary. Utahns see the Eccles name everywhere they look, but there is little knowledge of the history. Take hold of your seats; the Eccles story is "Lord of the Rings"-esque.
Monday, April 15, 2013
Friday, April 12, 2013
Good questions, bad questions, good thoughts, bad thoughts
Last evening, Glenda Christianens, who is, as it turns out, the "Good Nurse, Glenda," provided a great example of healthful behavior at the integrative health conference at the University of Utah: the "cancel, cancel" technique. When a non-serviceable idea crosses your mind, you use this phrase, understandable to us all in this techno-centric world, to get rid of it.
What runs through our minds is so important to our health; this is now widely understood. We harbor concerns for our health, and this alone is problematic. I was on a glorious walk on the Utah bench this morning, thinking about thoughts (yes, for us cognitive-psychology neophytes, that is indeed what metacognition is). I was walking past the offices of basically all things medical in the research park, which reminded me of all of the things we have to worry about in our health, particularly when things get creaky and stiff.
Should we worry about our heart, the condition of our veins and arteries, and so on? To be sure, we need data, and in some cases only data will do. On a walk, however, we are better off thinking about the kinds of things that walks bring to mind, things that do not come to mind while driving a car. This may be anything, of course, but we are best off thinking of the loves of our lives and the people and conditions we are grateful for.
Since on a walk you must decide where to go, your mind is well-occupied with questions such as whether you can make it to the top of the hill in the fifteen minutes you have left. If you are more fortunate than that and have all the time in the world, you may wonder whether you can make the crest of that hill to sit under a tree and doze for a while. Such are questions well worthy of consideration.
Worrying about your heart, your arteries, whether you have some kind of noxious problem, without data -- bad idea. Get data and get better.
And, by the way, what a glorious day we are having already.
Regards,
Ken
The singularity that matters
If you have been following certain developments over the last half-century or so, you will have seen a pattern. The pattern relates to computing, to the capabilities of electronic computers in particular.
The issue started with the work of Alonzo Church and Alan Turing, who independently and almost simultaneously published the conceptual groundings for modern computing, before electronic computing systems were developed. In that era, a "computer" was a person. "Computer" was a job title, like forester or shop-keeper. "Computing" served as the basis for a technical career, one of adding up numbers for the most part, without a machine to assist in the process; there were some mechanical devices for very big jobs. In our day, such a job description may seem odd. Nonetheless, it is an artifact of an earlier time, and much of what we do now to earn a living will surely seem strange to our own descendants.
The idea brought forth by Church and Turing, respectively from Princeton and Cambridge, was couched in the complex, arcane languages of mathematics and logic. The concept is simple, though: as long as a logical process could loop through itself multiple times -- repeat itself -- the result could represent the kind of reasoning and decision-making that we as humans carry out. Such repetition would allow for a cycle of reasoning, error-correcting, and learning. In this, computers could be used to model much of human behavior.
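To make the loop-and-repeat idea concrete, here is a minimal sketch in modern Python -- my own illustration, not anything drawn from Church's or Turing's papers. A short process repeats itself, measures its own error, and corrects its answer on each pass, a small-scale version of the reason-correct-learn cycle described above.

```python
# A toy version of the "loop, check, correct" cycle: estimate a square root
# by repeating a simple step until the remaining error is small enough.
def estimate_square_root(x, tolerance=1e-9):
    guess = max(x, 1.0)                        # start with a crude guess
    while abs(guess * guess - x) > tolerance:  # how wrong is the current answer?
        guess = (guess + x / guess) / 2        # correct it and go around again
    return guess

print(estimate_square_root(2))  # approximately 1.41421356...
```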
During and after World War II, when computing machines were brought into use, such guidelines showed great promise. Indeed, they have been embraced by all sectors of society in incremental steps. Computerization of computational tasks started slowly, with the famous "glass temples" of mainframe systems owned by large organizations. Smaller but powerful new systems became available over time, in many stages, and the world is now awash with computing devices of one kind or another, a testament to their usefulness for many things.
The predicted culmination of the development mentioned earlier, the one begun by the works of Church and Turing, is referred to as the "Singularity". Computer scientists claim that computing devices will eventually take over the task of thinking, releasing us from much of this function, once they demonstrate their cognitive, or thought-producing, superiority over humans. Looking forward to the Singularity has been a tradition among many computer scientists since Turing first mentioned the possibility of such a development. Computers are not merely judged superior at collating, sorting, and facilitating communications on a large scale, a point that is not in dispute. Their potential, according to proponents of the eventual Singularity, is to take over even the production of new thoughts, going where we have not gone.
Whether or not the career of being a human "computer" was fulfilling, we are not going back there, to be sure. Much has been written about a shift in reasoning power from people to machines, a theme of many artistic works, movies, and literature; the Singularity is a staple of much science fiction. As with predictions of the end of the world, there have been many forecasts of the Singularity, of when it will come and what its implications will be. Concern for and promotion of the Singularity has been the basis of much federal research and development funding, particularly in the defense arena. If the end of the world -- or at least of someone's version of the world -- is to be ushered in by computers with unbounded power, at least we can rest assured that they will be ours. Indeed, some Singularity predictions are wrapped up in catastrophic finality, the end of the world as precipitated by rogue computing devices of various kinds. In such fictional accounts, machines often act contrary to the interests of their creators once they establish a level of superiority in thought and in control. By these accounts, tragedy faces humanity to the degree that we are not ready for the Singularity.
It is interesting, of course, that government and other interests are almost frantically working to bring the Singularity about in spite of such risks.
Singularity predictions, many of which are long past due, tend to extend ever further toward the horizon. The Singularity was considered imminent, only a year or two away, in the 1950s, and through the 1970s, the 1980s, and beyond, predictions pushed the date ever further into the future. By the end of the twentieth century, predictions had been extended to 2050 or so. In our day, it is difficult to know when the Singularity is expected, as predictions are less often provided with associated dates. The temptation for Singularity prophets to make explicit predictions is surely there, but with popular knowledge being as ubiquitous as it is in our time, it is surely more difficult to back out of predictions that clearly did not happen.
Nonetheless, the implications of the Singularity are presented as increasingly stark and frightening, even as the predicted dates recede over the horizon or disappear altogether. This is not to say that automation is not beneficial, or that some aspects of intelligence in computing devices are not available and desirable. The problem is the idea that computers will out-think us. Proponents of artificial intelligence say that we are creating machines that are inherently, evolutionarily superior to us. As a result, we will become, relatively speaking, stupid.
As can readily be discerned, there is much evidence that the human race does not need a Singularity to behave stupidly. Individually and severally, we can generate more than a few irrational thoughts and counterproductive behaviors. Funding an impending Singularity would stand alongside other well-documented acts of insanity of which we are aware.
Rather than trying to build machines to out-think us, couldn't we concentrate on leveraging the power of computers to use existing knowledge in improved ways? We have pretty good brains. We have stores of knowledge in various forms that lie unused, to the detriment of us all. Why don't we work to utilize, if not maximize, the fruits of human creative output and thought? In this vein, let us consider another potential form of singularity. How about a singularity in which all of the best knowledge, supported and guided by a viable flow of data, was available for our evaluation and use? What if an idea, once documented and verified, were immediately available when it was needed?
By this, I don't mean just the knowledge that happens to be available at a particular time and place. That wouldn't be much of a singularity, now, would it? We should at least take a page from the Singularity-ists. We should think big. Why not a form of singularity in which the knowledge rushes to the scene once the context of a problem or situation presents itself? In the impending "Internet of things," data will be available from many new sources. Health is an important part of this. What if you were to get a blood test, or weigh yourself, or order a meal at a restaurant -- an impending event with potentially important consequences? Would you want to do the right thing, the smart thing, with the results? If we are truly able to arrange for a singularity of knowledge of this kind, such knowledge would also incorporate the best-tasting, most desirable options, given your condition and preferences. Now we are talking! Taste THAT ice cream (this will make sense a little later).
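To make this a bit more concrete, here is a toy sketch in Python of what knowledge "rushing to the scene" might look like when an event arrives -- a lab result or a dessert order. Every rule, threshold, and menu item below is invented purely for illustration; it shows the pattern of context-triggered lookup, not medical advice or a real system.

```python
# Toy sketch: an incoming event (a lab result, a menu) triggers a lookup of
# relevant guidance, filtered through the person's own preferences.
# All rules, thresholds, and menu items are invented for illustration.

GUIDANCE = {
    "reading_high": "Worth discussing follow-up testing with your clinician.",
    "reading_normal": "No action needed; keep doing what you're doing.",
}

def on_lab_result(fasting_glucose_mg_dl, cutoff=126):
    # Hypothetical cutoff, used only to show the lookup pattern.
    key = "reading_high" if fasting_glucose_mg_dl >= cutoff else "reading_normal"
    return GUIDANCE[key]

def on_dessert_order(menu, likes):
    # Rank options: favorites first, then the lightest of the rest.
    return sorted(menu, key=lambda item: (item["name"] not in likes, item["calories"]))

menu = [
    {"name": "berry sorbet", "calories": 180},
    {"name": "triple fudge sundae", "calories": 720},
    {"name": "fresh fruit plate", "calories": 150},
]

print(on_lab_result(135))
print([item["name"] for item in on_dessert_order(menu, likes={"berry sorbet"})])
```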
Is this possible? Our message is that it is. Would it be the "death" of commerce? Yes, much of it, as there is a great deal of profiteering going on. There will be substantial opportunities for purveyors of the "good stuff," however. Commerce is based on providing "goods" and services, not "bads" and services. Disease, for example, is bad; there is nothing good about it. Knowledge can and will get rid of it.
Now, of course, the question arises as to whether the Singularity that such computer scientists and other futurists are so rhapsodic about can or will occur. Computers did, after all, just do well in the Jeopardy! challenge. I do not have the energy or time to take on that question right now, but I have one observation. About ten years ago I was at an artificial intelligence conference, a defense-sponsored affair, where exhibitors asked me to type a question into one of their systems. I put in, "Is the ice cream good?" Several of the people were eating ice cream at the time. The exhibitors dutifully told me that the machine could not taste the ice cream. At the time, and even now, I thought the advice was more than a little condescending.
I happen to know that there are electronic taste and smell sensors on the market that are more than able to discern the chemical and sensory characteristics of basically anything, including ice cream. I think that is beside the point, however. I find it hard to believe that computers will be able to replace us in the thinking department as long as it is our senses and our priorities that hold sway. Can we at least work on achieving the singularity of which I write before the Singularity, if we need such a thing at all?
Thursday, April 11, 2013
Classification is what we do
By our nature, to the degree that we are knowledgeable, we classify. This can be easily demonstrated by trying not to classify. Say something, anything. Gossip. Say that so-and-so is a *%^##@! Ah hah! You have classified. Talk about how you are going to get to or from your home after this session. By selecting a route and establishing a plan, you have classified. With each choice, you are faced with subsequent choices; each option leads to others. Understanding such relationships is the beginning of knowledge. The more detailed such knowledge becomes, the more nuanced and useful it is.
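As a small illustration of how each choice opens onto further choices, here is a toy sketch in Python of the route-home example laid out as a classification tree. The places and options are, of course, made up.

```python
# A route home, classified: each choice opens a set of further choices.
route_choices = {
    "leave the conference": {
        "on foot": {"along the creek trail": {}, "through the research park": {}},
        "by bus": {"the express line": {}, "the local line": {}},
    }
}

def list_paths(tree, path=()):
    """Print every chain of choices the classification allows."""
    if not tree:                       # no further choices: a complete plan
        print(" -> ".join(path))
    for choice, further in tree.items():
        list_paths(further, path + (choice,))

list_paths(route_choices)
```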
My mentor, Dr. Dell Allen, made a presentation at a local university at my request. Though retired, he had devoted much of his professorial career to the understanding of classification. The presentation was titled "Classification, the Superscience". His point was that unless and until you classify something, you really don't understand it. He asked the roughly ninety students present how many of them had learned about classification, about the taxonomies of their subject areas. The answer was that none of them had learned anything about this important task, either in their major areas of study or in their general studies.
Without classification, you have chaos. I created a small series of guidelines on the subject, available on the web. The thing is, knowledge and classification are very closely related. As an example, scientists who study thought processes use the example of a person's entry into a fast-food restaurant. How is it that one knows how to make use of such a facility? Once you step inside, how do you know to go to the counter? How do you know which part of the counter to go up to? How do you know how an order is made? How do you know what to do once you have ordered? There are many hidden issues so deeply embedded in the situation that you don't even give them a thought.
Expertise in large part lies in the ability to recognize a situation in the first place. Place a novice in a mountain meadow and he or she will notice conditions on a very elementary level compared to a forester or a plant ecologist or a geologist. Each of them sees a very different situation, though standing in the same meadow. Understanding of the situation in each case is couched in classification.
This is not just a passive issue, but an active one as well. The degree to which you can comprehend the implications of a situation you find yourself in defines your usefulness as a caregiver. In some cases, recognized patterns may demand certain actions as a means of averting danger and disaster. Conditions in a mountain meadow may presage an earthquake, a fire, many kinds of weather-related threats, snakebites, altercations and battles, and biological risks. By the same token, they may mean nothing more than the outline of a beautiful summer day, to be enjoyed and remembered.
The classification challenge is firmly embedded in questions of health and disease. Classification occurs at untold levels of importance, including the need to classify whoever is doing the classifying. In many health questionnaires, people are asked, "Has a doctor told you that you have diabetes?" Much of the importance of the answer can only be understood by dissecting further the nature of the doctor in question. Of course, you will want to know whether it is a medical doctor or not. This is important to know, a consideration independent of many other facts. It may not always be the case that a medical doctor is the best source of medical classification, as a PhD virologist or immunologist may have more valid insights in a particular case. If the person was a medical doctor, was that person a specialist or a general practitioner? Was he or she particularly well-versed in diabetes and its related conditions? What data was used? Importantly, what data was not used? Was the doctor acting in a clinical capacity, or was the comment made "at the opera" or in the context of another kind of social event?
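Those follow-up questions can themselves be captured in a classification. Here is a small sketch in Python that records them as fields of a structured record; the field names are my own illustration, not drawn from any standard questionnaire.

```python
# The "a doctor told me I have diabetes" answer, unpacked into the questions
# raised above. Field names are illustrative only, not a standard instrument.
from dataclasses import dataclass
from typing import List

@dataclass
class ReportedDiagnosis:
    condition: str               # e.g., "diabetes"
    source_is_physician: bool    # a medical doctor, or another kind of "doctor"?
    source_specialty: str        # "endocrinology", "general practice", "PhD immunologist", ...
    data_considered: List[str]   # which measurements backed the statement
    data_missing: List[str]      # importantly, what data was not used
    clinical_setting: bool       # said in clinic, or "at the opera"?

report = ReportedDiagnosis(
    condition="diabetes",
    source_is_physician=True,
    source_specialty="general practice",
    data_considered=["fasting glucose", "HbA1c"],
    data_missing=["oral glucose tolerance test"],
    clinical_setting=True,
)
print(report)
```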
This is how knowledge, at least deep knowledge, is gained: by continually burrowing down into greater levels of detail and specificity. At the intersection of science and society, there needs to be an effective match between such representations of reality and reality itself. If this is not the case, we will continue to bump up against physical and natural realities, to our discomfort and danger. This leads to the question of regulation. There are those who say that regulation is inappropriate and counterproductive. What is called regulation in a political context may be an ugly affair, but that is because it is done poorly, uninformed by the detailed requirements of our situation.
Ask any scientist, particularly the naturalists and biologists. They will say that life itself and regulation are not too far afield from one another. We need to make a better go of it. Father Adam is reported in the beginning of Genesis to have been an ardent classifier, taking it upon himself to provide names for every living thing. An understanding of the ever-more-detailed task of classification lays bare an important factor: the need to classify not just names, but everything else that is of concern to the natural world.