The world of artificial cognitive systems and machine learning is moving at a fast pace and is becoming a major international research challenge. New techniques are being developed in this field that will transform many aspects of our day-to-day lives and work. The SIMBAD project, backed by the EU with EUR 1.6 million in financing, is now looking at some of the ways that this research may be put to use.
SIMBAD aims to develop fully the new technology emerging within the pattern recognition and machine learning fields, and is researching the use of 'similarity information' rather than the traditional feature-based approach, says the project's scientific coordinator, Professor Marcello Pelillo. Society is increasingly building complex machines such as robots to meet many of our everyday needs, he says. Artificial cognitive systems (ACSs) are now becoming a top international research priority, and accordingly the European Commission has made this area one of the seven key research areas that Europe must develop in order to become one of the world leaders in next-generation information and communication technologies (ICT).
Fruitful research into this area will lead to the development of many tools that will have a great social and economic impact on the EU, he goes on to say. Vehicle control, control of communication networks, medical diagnostics, and human-machine interaction are just a few of the areas that will benefit. There will also be many economic benefits that will boost European competitiveness.
'In addition to the applications mentioned above, we are devoting a substantial effort towards tackling two large-scale biomedical imaging applications. We are contributing towards providing effective, advanced techniques to assist in the diagnosis of renal cell carcinoma and diagnosis of major psychoses such as schizophrenia and bipolar disorder,' the Università Ca' Foscari, Venezia researcher explains. 'These sorts of problems are not amenable to being tackled using traditional machine learning techniques due to the difficulty of deriving suitable feature-based descriptions.'
A successful outcome to these research applications would prove that SIMBAD's approach is highly suitable for use in biomedicine, and this would be a good springboard for further research in this area, he says. Using pattern recognition techniques in medicine and health services would bring huge improvements to healthcare industries throughout the EU and open up many opportunities for health industry technology.
'A successful outcome of our experimentation would provide evidence as to the practical applicability of our approach in biomedicine, thereby fostering further research along the lines set up by SIMBAD, both at the methodological and at the practical level,' Professor Pelillo remarks. 'This would potentially open up new opportunities in health and disease management and bring radical improvements to the quality and efficiency of our healthcare systems,' he notes. 'The field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms, and with the use of these regularities to take actions such as classifying data into different categories, with a view to endowing artificial systems with the ability to improve their own performance in the light of new external stimuli.'
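To make the contrast between feature-based and similarity-based recognition concrete, here is a minimal Python sketch (my own illustration, not code from SIMBAD; the function name, toy similarity scores and labels are assumptions) of classification driven purely by pairwise similarity scores, with no feature vectors at all:

```python
# Minimal sketch (not SIMBAD's actual code): classify an object using only a
# precomputed pairwise similarity matrix instead of feature vectors.
import numpy as np

def nearest_neighbor_label(similarities, labels, query_index):
    """Assign the label of the most similar labelled object.

    similarities : (n, n) array, similarities[i, j] = similarity of objects i and j
    labels       : list of n labels, with None for unlabelled objects
    query_index  : index of the object to classify
    """
    best_label, best_sim = None, -np.inf
    for j, label in enumerate(labels):
        if j == query_index or label is None:
            continue
        if similarities[query_index, j] > best_sim:
            best_sim, best_label = similarities[query_index, j], label
    return best_label

# Toy example: 4 objects compared pairwise (e.g. by an expert-defined
# similarity score); object 3 is unlabelled and gets classified.
S = np.array([[1.0, 0.9, 0.2, 0.8],
              [0.9, 1.0, 0.3, 0.7],
              [0.2, 0.3, 1.0, 0.1],
              [0.8, 0.7, 0.1, 1.0]])
y = ["A", "A", "B", None]
print(nearest_neighbor_label(S, y, 3))  # -> "A"
```

A real similarity-based system would of course work with large, possibly non-metric similarity matrices, but the principle is the same: the data enter the learner only through how similar objects are to one another.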
Six partners from five European countries (Italy, the Netherlands, Portugal, Switzerland and the UK) are participating in SIMBAD — an international consortium that reflects an international field of research.
'The competences needed to achieve our goals cannot be found on a local or national level. The European dimension of the project guarantees a critical mass of researchers with complementary experiences and expertise, thereby boosting the likelihood of success,' Professor Pelillo says. 'Also, the potential impact of this research goes well beyond the national scale and the EU will benefit from presenting itself as an active player on the world scene of artificial cognitive systems, which is largely dominated by the USA.'
Available: [Online] http://ec.europa.eu/research/headlines/news/article_08_11_28_en.html
Thursday, November 27, 2008
The Future of the Web
Written by David Tow
By 2020 Web 2.0, the Social Web, will have developed into a complex multimedia interweaving of ideas, knowledge and social commentary, connecting over three billion people on the planet. In combination with the Semantic Web 3.0, it will automatically analyse, interpret and create new forms of layered knowledge beyond the world of today's blogs, wikis, social networks and virtual worlds.
Over the next decade these early trends will continue to evolve at a frenetic pace: social networks will both fragment and unify, catering to special groups, but also overlapping and creating within the next few years a global social network of networks. At the same time, social networks and virtual worlds such as Second Life will converge, along with powerful simulation technology, to create the first realistic virtual realities.
By 2030, Web 3.0, advanced versions of the Semantic Web, will have made many important contributions to new knowledge through network relationships, logical inference and artificial intelligence. It will be powered by a seamless computational mesh, enveloping and connecting most human life, and will encompass all facets of our social and business lives, always on and available to manage every need. It will connect not only most of the 8 billion individuals on the planet, but also link with other biological and artificial life forms, as well as countless everyday electronically controlled objects. The Semantic Web and Intelligent Web will have combined.
By 2040 Web 4.0, the Intelligent Web, will be ubiquitous, able to interact with the repository of almost all available knowledge of human civilisation, past and present, digitally coded and archived for automatic retrieval and analysis. Web 4.0 will mark the beginning of a new intelligent entity: a sentient and cognisant multidimensional network, powered not only by billions of ultra-fast tiny processors and unlimited communications bandwidth, but by the first quantum computers, capable of processing trillions of operations in parallel. Human intelligence will have conjoined with advanced forms of artificial intelligence, creating a higher or meta level of knowledge processing. This will be essential for supporting the complex decision-making and problem-solving capacity required for civilisation's future progress.
By 2050 Web 5.0, the Wise Web, will have emerged, embedding all biological and artificial life within a global cooperative intelligence. All critical decisions affecting our planet and life, including those relating to global warming, sharing vital resources and the ethical resolution of conflict and human rights, will be guided by this global intelligence.
The Wise Web will mark the beginning of a new threshold in human civilisation: a new form of global consciousness in which all life will be embedded.
Available: [Online] http://www.faxts.com/index.php?option=com_content&view=article&id=917:the-future-of-the-web-&catid=92:david-tow&Itemid=101
Tuesday, November 25, 2008
Computers That Can Think Like Humans
By seeking inspiration from the structure, dynamics, function, and behavior of the brain, the IBM-led cognitive computing research team aims to break the conventional programmable machine paradigm. This team led by Dr. Dharmendra Modha, manager of IBM's cognitive computing initiative, hopes to rival the brain's low power consumption and small size by using nanoscale devices for synapses and neurons. This technology stands to bring about entirely new computing architectures and programming paradigms.
Cognitive computing offers the promise of systems that can integrate and analyze vast amounts of data from many sources in the blink of an eye, allowing businesses or individuals to make rapid decisions in time to have a significant impact. For example, bankers must make split-second decisions based on constantly changing data that flows at an ever-dizzying rate. And in the business of monitoring the world's water supply, a network of sensors and actuators constantly records and reports metrics such as temperature, pressure, wave height, acoustics and ocean tide.
In such cases, making sense of all the input would be a Herculean task for one person, or even for 100. A cognitive computer, acting as a global brain, could quickly and accurately put together the disparate pieces of this complex puzzle and help people make good decisions rapidly. The end result of this research is to ubiquitously deploy computers imbued with a new intelligence that can integrate information from a variety of sensors and sources, deal with ambiguity, respond in a context-dependent way, learn over time and carry out pattern recognition to solve difficult problems based on perception, action and cognition in complex, real-world environments.
Available: [Online] http://www.enterpriser.in/India/Know_It/Computers_That_Can_Think_Like_Humans/551-95616-449.html
Friday, September 14, 2007
Virtual Worlds To Be Used As Artificial Intelligence Incubators
Written by Andy Chalk - 13 September 2007
Artificial intelligence research firm Novamente has created a new type of software that learns by interacting with avatars in virtual worlds such as Second Life.
The company said the AIs will start by being placed in virtual pets that grow more intelligent as they interact with their human owners, but would eventually be able to support more sophisticated creatures such as talking parrots and babies. Novamente said it had developed a "cognition engine" that would act as the thinking part of the AI, while "the virtual world provides the body," according to company founder Dr. Ben Goertzel.
Goertzel said both research and business reasons led Novamente to use virtual worlds for its AI development, and that there would likely be a strong market for "smart" virtual pets in them. "There are a lot of virtual pets out there and none of them have much intelligence," he said. "We have a pretty fully functioning animal brain right now and we are hooking it up to the different virtual worlds. There's not much doubt we can make really cool artificial animals."
He said virtual worlds would give the AIs a "relatively unsophisticated environment" in which to develop. "Robots have a lot of disadvantages, we have not solved all the problems of getting them to move around and see the world. It's a lot more practical to control virtual robots in simulated worlds than real robots," he said.
Using the AIs in gaming environments will be easier for the company in terms of acceptance, Goertzel said, because of the already commonplace use of limited AI in many videogames. While the script-based AI seen in most games won't compare to Novamente's effort, "The gaming industry has been one of the few places where AI has not been a dirty word," he said.
Novamente is scheduled to announce its first products at the San Jose Virtual Worlds conference in early October.
Available: [Online] http://www.escapistmagazine.com/news/view/76902-Virtual-Worlds-To-Be-Used-As-Artificial-Intelligence-Incubators
Tuesday, September 11, 2007
The Singularity Summit
Written by Ronald Bailey - 11 September 2007
By 2030, or by 2050 at the latest, will a super-smart artificial intelligence decide to keep humans around as pets? Will it instead choose to turn the entire Earth, including the messy organic bits like us, into computronium? Or is there a third alternative? These were some of the questions pondered by the 600 or so technosavants meeting in the Palace of Fine Arts at the second annual Singularity Summit this past weekend. The meeting was convened by the Singularity Institute for Artificial Intelligence. The Institute's chief goal is to make sure that whatever smarter-than-human artificial intelligence is eventually spawned by exponentially accelerating information technology will be friendly to humans.
What is the "Singularity?" As Eliezer Yudkowsky, cofounder of the Singularity Institute, explained, the idea was first propounded by mathematician and sci-fi writer Vernor Vinge in the 1970s. Vinge found it difficult to write about a future in which greater than human intelligence arose. Why? Because humanity would stand in relation to that intelligence as an ant does to us today. For Vinge it was impossible to imagine what kind of future such superintelligences might craft. Vinge analogized that future to black holes which are singularities surrounded by an event horizon past which outside observers simply cannot see. Once the Singularity occurs the future gets very, very weird. According to Yudkowsky, the Event Horizon school is just one of the three main schools of thought about the Singularity. The other two are the Accelerationist and the Intelligence Explosion schools.
The best-known Accelerationist is inventor Ray Kurzweil whose recent book The Singularity is Near: When Humans Transcend Biology (2005) lays out the case for how exponentially accelerating information technology will spark the Singularity before 2050. In Kurzweil's vision of the Singularity, AIs don't take over the world: Humans will have so augmented themselves with computer intelligence that essentially we transform ourselves into super-intelligent AIs.
Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment which compared current brain processing speeds with computer processing speeds. Speeded up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." While the three different schools of thought vary on details, Yudkowsky concluded, "They don't imply each other or require each other, but they support each other."
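Yudkowsky's "31 physical seconds" figure follows from simple arithmetic; a two-line check (my own, in Python):

```python
# Back-of-the-envelope check: a mind sped up a million-fold does one
# subjective year of thinking in roughly half a minute of wall-clock time.
seconds_per_year = 365.25 * 24 * 60 * 60   # about 31.6 million seconds
speedup = 1_000_000
print(seconds_per_year / speedup)          # ~31.6 seconds
```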
But is progress really accelerating? Google's director of research Peter Norvig cast some doubt on this claim. Norvig briefly looked at past technological forecasts and how they went wrong. For example, in Arthur C. Clarke's 1986 novel Songs of Distant Earth, set 1500 years in the future, the world was going to be destroyed as the sun went nova. So humanity had to cull through all the books ever written to decide which were good enough to scan and save for shipment in starships. Only a few billion pages could be stored and only one user at a time could search those pages to get an answer back in tens of seconds. Norvig pointed out that only 20 years later, Google saves tens of billions of pages and tens of thousands of users can query and get answers back in tenths of a second.
Nevertheless, Norvig pointed out that accelerating growth doesn't characterize all aspects of our world. For example, global GDP over the past century has been growing at a pretty steady rate (1.6 percent per year) and shows no sign of acceleration. Same thing for average life expectancy.
Accelerationist Ray Kurzweil replied that generally he is focusing on infotech when he's projecting accelerating progress. In addition, Kurzweil made the excellent point that GDP figures do not account for the fact that most products are vastly more capable than earlier ones. For example, an Apple II with 48K of RAM cost $2,275 in 1977 (about $7,800 in today's dollars). A new low-end iMac costs $1,149.
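The inflation adjustment quoted above checks out with a one-line calculation; the CPI figures below are approximate annual averages I have assumed for illustration, not numbers from the article:

```python
# Rough check of the "about $7,800 in today's dollars" figure.
cpi_1977 = 60.6     # assumed approximate US CPI-U annual average, 1977
cpi_2007 = 207.3    # assumed approximate US CPI-U annual average, 2007
apple_ii_price_1977 = 2275
print(apple_ii_price_1977 * cpi_2007 / cpi_1977)   # ~7,780, close to the quoted ~$7,800
```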
So how might one go about trying to create a super-intelligent AI anyway? Most of the AI savants at the Summit rejected any notion of a pure top-down approach in which programmers would specify every detail of the AI's programming. Relying on the one currently existing example of intelligence, another approach to creating an AI would be to map human brains and then instantiate them and their detailed processes in simulations. Marcos Guillen of Artificial Development is pursuing some aspects of this pathway by building CCortex. CCortex is a simulation of the human cortex modeling 20 billion neurons and 20 trillion connections.
As far as I could tell, many of the would-be progenitors of independent AIs at the Summit are concluding that the best way to create an AI is to rear one like one would rear a human child. "The only pathway is the way we walked ourselves," argued Sam Adams, who honchoed IBM's Joshua Blue Project. That project aimed to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself. Adams also argued that in order to learn one must balance superstition with forgetfulness. Adams defined superstitions as false patterns that need to be aggressively forgotten.
In a similar vein, Novamente's Ben Goertzel is working to create self-improving AI avatars and let them loose in virtual worlds like Second Life. They could be virtual babies or pets that the denizens of Second Life would want to play with and teach. They would have virtual bodies and senses that enable them to explore their worlds and to become socialized.
However, unlike real babies, these AI babies have an unlimited capacity for boosting their level of intelligence. Imagine an AI baby that developed super-intelligence but had the emotional and moral stability of a teenage boy. Given its self-improving super-intelligence, what would prevent such an AI from escaping the confines of its virtual world and moving into the Web? As just a taste of what might happen with a rogue AI on the Web, James Hughes, transhumanist and executive director of the Institute for Ethics and Emerging Technologies (IEET), pointed to the havoc currently being wreaked by the Storm worm. Storm has infected over 50 million computers and now has at its disposal more computing resources than 500 supercomputers. More disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems.
On the other hand, founder of Adaptive A.I., Peter Voss outlined the advantages that super smart AIs could offer humanity. AIs would significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world including the elimination of poverty in developing nations. Voss asked the conferees to imagine the effect that AIs equivalent to 100,000 Ph.D. scientists working on life extension and anti-aging research 24/7 would have. Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?)
Although Voss' views about AIs are relatively sunny, other participating technosavants weren't so sure. For example, computer scientist Stephen Omohundro argued that self-improving AIs would be ultra-rational economic agents, basically examples of homo economicus. Such AIs would exhibit four drives: efficiency, self-preservation, acquisition, and creativity. Regarding efficiency, AIs optimizing their resource use would turn to nanotechnology and virtualization wherever possible. Self-preservation involves an AI protecting its utility function from death, which it would do by building in redundancy and embedding itself in mutually defensive social relations. The drive to acquire more resources means that AIs could be dangerously competitive with humans. If Omohundro is right, there are good reasons to doubt that an AI that is a relentless utility maximizer will be friendly to less than perfectly efficient humanity. The drive for creativity enables AIs (and us) to explore new possibilities for transforming and satisfying our utility functions. Omohundro's solution for making AIs human-friendly? Try to teach AIs our highest human values, e.g., happiness, love, compassion, beauty and so forth.
On the question of AI morality, Institute for Molecular Manufacturing research fellow J. Storrs Hall did a modern take on Asimov's Three Laws of Robotics. Hall noted that Asimov's whole point was that the Laws were inadequate. So what ethical rules might be adequate for controlling future AIs? According to Hall, the problem of setting moral rules in stone can be illustrated by trying to imagine how the Code of Hammurabi might apply to the Enron scandal. (Actually, the Code did deal with commercial fraud. Rule 265: "If a herdsman, to whose care cattle or sheep have been entrusted, be guilty of fraud and make false returns of the natural increase, or sell them for money, then shall he be convicted and pay the owner ten times the loss.")
Eliezer Yudkowsky made a similar point when he asked us to imagine what values the ancient Greeks might have tried to instill in their AIs. Surely AIs incorporating ancient Greek values would have vetoed our civilization, which outlawed slavery and gave women rights.
Hall suggested that instead of fixed moral rules (which a super smart AI with access to its own source code could change later anyway) progenitors should try to inculcate something like a conscience into the AIs they foster. A conscience allows humans to extend and apply moral rules flexibly in new and different contexts. One rule of thumb that Hall would like to see implemented in AIs is: "Ideas should compete; bodies should cooperate." He also suggested that AIs (robots) should be open source. Hall said that his friend economist Robin Hanson pointed out to him that we already live with superhuman psychopaths—modern corporations—and we're not all dead. Part of what reins in corporations is transparency, e.g., the requirement that outsiders audit their books. Indeed, governments are also superhuman psychopaths, and generally the less transparent a government the more likely it is to commit atrocities. So the idea here is that the more AI source code is inspected, the more likely we are to trust the AIs. Finally, Hall suggested that AIs be instilled with the Boy Scout Law.
Given these big concerns about how super smart AIs might treat humanity, should they be created at all? Famously, former Sun Microsystems chief scientist Bill Joy declared that they are too dangerous and that we should relinquish the drive to create them. Charles Harper, senior vice president of the Templeton Foundation, suggested there was a "dilemma of power." The dilemma is that "our science and technology create new forms of power but our cultures and civilizations do not easily create parallel capacities of stewardship required to utilize newly created technological powers for benevolent uses and to restrain them from malevolent uses."
Actually, the arc of modern history strongly suggests that Harper's claim is wrong. More people than ever are wealthier and more educated and freer. Despite the tremendous toll of the 20th century, even social levels of violence per capita have been decreasing. We have been doing something more right than wrong as our technical powers have burgeoned. (It is worth noting that most of the 262 million people who died of violence in the 20th century died as the result of the actions of those superhuman psychopaths called governments using pretty crude technologies.)
Nevertheless, it is a reasonable question to ask if self-willed super smart AIs are too dangerous to unleash. The IEET's James Hughes suggested that one solution could be modeled on how the world currently handles nuclear weapons. If AIs are so dangerous, perhaps only governments should be allowed to own them. But this doesn't address the problem that governments themselves can be not-so-smart superhuman psychopaths. In addition, it seems unlikely that true human psychopaths (either individuals or collectives) can be permanently restrained from covertly creating AIs. If that is the case, we should all hope for and support the Singularity Institute's efforts to create friendly AI first.
When are AIs likely to arise? Ray Kurzweil, who joined the Summit by video link, predicted that computational power sufficient to simulate the human brain will be available on a laptop for $1000 in the next 15 years. Kurzweil believes that AIs will come into existence before 2030. Peter Voss was even more bullish, declaring, "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
If the Singularity Summiteers are right, buckle up and get ready for a really fast ride to the future. Let's hope their efforts will keep the ride from getting too rough.
Available: [Online] http://www.reason.com/news/show/122423.html
Please feel free to leave comments and questions.
Sunday, September 9, 2007
Public meeting will re-examine future of artificial intelligence
Written by Tom Abate - 7 September 2007
For decades, scientists and writers have imagined a future with walking, talking robots that could do everything from cooking your eggs to enslaving your planet. Trouble is, this fabled artificial intelligence has never happened. But this weekend, more than 700 scientists and tech industry leaders will gather at San Francisco's Palace of Fine Arts Theatre to plan for the day - still decades away - when computers start improving themselves without the approval of their former masters. Participants wonder whether this will yield the kindly Commander Data of "Star Trek: The Next Generation" fame or the mob of killer machines that attempted a world takeover in the movie "I, Robot."
"The history of technology tells us that inventions can be used or misused for good or evil. It could be that an Orwellian state could use this technology, or it could lead to a world with more accountability and transparency," said technology financier Peter Thiel, a principal backer of the two-day event called "The Singularity Summit: AI and the Future of Humanity."
The Singularity is the term used to describe this anticipated - or feared - day when machines become smart and perhaps ambitious enough to reprogram themselves. This weekend's gathering expands on a similar event held last year at Stanford University.
Thiel, who holds philosophy and law degrees from Stanford, co-founded PayPal and sold it to eBay in 2002 for $1.5 billion. In addition to running Clarium Capital Management, the hedge fund he founded in San Francisco, Thiel, 39, is using his wealth and celebrity to raise public awareness of the stakes surrounding artificial intelligence, or AI.
So, why should society take AI seriously now when its promoters have oversold it so far?
"The pendulum has swung too far, and people now underestimate it," said Thiel, arguing that recent advances in computer hardware, software, cognitive science and computer networking have created a technological primordial ooze. Moreover, these primitive AI systems have already made themselves useful parts of everyday life.
"Google, to a certain extent, you could characterize as a limited artificial intelligence," said Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence, the Palo Alto nonprofit group sponsoring the weekend summit with financial support from Thiel.
As Emerson explained, computer scientists now freely admit that they vastly underestimated the complexity of human intelligence when they first defined AI as an all-knowing, smarter-than-human system that could do everything from calculating the trajectories of planet-killing asteroids to composing an opera.
But while that general-purpose vision of AI has proved elusive, Emerson said technologists have gradually built and deployed special-purpose artificial assistants that we already take for granted.
In addition to search engines, he cited the way nonhuman characters in online video games react to their human counterparts. He noted that 10 years ago, Deep Blue, the IBM computer, whipped world chess champion Garry Kasparov. And he acknowledged advances in robotics exemplified by Stanley, the Volkswagen that drove itself across the Mojave Desert in 2005 using onboard sensors and software developed by computer scientists at Stanford.
Recognizing the confluence of these and other developments, a growing number of computer scientists, ethicists, industrialists and forward-thinkers believe that, far from being improbable, machine intelligence seems to be evolving, if that biological term is adaptable to human inventions. And given that a strictly laissez-faire approach could cause a potentially evil genie to escape from the bottle, perhaps some forethought could coax technology in a more beneficial direction.
"There is still time to affect this," said Thiel. "There are choices to be made." Toward this end, the Singularity Summit will bring together experts in computer science, nanotechnology and related fields, including:
-- MIT robotics Professor Rodney Brooks, co-founder and chief technology officer of iRobot, the Massachusetts firm that sells self-directed machines ranging from automated vacuum cleaners to mechanical scouts and pack mules for the military.
-- Peter Norvig, director of research at Google and author of "Artificial Intelligence: A Modern Approach," the foremost textbook in the field.
-- Nanotechnology investor Steve Jurvetson and Foresight Nanotech Institute co-founder Christine Peterson.
According to Emerson, it was Peterson, the nanotech maven, who put Thiel in touch with the Singularity Institute. Peterson, who holds regular salons for Silicon Valley thinkers, invited Thiel to dine with South Bay mathematician Eliezer Yudkowsky, who co-founded the Singularity Institute in 2000. Thiel and Yudkowsky hit it off, and soon the financier was helping support this AI ethics outfit.
"It will be a watershed event," Emerson said, imaging what it would be like as machines start acting independently, especially to add powers their human creators had not envisioned. "We're dealing with the most powerful process we know at this time, which is the power of human intelligence," he said. "Intelligence is not a tool. It is something that creates tools."
Thiel declined to disclose how much money he has put into the institute, but Emerson said it was sufficient to boost the organization's profile such as with the coming conference. Last year's event at Stanford was free and drew 1,300 people. It was headlined by inventor Ray Kurzweil, author of "The Singularity is Near," the unofficial bible of this movement. The Palace of Fine Arts Theatre seats 962, Emerson said, but given that this time the group is charging $50 for the two-day event, organizers would be thrilled to sell out.
But however big or little the splash they make, Thiel and his band of scientific soul-searchers want to call public attention to where technological currents seem to be heading, and to presage both the promise and peril that may lie ahead.
"There's a lot of debate about whether computers can think," Thiel said. "It's good for humans to use their brains to think about the future every once in a while, and that's what the Singularity Summit is about."
Available: [Online] http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/09/07/MNK8RUU7J.DTL&feed=rss.business
For decades, scientists and writers have imagined a future with walking, talking robots that could do everything from cooking your eggs to enslaving your planet. Trouble is, this fabled artificial intelligence has never happened. But this weekend, more than 700 scientists and tech industry leaders will gather at San Francisco's Palace of Fine Arts Theatre to plan for the day - still decades away - when computers start improving themselves without the approval of their former masters. Participants wonder whether this will yield the kindly Commander Data of "Star Trek: The Next Generation" fame or the mob of killer machines that attempted a world takeover in the movie "I, Robot."
"The history of technology tells us that inventions can be used or misused for good or evil. It could be that an Orwellian state could use this technology, or it could lead to a world with more accountability and transparency," said technology financier Peter Thiel, a principal backer of the two-day event called "The Singularity Summit: AI and the Future of Humanity."
The Singularity is the term used to describe this anticipated - or feared - day when machines become smart and perhaps ambitious enough to reprogram themselves. This weekend's gathering expands on a similar event held last year at Stanford University.
Thiel, who holds philosophy and law degrees from Stanford, co-founded PayPal and sold it to eBay in 2002 for $1.5 billion. In addition to running Clarium Capital Management, the hedge fund he founded in San Francisco, Thiel, 39, is using his wealth and celebrity to raise public awareness of the stakes surrounding artificial intelligence, or AI.
So, why should society take AI seriously now when its promoters have oversold it so far?
"The pendulum has swung too far, and people now underestimate it," said Thiel, arguing that recent advances in computer hardware, software, cognitive science and computer networking have created a technological primordial ooze. Moreover, these primitive AI systems have already made themselves useful parts of everyday life.
"Google, to a certain extent, you could characterize as a limited artificial intelligence," said Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence, the Palo Alto nonprofit group sponsoring the weekend summit with financial support from Thiel.
As Emerson explained, computer scientists now freely admit that they vastly underestimated the complexity of human intelligence when they first defined AI as an all-knowing, smarter-than-human system that could do everything from calculating the trajectories of planet-killing asteroids to composing an opera.
But while that general-purpose vision of AI has proved elusive, Emerson said technologists have gradually built and deployed special-purpose artificial assistants that we already take for granted.
In addition to search engines, he cited the way nonhuman characters in online video games react to their human counterparts. He noted that 10 years ago, Deep Blue, the IBM computer, whipped world chess champion Garry Kasparov. And he acknowledged advances in robotics exemplified by Stanley, the Volkswagen that drove itself across the Mojave Desert in 2005 using onboard sensors and software developed by computer scientists at Stanford.
Recognizing the confluence of these and other developments, a growing number of computer scientists, ethicists, industrialists and forward-thinkers believe that, far from being improbable, machine intelligence seems to be evolving, if that biological term is adaptable to human inventions. And given that a strictly laissez-faire approach could cause a potentially evil genie to escape from the bottle, perhaps some forethought could coax technology in a more beneficial direction.
"There is still time to affect this," said Thiel. "There are choices to be made." Toward this end, the Singularity Summit will bring together experts in computer science, nanotechnology and related fields, including:
-- MIT robotics Professor Rodney Brooks, co-founder and chief technology officer of iRobot, the Massachusetts firm that sells self-directed machines ranging from automated vacuum cleaners to mechanical scouts and pack mules for the military.
-- Peter Norvig, director of research at Google and author of "Artificial Intelligence: A Modern Approach," the foremost textbook in the field.
-- Nanotechnology investor Steve Jurvetson and Foresight Nanotech Institute co-founder Christine Peterson.
According to Emerson, it was Peterson, the nanotech maven, who put Thiel in touch with the Singularity Institute. Peterson, who holds regular salons for Silicon Valley thinkers, invited Thiel to dine with South Bay mathematician Eliezer Yudkowsky, who co-founded the Singularity Institute in 2000. Thiel and Yudkowsky hit it off, and soon the financier was helping support this AI ethics outfit.
"It will be a watershed event," Emerson said, imaging what it would be like as machines start acting independently, especially to add powers their human creators had not envisioned. "We're dealing with the most powerful process we know at this time, which is the power of human intelligence," he said. "Intelligence is not a tool. It is something that creates tools."
Thiel declined to disclose how much money he has put into the institute, but Emerson said it was sufficient to raise the organization's profile, as with the coming conference. Last year's event at Stanford was free and drew 1,300 people. It was headlined by inventor Ray Kurzweil, author of "The Singularity is Near," the unofficial bible of this movement. The Palace of Fine Arts Theatre seats 962, Emerson said, but given that this time the group is charging $50 for the two-day event, organizers would be thrilled to sell out.
But however big or little the splash they make, Thiel and his band of scientific soul-searchers want to call public attention to where technological currents seem to be heading, and to presage both the promise and peril that may lie ahead.
"There's a lot of debate about whether computers can think," Thiel said. "It's good for humans to use their brains to think about the future every once in a while, and that's what the Singularity Summit is about."
Available: [Online] http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/09/07/MNK8RUU7J.DTL&feed=rss.business
Thursday, September 6, 2007
What is AI?
Written by John McCarthy / September 2007
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Q. Is intelligence a single thing so that one can ask a yes or no question ``Is this machine intelligent or not?''?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered ``somewhat intelligent''.
Q. Isn't AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Q. What about IQ? Do computer programs have IQs?
A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child's age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, ``digit span'' is trivial for even extremely limited computers.
However, some of the problems on IQ tests are useful challenges for AI.
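To see how trivial ``digit span'' is for a machine, here is a minimal Python sketch (not from McCarthy's text; the sequence length and variable names are arbitrary): the program simply stores the sequence and repeats it back perfectly, however long it is.

import random

digits = [random.randint(0, 9) for _ in range(1000)]  # far longer than any human digit span
recalled = list(digits)                               # "memorizing" is just storing
print(recalled == digits)                             # True: perfect recall, no effort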
Q. What about other comparisons between human and computer intelligence?
A. Arthur R. Jensen [Jen98], a leading researcher in human intelligence, suggests ``as a heuristic hypothesis'' that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to ``quantitative biochemical and physiological conditions''. I see them as speed, short term memory, and the ability to form accurate and retrievable long term memories.
Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.
Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don't develop till they are teenagers may be in, and some abilities possessed by two year olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.
Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I'm not sure anyone is serious about imitating all of them.
Q. What is the Turing test?
A. Alan Turing's 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human and the machine would try to fool the observer.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.
Daniel Dennett's book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e. with restrictions on the observer's knowledge of AI and the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.
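The imitation game described above is easy to mock up in code. The following toy harness is only a hypothetical sketch, not a serious Turing test: the ``machine'' contestant is a deliberately dumb canned-response function of my own invention, and the observer and the hidden human are both people at the keyboard.

import random

def machine_reply(question):
    """Stand-in 'machine' contestant: canned, evasive answers (hypothetical)."""
    return random.choice(["Interesting question.",
                          "I would rather not say.",
                          "Why do you ask?"])

def human_reply(question):
    return input(f"(hidden human) {question}\n> ")

def imitation_game(rounds=3):
    # Randomly hide which channel is the machine and which is the human.
    channels = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        channels = {"A": human_reply, "B": machine_reply}
    for _ in range(rounds):
        question = input("Observer, ask a question: ")
        for name, responder in channels.items():
            print(f"{name}: {responder(question)}")
    guess = input("Which channel is the machine (A/B)? ").strip().upper()
    truth = "A" if channels["A"] is machine_reply else "B"
    print("Correct!" if guess == truth else f"No, the machine was {truth}.")

if __name__ == "__main__":
    imitation_game()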
Q. Does AI aim at human-level intelligence?
A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.
Q. How far is AI from reaching human-level intelligence? When will it happen?
A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human level intelligence will be achieved.
Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than the computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent in making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.
Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.
Q. What about making a ``child machine'' that could improve by reading and by learning from experience?
A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven't yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.
Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
A. I think yes, but we aren't yet at a level of AI at which this process can begin.
Q. What about chess?
A. Alexander Kronrod, a Russian AI researcher, said ``Chess is the Drosophila of AI.'' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
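McCarthy's point that chess programs substitute ``large amounts of computation for understanding'' can be illustrated with a game far simpler than chess. The sketch below is not any real chess engine; it runs plain negamax search on a toy take-1-2-or-3-stones game (my own choice of example). It plays perfectly without ever using the simple winning pattern a human would spot, and the count of searched positions grows rapidly with the pile size.

# Brute-force negamax on a toy game: players alternately take 1-3 stones,
# and whoever takes the last stone wins.  A human notices that multiples of
# 4 are losing positions; the search below "knows" nothing of the sort.

def negamax(stones, nodes):
    """Return +1 if the player to move can force a win, else -1."""
    nodes[0] += 1
    if stones == 0:
        return -1                      # the previous player took the last stone and won
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            best = max(best, -negamax(stones - take, nodes))
    return best

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        nodes = [0]
        result = negamax(n, nodes)
        print(f"{n:2d} stones: player to move {'wins' if result > 0 else 'loses'}, "
              f"{nodes[0]} positions searched")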
Q. What about Go?
A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.
Sooner or later, AI research will overcome this scandalous weakness.
Q. Don't some people say that AI is a bad idea?
A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. He proposes the Chinese room argument (www-formal.stanford.edu/jmc/chinese.html). The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn't reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.
Q. Aren't computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]
A. No. These theories are relevant but don't address the fundamental problems of AI.
In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. Roger Penrose claims this. However, people can't guarantee to solve arbitrary problems in these domains either. See my Review of The Emperor's New Mind by Roger Penrose. More essays and reviews defending AI research are in [McC96a].
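The flavor of these results can be conveyed with a small sketch (the function and the example equation are mine, not McCarthy's). A program can search for integer solutions of a polynomial equation and is guaranteed to halt whenever a solution exists; on an equation with no solution the loop simply runs forever, and Matiyasevich's resolution of Hilbert's tenth problem shows that no general procedure can decide in advance which case applies.

from itertools import count, product

def search_integer_solution(p, n_vars):
    """Enumerate integer tuples within a growing bound until p(...) == 0."""
    for bound in count(0):
        for xs in product(range(-bound, bound + 1), repeat=n_vars):
            if p(*xs) == 0:
                return xs              # halts only if some solution exists

if __name__ == "__main__":
    # x**2 + y**2 - 25 == 0 has integer solutions, e.g. (-5, 0), so this halts.
    print(search_integer_solution(lambda x, y: x * x + y * y - 25, 2))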
In the 1960s computer scientists, especially Steve Cook and Richard Karp developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can't solve them quickly in general.
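Propositional satisfiability is easy to state in code; what is hard is avoiding the exponential search. Below is a minimal brute-force sketch (the clause encoding, with positive and negative integers standing for literals, is just a convention chosen for this illustration): it tries every one of the 2**n assignments, which is exactly the exponential behavior referred to above.

from itertools import product

def brute_force_sat(clauses, num_vars):
    """Try all 2**num_vars assignments; time grows exponentially with num_vars."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))    # variable index -> truth value
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment                          # satisfying assignment found
    return None                                        # unsatisfiable

if __name__ == "__main__":
    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))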
What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.
The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn't interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem solving methods that neither the complexity researchers nor the AI community have been able to identify precisely.
Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should sometimes be illuminating even when you can't prove that the program is the shortest.
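A rough, informal analogue of this definition (my own toy example, not a computation of Kolmogorov complexity, which is provably uncomputable): a highly regular 2000-character string can be regenerated by an 11-character Python expression, while a patternless string of the same length is not expected to have any comparably short description.

import os, binascii

regular = "ab" * 1000                      # 2000 characters with obvious structure
short_program = "'ab' * 1000"              # an 11-character expression that regenerates it
random_like = binascii.hexlify(os.urandom(1000)).decode()  # 2000 "patternless" hex characters

print(len(regular), len(short_program))    # 2000 vs 11
print(len(random_like))                    # 2000, with no comparably short description expected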
Available: [Online] http://www-formal.stanford.edu/jmc/whatisai/node1.html