Date post: 30-Dec-2015
Upload: annabella-jackson

Converging Technologies and Pervasive Computing

Cybertechnology is converging with non-cybertechnologies, including biotechnology and nanotechnology.

Cybertechnology is also becoming pervasive as computing devices now pervade our public and private spaces.

Pervasive computing and technological convergence are both facilitated by the miniaturization of computing devices.

Converging Technologies (Continued)

Computers are becoming less visible as distinct entities, as they: (a) continue to be miniaturized and integrated into ordinary objects, and (b) blend unobtrusively into our surroundings.

Cybertechnology is also becoming less distinguishable from other technologies as the boundaries that previously separated them begin to blur because of convergence.

Technological Convergence

Howard Rheingold (1992) notes that technological convergence occurs when unrelated technologies or technological paths intersect or “converge unexpectedly” to create an entirely new field.

Convergence involving cybertechnology is not new.

Rheingold notes that virtual-reality (VR) technology resulted from the convergence of video technology and computer hardware in the 1980s.

Technological Convergence (Continued)

Cybertechnologies are converging with non-cybertechnologies at an unprecedented pace.

Two areas involving convergence are:

biotechnology and information technology (resulting in the field of bioinformatics);
nanotechnology and computing (giving rise to the field of nanocomputing).

“Enabling Technologies”

Rheingold notes that convergence often depends on enabling technologies, which he defines as technologies that make other technologies possible.

He points out that for VR, converging elements had to wait for the enabling technologies of electronic miniaturization, computer simulation, and computer graphics to mature in the 1980s.

Miniaturization and Embedded/Integrated Computing Devices

Technological convergence has been enabled by two key factors:

(1) the miniaturization of computers and computing devices;

(2) the embedding/integrating of computing devices into objects and environments.

Three Areas of Technological Convergence Affecting Ethics

Three converging technologies that raise ethical concerns are:

ambient intelligence (AmI): the convergence of (a) pervasive computing, (b) ubiquitous communication, and (c) intelligent user interfaces;
bioinformatics;
nanocomputing.

Ambient Intelligence (AmI)

Ambient intelligence (or AmI) is defined as a technology that enables people to live and work in environments that respond to them in “intelligent ways” (Aarts and Marzano, 2003; Brey, 2005; and Weber et al., 2005).

Consider the example of the “intelligent home” (Raisinghani, et al., 2004) described in the text.

AmI (Continued)

Three key technological components make AmI possible:

pervasive computing;
ubiquitous communication;
intelligent user interfaces (IUIs).

Pervasive Computing

According to the Centre for Pervasive Computing, pervasive computing is defined as a computing environment where information and communication technology are “everywhere, for everyone, at all times.”

Computer technology is integrated in our environments – i.e., from “toys, milk cartons and desktops, to cars, factories, and whole city areas.”

Pervasive Computing (Continued)

Pervasive computing is made possible by the increasing ease with which circuits can be embedded into objects, including wearable, even disposable, items.

Bütschi, Courant, and Hilty (2005) note that computing has already begun to pervade many dimensions of our lives.

For example, it pervades the work sphere, cars, public transportation systems, the health sector, the market, and our homes.

Pervasive Computing (Continued)

Pervasive computing is sometimes also referred to as ubiquitous computing (or ubicomp).

“Ubiquitous computing” was coined by Mark Weiser (1991), who envisioned “omnipresent computers” that serve people in their everyday lives, both at home and at work.

Pervasive Computing (Continued)

Adam Greenfield (2005) believes that ubiquitous or pervasive computing will insinuate itself much more thoroughly into our day-to-day activities than current Internet- and Web-based technologies.

For pervasive computing to operate at its full potential, however, continuous and ubiquitous communication between devices is also needed.

Ubiquitous Communication

Ubiquitous communication aims at ensuring flexible and omnipresent communication between interlinked computer devices (Raisinghani et al., 2004) via:

wireless local area networks (W-LANs);
wireless personal area networks (W-PANs);
wireless body area networks (W-BANs);
Radio Frequency Identification (RFID).

Intelligent User Interfaces (IUIs)

Intelligent User Interfaces (or IUIs) have been made possible by developments in the field of artificial intelligence (AI).

Philip Brey (2005) notes that IUIs go beyond traditional interfaces such as a keyboard, mouse, and monitor.

IUIs (Continued)

IUIs improve human interaction with technology by making it more intuitive and more efficient than was previously possible with traditional interfaces.

With IUIs, computers can “know” and sense far more about a person than was possible with traditional interfaces, including information about that person’s situation, context, or environment.

IUIs (Continued)

With IUIs, AmI remains in the background and is virtually invisible to the user.

Brey notes that with IUIs, people can be surrounded by hundreds of intelligent networked computers that are “aware of their presence, personality, and needs.”

But users may not be aware of the existence of IUIs in their environments.

Ethical and Social Issues Affecting AmI

Three ethical and social issues affecting AmI:

freedom and autonomy;
technological dependency;
privacy, surveillance, and the “Panopticon.”

Autonomy and Freedom Involving AmI

Will human autonomy and freedom be enhanced or diminished as a result of AmI technology?

AmI’s supporters suggest humans will gain more control over the environments with which they interact because technology will be more responsive to their needs.

Brey notes a paradoxical aspect of this claim, pointing out that “greater control” is presumed to be gained through a “delegation of control to machines.”

Autonomy and Freedom (Continued)

Brey considers three ways in which AmI may make the human environment more controllable, because it can:

(1) become more responsive to the voluntary actions, intentions, and needs of users;

(2) supply humans with detailed and personal information about their environment;

(3) do what people want without having to engage in any cognitive or physical effort.

Autonomy and Freedom (Continued)

Brey also considers three ways that AmI can diminish the amount of control that humans have over their environments, where users may lose control because a smart object can:

(1) make incorrect inferences about the user, the user’s actions, or the situation;

(2) require corrective actions on the part of the user;

(3) represent the needs and interests of parties other than the user.

Technological Dependency

We have come to depend a great deal on cybertechnology in conducting many activities in our day-to-day lives.

In the future, will humans depend on the kind of smart objects and smart environments made possible by AmI technology in ways that exceed our current dependency on cybertechnology?

Technological Dependency (Continued)

IUIs could relieve us of:

(a) having to worry about performing many of our routine day-to-day tasks, which can be considered tedious and boring, and

(b) much of the cognitive effort that has, in the past, enabled us to be fulfilled and to flourish as humans.

Technological Dependency (Continued)

What would happen to us if we were to lose some of our cognitive capacities because of an increased dependency on cybertechnology?

Review the futuristic scenario (in the textbook) described by E. M. Forster about what happens to a society when it becomes too dependent on machines.

Privacy, Surveillance, and the Panopticon

Marc Langheinrich (2001) argues that, with respect to privacy and surveillance, four features differentiate AmI from other kinds of computing applications:

ubiquity;
invisibility;
sensing;
memory amplification.

Privacy, Surveillance, and the Panopticon (Continued)

Langheinrich notes that because:

(1) computing devices are ubiquitous or omnipresent in AmI environments, privacy threats are more pervasive in scope.

(2) computers are virtually invisible in AmI environments, it is likely that users will not always realize that computing devices are present and are being used to collect and disseminate personal data.

Privacy, Surveillance, and the Panopticon (Continued)

Langheinrich also believes that AmI poses a more significant threat to privacy than earlier computing technologies because:

Sensing devices associated with IUIs may become so sophisticated that they will be able to sense (private) human emotions like fear, stress, and excitement.

AmI has the unique potential to create a memory or “life-log” – i.e., a complete record of someone’s past.

Surveillance and the Panopticon

Johann Čas (2004) notes that in AmI environments, no one can be sure that he or she is not being observed.

An individual cannot be sure whether information about his or her presence at any location is being recorded.

Surveillance and the Panopticon (Continued)

Čas believes that the only realistic attitude is to assume that any activity (or inactivity) is being monitored and that this information may be used in any context in the future.

So, people in AmI environments are subject to a virtual “panopticon.”

Review the example of Bentham’s “Inspection House” (described in the textbook). Does it anticipate any threats posed by AmI?

Table 12-1: Ambient Intelligence

Technological Components                Ethical and Social Issues Generated
Pervasive Computing                     Freedom and Autonomy
Ubiquitous Communication                Privacy and Surveillance
Intelligent User Interfaces (IUIs)      Technological Dependence

Bioinformatics

Bioinformatics is a branch of informatics. Informatics involves the acquisition, storage, manipulation, analysis, transmission, sharing, visualization, and simulation of information on a computer.

Bioinformatics is the application of the informatics model to the management of biological information.
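The informatics model applied to biological information can be made concrete with a minimal sketch: a DNA sequence stored as data and analyzed computationally. The sequences and function names below are hypothetical illustrations, not from the textbook.

```python
# Minimal sketch of the informatics model applied to biological data:
# biological information (a DNA sequence) is stored as a string and
# then manipulated and analyzed on a computer. Illustrative only.

def gc_content(sequence):
    """Fraction of G and C bases in a DNA sequence (a common analysis)."""
    sequence = sequence.upper()
    if not sequence:
        raise ValueError("empty sequence")
    gc = sum(1 for base in sequence if base in "GC")
    return gc / len(sequence)

def reverse_complement(sequence):
    """Reverse complement of a DNA sequence (A<->T, G<->C)."""
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(sequence.upper()))

seq = "ATGCGC"                  # hypothetical sequence fragment
gc = gc_content(seq)            # 4 of the 6 bases are G or C
rc = reverse_complement(seq)    # "GCGCAT"
```

Real bioinformatics pipelines perform the same kinds of storage, manipulation, and analysis steps at genome scale.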

Ethical Aspects of Bioinformatics

Three kinds of social and ethical concerns arise in bioinformatics research and development:

privacy and confidentiality;
autonomy and informed consent;
information ownership and property rights.

Privacy, Confidentiality, and the Role of Data Mining

Review the deCODE Genetics case (described in the textbook).

Many individuals who donated DNA samples to deCODE had the expectation that their personal genetic data was confidential information, protected by the company’s privacy policies and by privacy laws.

Privacy, Confidentiality, and the Role of Data Mining (Continued)

Anton Vedder (2004) notes that privacy protection that applies to personal information about individuals does not necessarily apply to that information once it is:

aggregated, and cross-referenced with other information (via data mining).

Privacy, Confidentiality, and Data Mining (Continued)

Research subjects could be denied employment, health insurance, or life insurance based on the results of data-mining technology used in genetic research.

For example, a person could end up in a “risky” category based on arbitrary associations and correlations that link trivial non-genetic information with sensitive information about one’s genetic data.
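A toy sketch can show how such group profiling works: an individual is assigned to a “risky” category purely by cross-referencing a trivial attribute with an aggregate (mined) correlation, without any of that person’s own genetic data being consulted. All names, attributes, prevalences, and thresholds below are invented for illustration.

```python
# Toy illustration of group profiling via data mining. A mined,
# group-level correlation (hobby -> marker prevalence) is cross-
# referenced with trivial non-genetic records about individuals.
# All data and thresholds are hypothetical.

# Aggregate (mined) result: arbitrary correlation between a lifestyle
# attribute and the prevalence of a genetic marker in that group.
marker_prevalence_by_hobby = {
    "skydiving": 0.40,   # hypothetical prevalence
    "gardening": 0.05,
}

# Trivial, non-genetic records about named individuals.
customer_records = [
    {"name": "Alice", "hobby": "skydiving"},
    {"name": "Bob",   "hobby": "gardening"},
]

def assign_risk_group(records, prevalence, threshold=0.25):
    """Label individuals 'risky' purely by group-level correlation."""
    return {
        r["name"]: ("risky" if prevalence.get(r["hobby"], 0.0) > threshold
                    else "standard")
        for r in records
    }

groups = assign_risk_group(customer_records, marker_prevalence_by_hobby)
# Alice is labeled "risky" without her genetic data ever being used,
# and without her knowing that the group she was assigned to exists.
```

This is the mechanism behind the slide’s concern: the inference runs from trivial attributes to sensitive categories, so the affected individual has no occasion to consent or to correct errors.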

Privacy, Confidentiality, and Data Mining (Continued)

Individuals who eventually become identified or associated with newly-created groups may have no knowledge that the groups to which they have been assigned actually exist.

These people may also have no chance to correct any inaccuracies or errors that could result from their association with that group.

Autonomy and Informed Consent

The process of informed consent used in getting permission from human research subjects who participate in genetic studies that use data mining is controversial.

In the deCODE case, did the genetic information acquired from “informed” volunteers meet the required conditions for valid informed consent?

Autonomy and Informed Consent (Continued)

According to the Office of Technology Assessment (OTA) report, entitled Protecting Privacy in Computerized Medical Information, individuals must:

(i) have adequate disclosure of information about the data dissemination process;

(ii) be able to fully comprehend what they are being told about the procedure or treatment.

Autonomy and Informed Consent (Continued)

Because of the way data-mining technology can manipulate personal information that is authorized for use in one context only, the process of informed consent is opaque.

The conditions required for “valid” informed consent are difficult, if not impossible, to achieve in cases that involve secondary uses of personal genetic information.

So, it is not clear that these research subjects have full autonomy.

Intellectual Property Rights and Ownership Issues

Consider that deCODE Genetics was given exclusive rights (for 12 years) to the information included in the Icelandic health-records database.

This raises property-rights issues that also affect who should (and should not) have access to the information in that database.

Intellectual Property Rights and Ownership Issues (Continued)

Who should own the personal genetic information in deCODE’s database?

Should deCODE have exclusive ownership rights to all of the personal genetic information that resides in its databases?

Should individuals retain (at least) some rights over their personal genetic data when it is stored in a privately owned database?

Intellectual Property Rights and Ownership Issues (Continued)

Have individuals who donated their DNA samples to deCODE necessarily lost all rights to their personal genetic data once it was stored in that company’s databases?

Should deCODE hold rights to this data in perpetuity, and should deCODE be permitted to do whatever it wishes with that data?

Intellectual Property Rights and Ownership Issues (Continued)

Why are questions involving the ownership of personal genetic information stored in commercial databases so controversial from an ethical point of view?

Recall our discussion in Chapter 5 of a commercial database containing personal information about customers that was owned by Toysmart.com, a now defunct on-line business.

Table 12-2: Ethical Issues Associated with Bioinformatics

Personal Privacy and Confidentiality

The aggregation of personal genetic data, via data mining, can generate privacy issues affecting “new groups” and “new facts” about individuals.

Informed Consent and Autonomy

The nontransparent (or “opaque”) consent process can preclude “valid” or “fully informed” consent, thereby threatening individual autonomy.

Intellectual Property Rights/Ownership

The storage of personal genetic data in electronic databases raises questions about who should have ownership rights and access to the data.

Ethical Guidelines and Legislation for Genetic Data/Bioinformatics

ELSI (Ethical, Legal, and Social Implications) Guidelines have been established for federally-funded genomics research.

ELSI requirements do not apply to genomics research in the commercial sector.

Ethical Guidelines and Legislation (Continued)

Some genetic-specific privacy laws and policies have been passed in response to concerns about potential misuses of personal genetic data.

In the U.S., laws affecting genetics have been enacted primarily at the state level.

Ethical Guidelines and Legislation (Continued)

No U.S. federal laws protect personal genetic data per se.

The Health Insurance Portability and Accountability Act (HIPAA) provides broad protection for personal medical information.

HIPAA protects the privacy of “individually identifiable health information” from “inappropriate use and disclosure.”

Ethical Guidelines and Legislation (Continued)

Critics worry that HIPAA does not provide any special privacy protection for personal genetic information.

It is not clear that HIPAA adequately addresses concerns affecting nonconsensual secondary uses of personal medical and genetic information (Baumer, Earp, and Payton, 2006).

Nanotechnology

Rosalyn Berne (2005) defines nanotechnology as the study, design, and manipulation of natural phenomena, artificial phenomena, and technological phenomena at the nanometer level.

K. Eric Drexler, who coined the term nanotechnology in the 1980s, describes the field as a branch of engineering dedicated to the development of extremely small electronic circuits and mechanical devices built at the molecular level of matter.

Nanotechnology and Nanocomputing

Drexler (1991) predicted that developments in nanotechnology would result in computers at the nano-scale, no bigger than bacteria, called nanocomputers.

Nanocomputers can be designed using various types of architectures.

An electronic nanocomputer would operate in a manner similar to present-day computers, differing primarily in terms of size and scale.

Nanotechnology and Nanocomputing (Continued)

To appreciate the scale of future nanocomputers, imagine a mechanical or electronic device whose dimensions are measured in nanometers (billionths of a meter, or units of 10^-9 meter).

Ralph Merkle (2001) predicts that nano-scale computers will be able to deliver a billion billion instructions per second – i.e., a billion times faster than today’s desktop computers.
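These scale claims can be sanity-checked with simple arithmetic. The 10^9-instructions-per-second figure for a contemporary desktop is an assumption, used only to reproduce Merkle’s “billion times faster” comparison.

```python
# Sanity check on the scale claims in the slides. A nanometer is
# 1e-9 m, and "a billion billion instructions per second" (1e18 IPS)
# is a billion times a 1e9-IPS desktop (an assumed rough figure).

nanometer = 1e-9            # meters: a billionth of a meter
desktop_ips = 1e9           # assumed ~billion instructions/sec
nanocomputer_ips = 1e18     # "a billion billion" instructions/sec

speedup = nanocomputer_ips / desktop_ips   # 1e9: a billion times faster
```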

Nanotechnology and Nanocomputing (Continued)

Although nanocomputing is still in its early stages of development, some primitive nanocomputing devices have already been tested.

At Hewlett-Packard, computer memory devices with eight platinum wires, each 40 nanometers wide, have been developed on a silicon wafer.

James Moor and John Weckert (2004) note that it would take more than one thousand of these chips to be the width of a human hair.

Optimistic View of Nanotechnology

Bert Gordijn (2003) considers a “utopian dream,” where nanotechnology would:

be self-sufficient and “dirt free”;
create unprecedented objects and materials;
enable the production of inexpensive, high-quality products;
be used to fabricate food rather than having to grow it;
provide low-priced and superior equipment for healthcare;
enable us to enhance our human capabilities and properties.

Pros of Nanotechnology

Nanites could be used to clean up toxic spills and to eliminate other kinds of environmental hazards.

Nanites could also dismantle or "disassemble" garbage at the molecular level and recycle it again as food.

Pros of Nanotechnology (Continued)

Nano-particles inserted into bodies could diagnose diseases and directly treat diseased cells.

Doctors could use nanites to make microscopic repairs on areas of the body that are difficult to operate on with conventional surgical tools.

With nanotechnology tools, the life signs of a patient could be better monitored.

Pessimistic View of Nanotechnology

Gordijn also considers the pessimistic view, where nanotechnology developments could result in:

severe economic disruption;
premeditated misuse in warfare and terrorism;
surveillance with nano-level tracking devices;
extensive environmental damage;
uncontrolled self-replication (sometimes referred to as the “grey goo scenario”);
misuse by criminals and terrorists (sometimes referred to as the “black goo scenario”).

Cons of Nanotechnology

All matter (objects and organisms) could theoretically be disassembled and reassembled by nanite assemblers and disassemblers.

Since nanites could be created to be self-replicating, what would happen if strict "limiting mechanisms" were not built into them?

Theoretically, they could multiply endlessly like viruses.
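The worry about unchecked self-replication can be made concrete with a toy doubling model; the replication rate and the “limiting mechanism” below are purely illustrative assumptions.

```python
# Toy model of self-replicating nanites: if each nanite copies itself
# once per cycle (doubling), growth is exponential unless a built-in
# limiting mechanism halts replication. Numbers are illustrative only.

def population(start, cycles, limit=None):
    """Nanite count after `cycles` doublings, optionally capped by a
    built-in limiting mechanism."""
    count = start
    for _ in range(cycles):
        count *= 2
        if limit is not None and count >= limit:
            return limit        # limiting mechanism halts replication
    return count

unchecked = population(1, 30)            # 2**30: over a billion nanites
capped = population(1, 30, limit=1000)   # the safeguard caps growth
```

Thirty doubling cycles already yield more than a billion copies from a single nanite, which is why the slides treat the absence of strict limiting mechanisms as a serious risk.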

Cons of Nanotechnology (Continued)

Our movements could easily be tracked by others via nanoscopic devices such as molecular-sized microphones, cameras, and homing beacons.

Our privacy and freedom could be further eroded because governments, businesses, and ordinary people could use these devices to monitor us.

Cons of Nanotechnology (Continued)

Nanite assemblers and disassemblers could be used to create weapons.

Nanites themselves could be used as weapons.

Andrew Chen (2002) notes that guns, explosives, and electronic components of weapons could all be miniaturized.

Nanoethics: Identifying and Analyzing Ethical Issues in Nanotechnology

Moor and Weckert (2004) believe that assessing ethical issues that arise at the nano-scale is important because of the kinds of “policy vacuums” that are raised.

They do not argue that a separate field of applied ethics called nanoethics is necessary.

But they make a strong case for why an analysis of ethical issues at the nano-level is now critical.

Nanoethics (Continued)

Moor and Weckert identify three distinct kinds of ethical concerns at the nano-level that warrant analysis:

privacy and control;
longevity;
runaway nanobots.

Ethical Aspects of Nanotechnology: Privacy Issues

We will be able to construct nano-scale information-gathering systems.

It will become extremely easy to put a nano-scale transmitter in a room, or onto someone’s clothing.

Individuals may have no idea that these devices are present or that they are being monitored and tracked by them.

Ethical Aspects of Nanotechnology: Longevity Issues

Moor and Weckert note that while many see longevity as a good thing, there could also be negative consequences.

There could be a population problem if the life expectancy of individuals were to change dramatically.

Ethical Aspects of Nanotechnology: Longevity Issues (Continued)

If fewer children are born relative to adults, there could be a concern about the lack of new ideas and “new blood.”

Also, questions could arise with regard to how many “family sets” couples, whose lives could be extended significantly, would be allowed to have during their expanded lifetime.

Ethical Aspects of Nanotechnology: Runaway Nanobots

Moor and Weckert note that when nanobots work to our benefit, they build what we desire.

But when nanobots work incorrectly, they can build what we don’t want.

The replication of these bots could get out of hand.

Should Computer Scientists Participate in Nanocomputing Research/Development?

Joseph Weizenbaum (1984) argues that computer science research that can have “irreversible and not entirely foreseeable side effects” should not be undertaken.

Bill Joy (2000) argues that because developments in nanocomputing are threatening to make us an “endangered species,” the only realistic alternative is to limit its development.

Future Nanotechnology Research (Continued)

Ralph Merkle (2001) argues that if research in nanotechnology is prohibited, or even restricted, it will be done underground.

If this happens, nano research would not be regulated by governments and by professional agencies concerned with social responsibility.

Should Research Continue in Nanotechnology?

John Weckert (2006) argues that potential disadvantages that can result from research in a particular field are not in themselves sufficient grounds for halting research.

He suggests that there should be a presumption in favor of freedom in research.

But Weckert also argues that it should be permissible to restrict or even forbid research where it can be clearly shown that harm is more likely than not to result from that research.

Assessing Nanotechnology Risks: Applying the Precautionary Principle

Questions about how best to proceed in scientific research when there are concerns about harm to the public good are often examined via the precautionary principle.

Weckert and Moor (2004) define the precautionary principle in the following way: if some action has a possibility of causing harm, then that action should not be undertaken, or some measure should be put in its place to minimize or eliminate the potential harms.
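One way to read this definition is as a conditional decision rule. The following toy encoding is purely illustrative: the real principle involves judgment about degrees and likelihoods of harm, not booleans.

```python
# Toy encoding of the precautionary principle as stated by Weckert and
# Moor: if an action has a possibility of causing harm, either forgo it
# or pair it with a measure that minimizes/eliminates the potential
# harm. Illustrative only; not a substitute for ethical judgment.

def permissible(possible_harm, mitigation_in_place):
    """Return True if the action may proceed under the principle."""
    if not possible_harm:
        return True                 # no possibility of harm: proceed
    return mitigation_in_place      # otherwise only with a safeguard

# Harmless research proceeds; possibly harmful research proceeds only
# when a harm-minimizing measure is in place.
```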

The Precautionary Principle (Continued)

Weckert offers the following strategy:

If a prima facie case can be made that some research will likely cause harm...then the burden of proof should be on those who want the research carried out to show that it is safe.

He also says that there should be:

...a presumption in favour of freedom until such time a prima facie case is made that the research is dangerous. The burden of proof then shifts from those opposing the research to those supporting it. At that stage the research should not begin or be continued until a good case can be made that it is safe.

Nanotechnology and the Precautionary Principle (Continued)

Weckert and Moor believe that when the precautionary principle is applied to questions about nanotechnology research and development, it needs to be analyzed in terms of three different “categories of harm”:

direct harm;
harm by misuse;
harm by mistake or accident.

The kinds of risks involved in each differ.

The Need for Clear Ethical Guidelines for Nanocomputing and Nanotechnology

Ray Kurzweil (2005) has suggested that an ELSI-like model should be developed and used to guide researchers working in nanotechnology.

Many consider the ELSI framework to be an ideal model because it is a “proactive” ethics framework.

The Need for Ethical Guidelines (Continued)

In most scientific research areas, ethics has had to play “catch-up,” because guidelines were developed in response to cases where serious harm had already resulted.

Prior to the ELSI Program, ethics was typically “reactive” in the sense that it followed scientific developments rather than informing scientific research.

Ethical Guidelines (Continued)

Moor and Weckert (2004) are critical of the ELSI model because it employs a scheme that they call an “ethics-first” framework.

This kind of framework has problems because it depends on a “factual determination” of the specific harms and benefits of a technology before an ethical assessment can be done.

In the case of nanotechnology, it is very difficult to know what the future will be.

Ethical Guidelines (Continued)

If we developed an ELSI-like ethics model, it might seem appropriate to put a moratorium on nanotechnology research until we get all of the facts.

Moor and Weckert argue that while a moratorium would halt technology developments, it would not advance ethics in the area of nanotechnology.

Ethical Guidelines (Continued)

Moor and Weckert also argue that turning back to an “ethics-last” model is not desirable either.

They note that once a technology is in place, much unnecessary harm may already have occurred.

So, for Moor and Weckert, neither an ethics-first nor an ethics-last model is satisfactory for nanotechnology.

Ethical Guidelines (Continued)

Moor and Weckert argue that ethics is something that needs to be done continually as:

technology develops, and
its potential consequences become better understood.

They also point out that ethics is “dynamic” in that the factual component on which it relies has to be continually updated.

Ethical Guidelines (Continued)

Thus far, nanotechnology guidelines in the private sector have been implemented by the Foresight Institute.

The U.S. Government has created the National Nanotechnology Initiative (NNI) to monitor and guide federally-funded research in nanotechnology.

Ethical Guidelines (Continued)

Some worry that conflicts of interest involving the military and national defense initiatives can easily arise.

Much of the funding for nanotechnology research has come from government agencies, including the:

National Science Foundation (NSF);
Defense Advanced Research Projects Agency (DARPA).

Ethical Guidelines (Continued)

Andrew Chen (2002) believes that in addition to NSF and DARPA, other stakeholders include:

researchers (independent and privately funded);
nanotechnology users;
potentially everyone (since all of us will eventually be affected by developments in nanotechnology).

Ethical Guidelines (Continued)

Chen proposes that a non-government advisory council be formed to:

monitor the research, and
help formulate a broader set of ethical guidelines and policies.

The ethical guidelines would need to be continually updated in light of ongoing developments in nanotechnology.

