How Will We Use Tomorrow’s PCs?
Tomorrow’s PCs are going to be different in many ways: they will be more powerful, they will include more facilities for multimedia, and, looking further ahead, they may have features such as three-dimensional displays or wrap-around virtual reality. These changes will shape the way in which we use our PCs, but even without such advances, there are changes that can and will take place in the operating systems that enable us to make better use of PCs. I would like to focus here on some of the changes that I believe are desirable. So what is wrong with today’s operating systems? Plenty.
Whereas the hardware for today’s desktop PCs has advanced at an ever-increasing pace, the operating systems have not matched it. To be sure, there has been progress. The world of windows is a significant advance on the primitive command-line interface of the original PC operating systems. But this represents merely a catching up with the state of the art of thirty years ago.
The cost-to-power ratio of current microprocessors would have amazed the pioneers at IBM who developed these things, but they would not have been too impressed with the operating system. One of the other things wrong with operating systems is just that – the name. I either have to spell it out all the time, or fall back on the somewhat cryptic OS; and what exactly does ‘operating system’ mean to the average PC user? It sounds more like something you would expect to find in a hospital than on a home computer. What is needed is a name that is more user-friendly and better represents the relationship I believe should exist between the PC and the user.
In a world where the average user is well used to the infrared controller that zaps the TV, hi-fi or VCR, I suggest that Controller is a better word than operating system, so that is what I shall use. Current controllers have evolved as little more than a way for users to get application programs running on their PC. This is how they started, and that is, by and large, how they have remained. They have become prettier, more complex, larger, and aware of other PCs on a network; but their primary purpose is still to enable the user to start or terminate a specific application (such as a word processor).
Partly because of this rationale, controllers have been oriented too much towards the workings of the PC and the application programs, and not enough towards the relationship between the user and the PC. Tomorrow’s controller will have to act as a mediator between the user and the various tasks and applications that are provided on the PC. To do this, it will have to be able to communicate with both the user and the applications; it will also have to know more about both. To give a simple example – I use my PC most days, and I have used it hundreds of times, yet it doesn’t even know my name! In fact, it knows absolutely nothing about me at all. Every time I come to use the PC, my arrival comes as a complete surprise to it.
It has no memory of me, of my habits, my working practices, my family, my friends or my interests. We are complete and utter strangers. I believe this has to change, and with the power of tomorrow’s PCs, it will change. Another complaint I have about controllers is that they are far too passive; they are not proactive. If I do nothing, then my PC does nothing.
Now I can, with some difficulty, arrange for various tasks to be carried out automatically without my being present, but this is only a beginning. Consider: I have to monitor the organisation and structuring of the files held on my PC. Why? I think the controller should do this for me; it knows about such things better than I do. It can get to know my requirements by observation, confirming them by questioning me when necessary, and take care of it.
Perhaps it could cogitate over such matters at night. While I get my rest, it can clear things up so they are refreshed in the morning. Let it dream while I do; let it consider possibilities and permutations.
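By way of illustration, here is a minimal sketch of what such overnight housekeeping might look like. The Controller class, its method names and the ninety-day rule are all hypothetical, invented for this example rather than drawn from any existing system.

import os
import time
from collections import Counter

class Controller:
    def __init__(self, watched_dir):
        self.watched_dir = watched_dir
        self.usage = Counter()                 # how often each file has been opened

    def observe_open(self, path):
        # Called whenever the user opens a file, so the controller can learn habits.
        self.usage[path] += 1

    def nightly_tidy(self, confirm):
        # Run overnight: propose archiving files that look unused, and ask
        # for confirmation (the 'questioning me when necessary') before acting.
        cutoff = time.time() - 90 * 24 * 3600  # untouched for roughly ninety days
        for name in os.listdir(self.watched_dir):
            path = os.path.join(self.watched_dir, name)
            if not os.path.isfile(path):
                continue
            never_opened = self.usage[path] == 0
            old = os.path.getmtime(path) < cutoff
            if never_opened and old and confirm(f"Archive {name}?"):
                print(f"(would move {name} to an archive folder)")

The confirmation callback stands in for the dialogue with the user; a real controller would schedule nightly_tidy for the small hours and act on the answers it receives.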
One of the most common problems that the average user faces with computers is that they do not behave in ways that the user expects. People are used to dealing with other people; they talk to them, listen to them, watch their body language, and empathise with them. This is tricky with computers. Even though the days of cryptic numeric error messages are thankfully over (well, almost), it is still far too common for the user to be faced with some event or message that he neither expects nor comprehends. Tomorrow’s controllers are going to have to be better at explaining to the user what is going on, and why, and what options he has to influence the future course of events. This means that controllers are going to have to take more responsibility, and intervene more often between the user and the applications. The current separation between applications and the operating system has been forced on the user for historical reasons, and by the realities of the marketplace.
The user does not want this separation; he would prefer a ‘seamless’ way of moving between the different tasks or requests that he is carrying out. In fact, I believe that the whole idea of separate applications will gradually give way to a more homogeneous view of the PC. The user will ‘talk’ to the PC and request information, give information, ask questions, give orders to be carried out, and respond to events that occur during the execution of those orders. In other words, people will interact with PCs in a manner that more closely resembles their interaction with other people.
In order to achieve this improvement, various things will have to change. One has only to glance at any science fiction video to know that the most natural way for people to communicate with computers is via speech. Current speech technology is nearing the point where this is viable, but using speech to communicate does not rely just on a speech recognition and synthesis chip; it also needs an adjustment in thinking about the user interface. Most current applications of speech recognition are aimed either at dictation into a word processor, or at using spoken commands to control an application. These are certainly useful steps, but consider also how useful it would be to be able to yell ‘Stop’ or ‘Quiet’ to your PC from the other side of the room.
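As a rough illustration of this kind of spoken command handling, the sketch below assumes that a separate speech recogniser (not shown) hands the controller plain text; the command words and handler functions are invented for the example.

def stop_everything():
    print("Pausing all current tasks.")

def be_quiet():
    print("Muting speech output.")

COMMANDS = {
    "stop": stop_everything,
    "quiet": be_quiet,
}

def on_recognised_utterance(text):
    # Dispatch a recognised utterance, even one shouted from across the room.
    word = text.strip().lower()
    action = COMMANDS.get(word)
    if action:
        action()
    else:
        print(f"Sorry, I don't know what '{text}' means yet.")

on_recognised_utterance("Stop")    # -> Pausing all current tasks.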
The use of speech from the PC to inform you of the progress of its actions is another example of improving the relationship between the PC and the user. The single best example of speech use that I have found so far is my PC telling me ‘you have new mail waiting’ when I log onto CompuServe. The point is that speech needs to be used for such unexpected, asynchronous events, as well as in the normal command and response sequences. In addition, the current hierarchy of icons, buttons, toolbars and menus will need to give way to a flatter command structure, possibly organised into some kind of ‘topic’ areas – ‘Now, I’d like to talk about finance’. With the advent of Object Orientation in computer software, there is already a noticeable change in the way that application programs are viewed. It is increasingly common to think of collecting software objects with particular expertise, perhaps from a variety of sources, and ‘gluing’ them together to provide a service for the user.
I believe that the PC controller should develop into an environment where such objects can be glued together. In other words, the controller becomes a mediator between the user and a large collection of ‘intelligent’ objects. The controller will need to know how to deal with such objects, how to talk to them, how to report on their activities, and generally how to keep them in order and doing useful things that the user wants. For example, I would like to be able to give a task to my PC and say ‘keep me updated’, or ‘report back tomorrow’, or next week, or whatever. I would also like to be able to say ‘don’t bother me now, come back later’, or ‘do not disturb’.
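A minimal sketch of this style of delegation is given below; the Task and ReportPolicy names, and the idea of handing a reporting preference to the controller alongside the task, are my own illustration rather than any existing interface.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ReportPolicy:
    keep_me_updated: bool = False              # interrupt me with progress reports
    report_back_at: Optional[datetime] = None  # or hold everything until this time
    do_not_disturb: bool = False               # say nothing unless I ask

class Task:
    def __init__(self, description, expert, policy):
        self.description = description
        self.expert = expert                   # the software 'expert' doing the work
        self.policy = policy

    def progress(self, message):
        if self.policy.do_not_disturb:
            return                             # stay silent, as instructed
        if self.policy.keep_me_updated:
            print(f"[{self.description}] {message}")

# 'Report back tomorrow':
task = Task("sort the holiday photographs", expert=None,
            policy=ReportPolicy(report_back_at=datetime.now() + timedelta(days=1)))
task.progress("halfway through")               # prints nothing: no updates were requested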
Giving instructions like these sounds pretty much like a manager directing a group of people who report to him, and I think this is a useful metaphor for how people will deal with PCs in future. I want to be able to give my PC various tasks to get on with, and then I would like it to report back to me on how things are going, in ways which I want to control. I expect the PC controller to ‘delegate’ most tasks to particular software ‘experts’, and to deal with them according to an agreed set of principles. If needed, I even expect the controller to tell me that there is no local ‘expert’ who can do the job, but that it knows where one can be contacted, and what that would cost. If I give the go-ahead, I then expect the controller to access the relevant expert over some wide area service – i.e. by dial-up PSTN line, over cable, by cellular, or whatever other services exist by then. If there are several such experts available, as one would hope in a free market, then the controller should give me a choice, with some indication of the relevant price and performance trade-offs, perhaps even a comparative analysis obtainable from another wide area service if I should think it worthwhile. Again, I think the controller should be proactive. It should spend some of its idle time trawling through selected services to look for new items which it thinks would interest me, and if it finds something particularly tasty, it can even interrupt me to tell me so. In fact, access to wide area services will be one of the major uses of PCs in the future.
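The sketch below illustrates one possible form of this idle-time trawling: incoming items are scored against a list of the user’s interests, and only the most promising ones justify an interruption. The interests, the scoring rule and the threshold are all invented for the example.

INTERESTS = {"speech recognition", "object orientation", "isdn"}

def interest_score(title):
    # Count how many of the user's interest topics appear in the item's title.
    words = set(title.lower().split())
    return sum(1 for topic in INTERESTS if set(topic.split()) <= words)

def trawl(new_items, interrupt_threshold=2):
    for title in new_items:
        score = interest_score(title)
        if score >= interrupt_threshold:
            print(f"Excuse me - this looks interesting: {title}")
        elif score > 0:
            print(f"(filed for later: {title})")

trawl(["Cheaper ISDN lines announced",
       "Speech recognition meets object orientation"])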
With the growth of the Internet, a tendency is already visible to locate sources of data, computing power or specialist hardware at specific sites of expertise – similar to the trend for concentrating research effort at centres of excellence. There is no need to copy or broadcast all information to everyone. With the local data-gathering capabilities provided by the PC, individuals or groups can instead collect the information relevant to their needs from the global data pool. Likewise, software object experts need not be located at each PC; they too can be interrogated or utilised from a global pool of such software objects.
The means of interacting with this global pool should be the responsibility of the PC controller. If I ask my PC a question, I should not need to know whether it has the means to answer it ‘locally’. The controller should be able to find the answer by ‘talking’ to software objects on the PC, or on a LAN (local area network), or on a WAN (wide area network). It may need authority from me to spend money on getting the answer, though I should be able to delegate some financial authority to it – ‘you can spend up to £10 per month without asking me all the time’.
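Here is a rough sketch of how the controller might prefer free local knowledge, then the LAN, and only then a paid wide area ‘expert’, while keeping within the delegated monthly budget; the Expert and Controller classes and the figures are purely illustrative.

class Expert:
    def __init__(self, location, cost):
        self.location = location               # "local", "LAN" or "WAN"
        self.cost = cost                       # charge per question, in pounds

    def answer(self, question):
        return f"({self.location} expert's answer to: {question})"

class Controller:
    def __init__(self, experts, monthly_budget=10.0):
        self.experts = experts
        self.budget_left = monthly_budget      # 'up to 10 pounds per month'

    def ask(self, question):
        # Try the cheapest source first; never exceed the delegated budget.
        for expert in sorted(self.experts, key=lambda e: e.cost):
            if expert.cost <= self.budget_left:
                self.budget_left -= expert.cost
                return expert.answer(question)
        return "I could find an answer, but it would exceed the budget you gave me."

controller = Controller([Expert("local", 0.0), Expert("WAN", 2.50)])
print(controller.ask("What will ISDN cost next year?"))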
Once again, I am relying on the analogy of the relationship between a manager and a subordinate. As the subordinate gets more experience, or gets smarter, I expect to be able to delegate more authority. Companies will still need to be able to make money by developing software objects and supplying them to the users of PCs. This can still be done by producing such objects and selling them through normal supply routes. But in addition, companies will be able to offer them over a wide area service, either for purchase or, in the case of specialist objects, for hire. The PC controller will steer the user through this global maze with advice and expertise that can itself be supplemented by software objects skilled in such matters – pretty much like human consultants, really.
Some of these objects could themselves be expert in dealing with various specialist classes of users. This phenomenon is already happening in a small way, with the appearance of software ‘agents’ that intervene between the user and other software services. In addition to dealing and negotiating with such objects, the controller will still have to manage the other ‘peripheral’ attachments to the PC, such as printers, CD drives and other permanent storage media. We will still need paper copies of documents, pictures and photographs – even if only to communicate with those people who have not yet embraced the PC philosophy. With the increase in multimedia developments, such peripherals may also include TVs, hi-fis, VCRs, camcorders and so on. However, I think current analysts have got carried away with ideas concerning the amalgamation of PCs and home entertainment systems. One has to consider carefully the lifestyles and movement patterns of the average family before coming to any conclusions about such developments. I happen to have three PCs in my home. One is in an office, and the other two are in my children’s bedrooms. Most of the home entertainment systems are in the living room.
I do not want a PC in the living room – I have had some bad experiences with games consoles. Still, I am sure that there will be many uses where connections between the PC and other electronics are needed. There may of course be several PCs in the home with differing styles and uses. Control of other home systems such as heating and lighting is an obvious example, and is already happening to some extent in the US. It is also likely that there will be some kind of co-operation between PCs and telephones, answering machines, faxes and videophones.
But just because such connections are possible does not mean that they will automatically happen. I can send faxes from my PC, and initiate voice calls; but the smooth blending of voice and data is something that has not yet happened, even though it has been ‘forthcoming’ for the past ten years. Even so, the serious advent of ISDN (combined voice and data lines from BT) in the UK could speed up the process somewhat. The current approach to dealing with peripherals is to use graphical ‘icons’ to represent each peripheral, which can then be controlled or interrogated by selecting its icon.
With the increasing use of speech to interact with the PC, I believe that it would be natural for peripherals to be given a ‘persona’ with which the user can communicate. This persona would have individual speech characteristics.
There are now far more specialist magazines than ever there were in the days before broadcast news and entertainment. Some old services may fade away, and others will be born to replace them. Solutions cannot be imposed on users; the brief history of technology has plenty of examples of failed attempts: quad hi-fi, DAT audio, the MCA bus. The solutions will develop, coalesce, merge and blend in ways that will be decided by that ultimate arbiter – the marketplace.
The one common feature that I believe must evolve is the basic way in which we communicate with these diverse products and services.