About eight years after KDE 4 was released, programmers are ready to amaze the world again: all the internet services are integrated into the desktop, so you can look at Twitter, Facebook, e-mail, blogs and YouTube directly from the desktop. Nepomuk can search through your social networks and help you find the information you need: it has been integrated into the main browsers, so you can reach its services from the right-click menu. You are writing a new post and don't remember a quote (from another post or a file)? No problem: just select the phrase you wrote and Nepomuk will search your disk and your social networks for other phrases with a similar meaning.

Furthermore, communicating with the computer works like a chat: a plasmoid lets you talk to the system as you would to a human. Do you want to install Scribus? Just say “Please, can you install Scribus?”. Do you want to upload an image to your profile? Just say “Hi computer, upload cat.jpg to Picasaweb's MyImages album”. The same intelligent engine is used by Nepomuk to discover similar meanings between phrases and classify them into semantic fields automatically. All the data is stored (where possible) on a server in the cloud, so no matter where you are, you always have your files with you.

In these years KDE programmers have also worked with the Arduino platform, and it is finally integrated into KDE: plasma widgets now let you control an Arduino-like device. For example, if your coffee machine uses such a board, you can tell it from your computer that you want a coffee. Since the media-center trend started about three years ago, today almost every family has a computer in the living room, so why not use it to manage the house? In the coming years we will probably see home automation (domotics) spread, and KDE will be ready to control the house.
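To get a feel for how a "similar meaning" lookup like Nepomuk's could rank phrases, here is a minimal sketch using bag-of-words cosine similarity. This is purely illustrative: the function names are hypothetical and this is not the real Nepomuk API, which works on a semantic store rather than raw word counts.

```python
# Hypothetical sketch of ranking phrases by similarity to a query.
# Not the real Nepomuk API; names are illustrative only.
import math
from collections import Counter

def similarity(a, b):
    """Cosine similarity between two phrases seen as word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def find_similar(query, corpus, top=3):
    """Return the `top` corpus phrases most similar to the query."""
    return sorted(corpus, key=lambda p: similarity(query, p), reverse=True)[:top]
```

A real engine would of course use synonyms and semantic relations, not just shared words, but the ranking idea is the same.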
Since KDE is also available for your mobile devices (using MeeGo or Android) and the plasmoids can be controlled remotely, you can manage your home from work or while you are on holiday. This is the future, guys, and we are ready.
This is wonderful, but how does it work? I'm going to ask the leader of the project, Dr. Emmett Brown:
L.T. - First of all, can you explain in general terms what the new idea of the desktop is?
E.B. - Well, users want to have everything handy. Years ago a computer's desktop was used only for putting files or, generally, objects on it. But at a real desk you also work, right? So why not do the same thing with the computer's desktop? The main use of computers today is internet browsing (watching videos, staying in touch with other people, managing photos, etc.), so we decided to let the user do all these activities directly from the desktop. It's also very simple because, fundamentally, every activity is done using plasmoids: they have replaced the old big program windows.
L.T. - Anyway, the most interesting part is the artificial intelligence brain...
E.B. - Sure... I can say we have developed the first intelligent desktop environment. An engine analyzes the text written by the user and tries to understand what they are saying; then it tells the system what should be done. The engine also recognizes voice, so the user can speak instead of typing. The usual problem is that a computer uses a very simple and precise language, while human languages are really complex: they use a great number of words, and a sentence can have different meanings in different situations. The problem can be solved with an artificial intelligent brain that takes the user's sentence, deletes all the parts that are not strictly necessary to understand the meaning, and analyzes the sentence grammatically, recognizing the function of each word with the help of a vocabulary (it checks whether a word is a verb or a noun, for example). Then it links all this information together to read the sentence logically (to understand whether a noun is a subject or an object, which noun an adjective refers to, etc.). Finally, the engine looks up a list (constantly updated with the corrections made by the user) to match the meaning of the user's sentence with the action the computer must perform. It's just like a human brain, and like all humans it can sometimes get something wrong, but if the user corrects it, the computer will (hopefully) not make the same mistake twice.
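The pipeline Dr. Brown describes can be sketched in a few lines: drop filler words, identify the verb and its object through a small vocabulary, and map them onto an action list that user corrections can extend. Everything here (the filler set, vocabulary, action templates and function names) is a hypothetical toy, not the actual engine.

```python
# Toy sketch of the described pipeline: filter filler words, tag the
# remaining ones with a vocabulary, and match a verb/object pair to an
# action template. All names and tables here are invented for illustration.
FILLER = {"please", "can", "you", "hi", "computer", "the", "a", "an", "to"}
VOCABULARY = {"install": "verb", "upload": "verb",
              "scribus": "noun", "cat.jpg": "noun"}
ACTIONS = {"install": "pkg-install {obj}", "upload": "net-upload {obj}"}

def interpret(sentence):
    """Turn a natural-language request into a system command, or None."""
    words = [w.strip("?,.!").lower() for w in sentence.split()]
    words = [w for w in words if w not in FILLER]          # drop filler parts
    verb = next((w for w in words if VOCABULARY.get(w) == "verb"), None)
    obj = next((w for w in words if VOCABULARY.get(w) == "noun"), None)
    if verb in ACTIONS and obj:
        return ACTIONS[verb].format(obj=obj)
    return None                                            # not understood

def correct(verb, template):
    """User correction: teach the engine a new verb-to-action mapping."""
    VOCABULARY[verb] = "verb"
    ACTIONS[verb] = template
```

For example, `interpret("Please, can you install Scribus?")` reduces the sentence to `install scribus` and resolves it to an install command; a call to `correct()` lets the engine learn verbs it initially got wrong, mirroring the constantly updated list mentioned above.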