
Tuesday, April 26, 2011

The History of Computers from 1st till 2011


Computers have wedged themselves into every facet of our lives, and they have become the defining symbol of the modern world.
But did you know that the history of computers dates back to the 1800s?
Indeed, the history and evolution of computers is quite extraordinary, and because many early computing innovations were tied to defense contracts, much of this information was kept secret from the public for decades. In this article, we explore the development and progression of computers.

Mid-1800s-1930s: Early Mechanical Computers

The first computers were designed by Charles Babbage in the mid-1800s, and are sometimes collectively known as the Babbage Engines. These include the Difference Engine No. 1, the Analytical Engine, and the Difference Engine No. 2.

Difference Engine No. 2

The Difference Engine was constructed from designs by Charles Babbage. Photo by Allan J. Cronin
These early computers were never completed during Babbage’s lifetime, but their complete designs were preserved. Eventually, one was built in 2002.
While these early mechanical computers bore little resemblance to the computers in use today, they paved the way for a number of technologies that are used by modern computers, or were instrumental in their development. These concepts include the idea of separating storage from processing, the logical structure of computers, and the way that data and instructions are input and output.
Z1
Other important mechanical computers are the Automatic Electrical Tabulating Machine—which was used in the U.S. Census of 1890 to handle data from more than 62 million Americans—and the first binary computer: Konrad Zuse’s Z1, which was developed in 1938 and was the precursor to the first electro-mechanical computer.

1930s: Electro-Mechanical Computers

Electro-mechanical computers generally worked with relays and/or vacuum tubes, which could be used as switches.
Some electro-mechanical computers—such as the Differential Analyzer built in 1930—used purely mechanical internals but employed electric motors to power them.
These early electro-mechanical computers were either analog or digital. Digital examples include the Model K and the Complex Number Calculator, both produced by George Stibitz.
Stibitz, by the way, was also responsible for the first demonstration of remote-access computing, at a conference at Dartmouth College in New Hampshire. He brought a teleprinter to the conference while his computer remained in New York City, took problems posed by the audience, and entered them on the teleprinter's keypad; the answers were printed back out on the teleprinter.
Z3

It was during the development of these early electro-mechanical computers that many of the technologies and concepts still used today were first developed. The Z3, a descendant of the Z1 developed by Konrad Zuse, was one such pioneering computer. The Z3 used floating-point numbers in computations and was the first program-controlled digital computer.
Other electro-mechanical computers included Bombes, which were used during WWII to decrypt German codes.

1940s: Electronic Computers

Colossus
The first electronic computers were developed during World War II, with the earliest of these being the Colossus. The Colossus was developed to decrypt secret German codes during the war. It used vacuum tubes and paper tape and could perform a number of Boolean (e.g. true/false, yes/no) logical operations.

Williams Tube

Another notable early electronic computer was nicknamed "The Baby" (officially known as the Manchester Small-Scale Experimental Machine). While the computer itself wasn't otherwise remarkable, it was the first computer to use the Williams Tube, a type of random access memory (RAM) based on a cathode-ray tube.
Some early electronic computers used decimal numeric systems (such as the ENIAC and the Harvard Mark 1), while others—like the Atanasoff-Berry Computer and the Colossus Mark 2—used binary systems. With the exception of the Atanasoff-Berry Computer, all the major models were programmable, either using punch cards, patch cables and switches, or through stored programs in memory.

1950s: The First Commercial Computers

The first commercially available computers came in the 1950s. While computing up until this time had mainly focused on scientific, mathematical, and defense capabilities, new computers were designed for business functions, such as banking and accounting.
The J. Lyons Company, which was a British catering firm, invested heavily in some of these early computers. In 1951, LEO (Lyons Electronic Office) became the first computer to run a regular routine office job. By November of that year, they were using the LEO to run a weekly bakery valuations job.
UNIVAC

The UNIVAC was the first mass-produced computer.
The UNIVAC was the first commercial computer developed in the U.S., with its first unit delivered to the U.S. Census Bureau. It was the first mass-produced computer, with more than 45 units eventually produced and sold.
The IBM 701 was another notable development in early commercial computing; it was the first mainframe computer produced by IBM. It was around the same time that the Fortran programming language was being developed (for the IBM 704).

IBM 650

The IBM 650 would cost almost $4 million in today's dollars.
A smaller IBM 650 was developed in the mid-1950s, and was popular due to its smaller size and footprint (it still weighed over 900kg, with a separate 1350kg power supply).
They cost the equivalent of almost $4 million today (adjusted for inflation).

Mid-1950s: Transistor Computers

The development of transistors led to the replacement of vacuum tubes, and resulted in significantly smaller computers. In the beginning, transistors were less reliable than the vacuum tubes they replaced, but they consumed significantly less power.
RAMAC

These transistors also led to developments in computer peripherals. The first disk drive, the IBM 350 RAMAC, was introduced in 1956. Remote terminals also became more common with these second-generation computers.

1960s: The Microchip and the Microprocessor

The microchip (or integrated circuit) is one of the most important advances in computing technology. Microchip-based and transistor-based computers overlapped considerably throughout the 1960s, and even into the early 1970s.


The microchip spurred the production of minicomputers and microcomputers, which were small and inexpensive enough for small businesses and even individuals to own. The microchip also led to the microprocessor, another breakthrough technology that was important in the development of the personal computer.
There were three microprocessor designs that came out at about the same time. The first was produced by Intel (the 4004). Soon after, models from Texas Instruments (the TMS 1000) and Garrett AiResearch (the Central Air Data Computer, or CADC) followed.
The first processors were 4-bit, but 8-bit models quickly followed by 1972.
16-bit models were produced in 1973, and 32-bit models soon followed. AT&T Bell Labs created the first fully 32-bit single-chip microprocessor, which used 32-bit buses, 32-bit data paths, and 32-bit addresses, in 1980.
The first 64-bit microprocessors were in use in the early 1990s in some markets, though they didn’t appear in the PC market until the early 2000s.

1970s: Personal Computers

The first personal computers were built in the early 1970s. Most of these were limited-production runs, and worked based on small-scale integrated circuits and multi-chip CPUs.



The Altair 8800 was the first popular computer using a single-chip microprocessor. It was also sold in kit form to electronics hobbyists, meaning purchasers had to assemble their own computers.
Clones of this machine quickly cropped up, and soon there was an entire market based on the design and architecture of the 8800. It also spawned a club based around hobbyist computer builders, the Homebrew Computer Club.
1977 saw the rise of the "Trinity" (based on a reference in Byte magazine): the Commodore PET, the Apple II, and the Tandy Corporation’s TRS-80. These three computer models eventually went on to sell millions.
These early PCs had between 4kB and 48kB of RAM. The Apple II was the only one with a full-color, graphics-capable display, and eventually became the best-seller among the trinity, with more than 4 million units sold.

1980s-1990s: The Early Notebooks and Laptops

One particularly notable development in the 1980s was the advent of the commercially available portable computer.


The first of these was the Osborne 1, in 1981. It had a tiny 5" monitor and was large and heavy compared to modern laptops (weighing in at 23.5 pounds). Portable computers continued to develop, though, and eventually became streamlined and easily portable, as the notebooks we have today are.
These early portable computers were portable only in the most technical sense of the word. Generally, they were anywhere from the size of a large electric typewriter to the size of a suitcase.


The first laptop with a flip form factor was produced in 1982, but the first portable computer actually marketed as a "laptop" was the Gavilan SC in 1983.
Early models had monochrome displays, though there were color displays available starting in 1984 (the Commodore SX-64).
Laptops grew in popularity as they became smaller and lighter. By 1988, displays had reached VGA resolution, and by 1993 they had 256-color screens. From there, resolutions and colors progressed quickly. Other hardware features added during the 1990s and early 2000s included high-capacity hard drives and optical drives.


Laptops are generally broken down into three different categories:
  • Desktop replacements
  • Standard notebooks
  • Subnotebooks
Desktop replacements are usually larger, with displays of 15-17" and performance comparable with some better desktop computers.
Standard notebooks usually have displays of 13-15" and are a good compromise between performance and portability.
Subnotebooks, including netbooks, have displays smaller than 13" and fewer features than standard notebooks.

2000s: The Rise of Mobile Computing

Mobile computing is one of the most recent major milestones in the history of computers.
Many smartphones today have higher processor speeds and more memory than desktop PCs had even ten years ago. With phones like the iPhone and the Motorola Droid, it’s becoming possible to perform most of the functions once reserved for desktop PCs from anywhere.

Mobile computing really got its start in the 1980s, with the pocket PCs of the era. These were something like a cross between a calculator, a small home computer, and a PDA. They largely fell out of favor by the 1990s. During the 1990s, PDAs (Personal Digital Assistants) became popular.
A number of manufacturers had models, including Apple and Palm. The main feature PDAs had that not all pocket PCs had was a touchscreen interface. PDAs are still manufactured and used today, though they’ve largely been replaced by smartphones.
Smartphones have truly revolutionized mobile computing. Most basic computing functions can now be done on a smartphone, such as email, browsing the internet, and uploading photos and videos.

Late 2000s: Netbooks

Another recent progression in computing history is the development of netbook computers. Netbooks are smaller and more portable than standard laptops, while still being capable of performing most functions average computer users need (using the Internet, managing email, and using basic office programs). Some netbooks go as far as to have not only built-in WiFi capabilities, but also built-in mobile broadband connectivity options.

The first mass-produced netbook was the Asus Eee PC 700, released in 2007. It was originally released in Asia and arrived in the US not long afterward.
Other manufacturers quickly followed suit, releasing additional models throughout 2008 and 2009.
One of the main advantages of netbooks is their lower cost (generally ranging from around US$200-$600). Some mobile broadband providers have even offered netbooks for free with an extended service contract. Comcast also had a promotion in 2009 that offered a free netbook when you signed up for their cable internet services.
Most netbooks now come with Windows or Linux installed, and soon, there will be Android-based netbooks available from Asus and other manufacturers.
The history of computing spans nearly two centuries at this point, much longer than most people realize. From the mechanical computers of the 1800s to the room-sized mainframes of the mid-20th century, all the way up to the netbooks and smartphones of today, computers have evolved radically throughout their history.
The past 100 years have brought technological leaps and bounds to computing, and there’s no telling what the next 100 years might bring.

Sunday, April 24, 2011

More Windows 8 leaks reveal cloud syncing


More details about Microsoft's upcoming Windows 8 have apparently leaked onto the Web, revealing new surprises and some makeovers to existing features.
A greater peek into the next version of Windows is coming from Windows 8 Milestone 1, also known as Build 7850, which started popping up at various FTP and file-sharing sites last week. This build was released to Microsoft partners late last year. Milestone 2 has since been released, while Milestone 3 has reportedly been compiled. But combined with details from more recent builds, Windows users can get a glimpse into what the future may hold.

One new feature, reportedly first revealed by the Italian Web site Windows8Italia.com (Google Translate version), is a cloud synchronization tool. The feature would let users sync files and folders from their local PCs with a cloud-based service. Liveside.net sees this as confirmation that Windows 8 will include integration with Windows Live accounts, though it's as yet unknown whether this would directly tap into Windows Live SkyDrive to store data.
Another feature, also reportedly uncovered by Windows8Italia.com (Google Translate version), is a hybrid boot option. This feature is designed to reduce your PC's boot-up time after you shut it down by putting it into a form of hibernation. CNET sister site ZDNet reports on this as an evolution of the current hibernation function that puts the core operating system files to sleep. Windows 8 Center sees this as a setting for more advanced users and one that changes the shutdown process more than the start-up routine.



Windows 8's hybrid boot option.
Yet another new feature revealed would allow users to carry Windows 8 with them on a USB drive. As described by Windows 8 Center, this portable workspace, or Windows on a stick option, would let people load a bootable copy of Windows 8 from a USB device and may be geared toward business users who could take their corporate version of Windows 8 with them from the job to work at home.


Windows 8 on a stick.

Beyond these and other new features, Windows 8 users should also see improvements to some old standbys.
The Windows Task Manager, which lets you view and manage open programs and processes, will apparently get a much-needed retrofit. Combining the current Task Manager with Microsoft's Resource Monitor, according to WinRumors, the new and advanced Task Manager would provide more details on running processes and system resources and offer a better way to shut down a misbehaving application.
The Windows Disk Cleanup tool, which can help get rid of unnecessary or duplicate files, will also apparently be beefed up. Based on information and a screenshot published by Windows 8 Center, the revamped tool would provide users with more options for cleaning up their drives and show how much space is taken up by specific types of files, such as videos and music.


Windows 8's revamped Disk Cleanup tool.

Ubuntu is a secure, intuitive operating system that powers desktops, servers, netbooks and laptops.

Super-fast and great-looking, Ubuntu is a secure, intuitive operating system that powers desktops, servers, netbooks and laptops. Ubuntu is, and always will be, absolutely free.



Created by the best open-source experts from all over the world, Ubuntu is available in 24 languages and ready for download today.

 







Ubuntu Linux Vs. Windows Vista: The Battle For Your Desktop

Is Linux finally ready to take on Windows as a desktop OS? We tried out both Vista and Ubuntu on individual PCs to see which works better. Here's who won.  

For some time, the folks at Ubuntu have been trying their best to make Granny -- and most everyone else -- happy. They've attempted to build a Linux distribution that's easy to install, use, configure, and maintain -- one that's at least as easy as Windows, and whenever possible, even easier. As a result, Ubuntu is one of the Linux distributions that has been most directly touted as an alternative to Windows. 

In this feature, I'm going to compare the newly-released Ubuntu 7.04 (codenamed "Feisty Fawn") with Microsoft Windows Vista in a number of categories. To keep the playing field as level as possible, I'm looking wherever I can at applications -- not just in the sense of "programs," but in the sense of what the average user is going to do with the OS in a workday. Sometimes the differences between the two OSes are profound, but sometimes the playing field levels itself -- OpenOffice.org, for instance, is installed by default in Ubuntu, but adding it to Vista isn't terribly difficult.

I tried to stick whenever possible with preinstalled software, although this rule sometimes had to be bent a little -- for instance, to see what backup solutions were available for Ubuntu through its own software catalog.

Also, while I was tempted to compare Vista's Aero interface to the Beryl window manager (which has a similar palette of visual effects), I decided that pretty graphics, while nice, had more to do with personal preference than efficiency. In addition, Beryl isn't installed by default in Ubuntu, and Aero isn't available on all PCs.
In each case, I've tried to look at practical benefits rather than theoretical ones -- what works, what doesn't, and what you have to do to get certain things done. I should also note that, despite being a big fan of Vista, I've tried to keep my enthusiasm for it from overriding my judgment. Everyone needs something different, and not everyone needs (or wants) Vista -- or Ubuntu -- so I've done my best to keep my mind, and my eyes, wide open.
  

Saturday, April 23, 2011

New Version Of C++: C++0x




The work on C++0x has entered a decisive phase. The ISO C++ committee aims for C++0x to become C++09. It follows that the standard must be complete for ratification by the ISO member nations in 2008. The set of facilities offered will be chosen from those currently being considered. To finish in time, the committee has stopped looking for new proposals and concentrates on the ones already being considered.
This paper briefly outlines the guiding principles of the work on C++0x, presents a few examples of likely language extensions, and lists some proposed new standard libraries.


Design and Evolution of C++ - Bjarne Stroustrup (creator of C++) & Herb Sutter (Microsoft, convener of the ISO C++ standards committee)






An Overview of the Coming C++ (C++0x) Standard







Language Features

Let's see how code using new C++0x features might look:
template<class T> using Vec = vector<T,My_alloc<T>>;
Vec<double> v = { 2.3, 1.2, 6.7, 4.5  };
sort(v);
for(auto p = v.begin(); p!=v.end(); ++p)
    cout << *p << endl;
Each line except the last is illegal in C++98, and in C++98 we'd have to write more (error-prone) code to get the work done. I hope you can guess the meaning of this code without explanation, but let's look at each line individually.
template<class T> using Vec = vector<T,My_alloc<T>>;
Here, we define Vec<T> to be an alias of vector<T,My_alloc<T>>. That is, we define a vector called Vec that works exactly like vector except that it uses my allocator (My_alloc) rather than the default allocator. The ability to define such aliases and to bind some but not all parameters of a template has been missing from C++. It has traditionally been referred to as "template typedefs", because typedef is what we typically use for defining type aliases, but for technical reasons we preferred using. One advantage of this syntax is that it introduces the name being defined where it is easy for the human reader to spot. Note also another detail. I didn't write
template<class T> using Vec = vector< T,My_alloc<T> >;
It will no longer be necessary to add that space between the terminating >'s. These two extensions have already been accepted in principle.
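As a rough sketch of what the alias buys us, here is the closest C++98 workaround next to the C++0x form. My_alloc is only a stand-in alias for std::allocator here so the fragment is self-contained and compiles (as C++0x); in the article it is an arbitrary user-defined allocator.
#include <memory>
#include <vector>

// Stand-in for the article's My_alloc, just so this sketch compiles on its own.
template<class T> using My_alloc = std::allocator<T>;

// C++98 workaround: the "template typedef" idiom, a nested typedef in a helper struct.
template<class T> struct Vec98 {
    typedef std::vector<T, My_alloc<T> > type;
};
Vec98<double>::type v98;      // note the awkward ::type at every point of use

// C++0x alias from the article: binds the allocator parameter once and for all.
template<class T> using Vec = std::vector<T, My_alloc<T>>;
Vec<double> v0x;              // reads like an ordinary type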
Next we define and initialize a Vec:

Vec<double> v = { 2.3, 1.2, 6.7, 4.5  };
Initializing a user-defined container (vector<double,My_alloc<double>>) with an initializer list is new. In C++98, we can only use such initializer lists for aggregates (arrays and classic structs). Exactly how this extension will be achieved is still being discussed, but the solution will most likely involve a new kind of constructor, a "sequence constructor". Allowing the above implies that C++ better meets one of its fundamental design criteria: support user-defined and built-in types equally well. In C++98, arrays have a notational advantage over vectors. In C++0x, that will no longer be the case.
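A minimal, self-contained contrast of the two styles (using the default allocator rather than the article's My_alloc):
#include <vector>

int main() {
    // C++98: no brace initializers for user-defined containers;
    // elements have to be pushed in one at a time.
    std::vector<double> v98;
    v98.push_back(2.3); v98.push_back(1.2); v98.push_back(6.7); v98.push_back(4.5);

    // C++0x: the brace form works via the container's new
    // initializer-list ("sequence") constructor.
    std::vector<double> v0x = { 2.3, 1.2, 6.7, 4.5 };
}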


Next, we sort the vector:
sort(v); 
To do that within the framework of the STL we must overload sort for containers and for iterators. For example:
template<Container C> // sort container using <
    void sort(C& c);
      
template<Container C, Predicate Cmp> // sort container using Cmp
    where Can_call_with<Cmp,typename C::value_type>
    void sort(C& c, Cmp less);
      
template<Random_access_iterator Ran> // sort sequence using <
    void sort(Ran first, Ran last);
      
template<Random_access_iterator Ran, Predicate Cmp> // sort sequence using Cmp 
    where Can_call_with<Cmp,typename Ran::value_type>
    void sort(Ran first, Ran last, Cmp less);
This illustrates the most significant proposed C++0x language extension that is likely to be accepted: concepts. Basically, a concept is the type of a type; it specifies the properties required of a type. In this case, the concept Container is used to specify that the first two versions of sort need an argument that meets the standard library container requirements. The where-clauses are used to specify the required relationship between the template arguments: that the predicates can be applied to the containers' element types. Given concepts, we can provide far better error messages than is currently possible and distinguish between templates taking the same number of arguments, such as
sort(v, Case_insensitive_less());   // container and predicate
and
sort(v.begin(), v.end());           // two random access iterators
The difficulty in the design of "concepts" is to maintain the flexibility of templates, so that we don't require template arguments to fit into class hierarchies or require all operations to be accessed through virtual functions (as for Java and C# generics). In "generics", an argument must be of a class derived from an interface (the C++ equivalent to "interface" is "abstract class") specified in the definition of the generic. That means that all generic argument types must fit into a hierarchy. That imposes unnecessary constraints on designs and requires unreasonable foresight on the part of developers. For example, if you write a generic and I define a class, people can't use my class as an argument to your generic unless I knew about the interface you specified and had derived my class from it. That's rigid. There are workarounds, of course, but they complicate code. Another problem is that you cannot use built-in types directly with generics, because built-in types, such as int, are not classes and don't have the functions required by interfaces specified by a generic; you then have to make wrapper classes for holding built-in types and access elements indirectly through pointers. Also, the typical operation on a generic is implemented as a virtual function call. That can be very expensive (compared to just using a simple built-in operation, such as + or <). Implemented that way, generics are simply syntactic sugar for abstract classes.
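To make the contrast concrete, here is a tiny compilable illustration (my own example, not from the committee papers) of the flexibility being defended: a template accepts built-in types and any type supplying the needed operations, with no interface inheritance, no boxing wrappers, and no virtual calls.
#include <string>

// Works for int, double, std::string, or any type with operator< defined;
// no common base class is required.
template<class T> T max_of(T a, T b) { return a < b ? b : a; }

int main() {
    int         i = max_of(2, 3);                                  // built-in type, used directly
    double      d = max_of(2.5, 1.5);                              // another built-in type
    std::string s = max_of(std::string("ab"), std::string("cd"));  // user-defined type
    (void)i; (void)d; (void)s;   // silence unused-variable warnings
}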
Given "concepts", templates will retain their flexibility and performance. There is still much work left before the committee can accept a specific and detailed concept design. However, concepts are a most likely extension because they promise significantly better type checking, much better error messages, and greater expressive power. That should lead to significantly better library interfaces, starting with the current standard containers, iterators, and algorithms.
Finally, consider the last line that outputs the elements of our vector:
for (auto p = v.begin(); p!=v.end(); ++p)
    cout << *p << endl; 
The difference from C++98 here is that we don't have to mention the type of the iterator: auto means "deduce the type of the declared variable from the initializer". Such uses of auto are far less verbose and also less error-prone than current alternatives, such as:
for (vector< double, My_alloc<double> >::const_iterator p = v.begin(); p!=v.end(); ++p)
    cout << *p << endl; 
The new language features mentioned here are all aimed at simplifying generic programming. The reason is that generic programming has become so popular that it seriously strains the language facilities. Many "modern" generic programming techniques border on "write only" techniques and threaten to isolate their users. To make generic programming mainstream, as object-oriented programming was made mainstream, we must make template code easier to read, write, and use. Many current uses are too clever for their own good. Good code is simple (relative to what it is trying to do), easy to check, and easy to optimize (i.e., efficient). This implies that a wide range of simple ideas can be expressed simply in C++0x and that the resulting code is uncompromisingly efficient. The former is not the case in C++98, at least not for a sufficiently large range of techniques relying on templates. Better type checking and more extensive use of type information to shorten code will make code shorter and clearer, easier to maintain, and more likely to be correct.

Library Facilities

Ideally, we'd leave the C++ language mostly unchanged and focus on adding standard libraries. However, libraries that are sufficiently general to be standard are not easy to design, and the standards committee is, as usual, short of resources. We are a relatively small group of volunteers and all have "day jobs". This puts unfortunate limits on how adventurous we can be with new libraries. On the other hand, the committee started early, and a technical report on libraries ("The Library TR") was recently approved by vote. It provides several facilities that are directly useful to programmers:
  • Hash Tables
  • Regular Expressions
  • General Purpose Smart Pointers
  • Extensible Random Number Facility
  • Mathematical Special Functions
I particularly appreciate having standard versions of regular expression matching and hash tables (called unordered_maps) available; a brief usage sketch follows the list below. In addition, the Library TR provides extensive facilities for builders of generic libraries building on the STL:
  • Polymorphic Function Object Wrapper
  • Tuple Types
  • Type Traits
  • Enhanced Member Pointer Adaptor
  • Reference Wrapper
  • Uniform Method for Computing Function Object Return Types
  • Enhanced Binder
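As a small taste of the first two facilities, here is a sketch using the names these components ended up with in C++0x, std::unordered_map and std::regex (in the Library TR itself they live in namespace std::tr1):
#include <iostream>
#include <regex>
#include <string>
#include <unordered_map>

int main() {
    // Hash table: average constant-time lookup, no ordering guarantee.
    std::unordered_map<std::string, int> age;
    age["Ada"] = 36;
    age["Charles"] = 79;
    std::cout << "Ada is " << age["Ada"] << "\n";

    // Regular expressions: check that a string looks like a simple identifier.
    std::regex ident("[A-Za-z_][A-Za-z0-9_]*");
    std::cout << std::boolalpha << std::regex_match("Vec_98", ident) << "\n";  // prints: true
}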
This is not the place to go into details about these libraries or into the further facilities that the committee would like to provide. If you are interested, I suggest you look at the proposals on the WG21 site (see "information sources" below), the libraries "wish list" (on my home pages), and the BOOST libraries (www.boost.org). I personally would like to see more libraries that are immediately useful to applications builders, such as Beman Dawes' library for manipulating files and directories (currently a BOOST library) and a socket library.
The list of proposals is still quite modest and not anywhere near as ambitious as I'd like. However, more proposals from the committee's large backlog of suggestions are being considered, and more libraries will appear either as part of the C++0x standard itself or as further committee technical reports. Unfortunately, lack of resources (time, money, skills, people, etc.) will continue to limit progress in this direction.
Sadly, I cannot offer hope for the most frequently wished for new standard library: a standard GUI library. A GUI library is simply too large a task for the volunteers of the C++ standards committee to handle and too difficult a task given the many (non-standard but huge, useful, and supported) GUI libraries available. Please notice that even though they are not standard, the major C++ GUIs have more users than most programming languages and are often better supported.
In addition to these general-purpose libraries, the committee presented a library interface to the most basic level of hardware in its "Performance TR". That TR is primarily aimed to help embedded systems programmers and to disprove myths about poor performance of C++ code and about C++ being unsuitable for low-level tasks.

Putting It All Together

"Drawing all shapes in an array" is a classical example of object-oriented programming (going back to the early Simula days). Using generic programming, we can generalize that to drawing each element of any container holding (pointers to) shapes:
template<Container C>
void draw_all(C& c) 
where Usable_as<typename C::value_type,Shape*>
{
    for_each(c, mem_fun(&Shape::draw));
}
In C++0x, we hope to have Container as a standard concept and Usable_as as a standard predicate. The for_each algorithm is already in C++98, but the version that takes a container (rather than a pair of iterators) will have to wait for concepts in C++0x (a plain C++98 sketch using the iterator-pair for_each appears after the test code below). The where-clause is a mechanism through which an algorithm can express requirements on its arguments. Here, draw_all() requires (obviously) that the elements of the container must be usable as (implicitly convertible to) Shape*. In this case, the where-clause gives us a degree of flexibility/generality not offered by simply requiring a container of Shape*'s. In addition to any container of Shape*'s, we can use any container with elements that can be used as Shape*'s, such as a list<shared_ptr<Shape>> (where shared_ptr is a likely C++0x standard library class) or a container of pointers to a class derived from Shape, such as deque<Circle*>. Assuming that we have points p1, p2, and p3, we can test draw_all() like this:
vector<Shape*> v = {
    new Circle(p1,20),
    new Triangle(p1,p2,p3),
    new Rectangle(p3,30,20)
};

draw_all(v);

list<shared_ptr<Shape>> v2 = {
    new Circle(p1,20),
    new Triangle(p1,p2,p3),
    new Rectangle(p3,30,20)
};

draw_all(v2);
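For comparison, here is roughly what the same job looks like in plain C++98 with the iterator-pair for_each that already exists. This is my own sketch, with a minimal Shape hierarchy assumed, not code from the article or the committee papers.
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

struct Shape  { virtual void draw() = 0; virtual ~Shape() {} };
struct Circle : Shape { void draw() { std::cout << "Circle\n"; } };

// C++98 version: no concepts, no where-clause; the body must spell out the
// iterator pair because for_each has no container overload yet.
void draw_all_98(std::vector<Shape*>& c) {
    std::for_each(c.begin(), c.end(), std::mem_fun(&Shape::draw));
}

int main() {
    std::vector<Shape*> v;
    v.push_back(new Circle);
    draw_all_98(v);
    delete v[0];   // no standard shared_ptr in C++98 proper, so clean up by hand
}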
The "draw all shapes" example is important because when you can do that well, you can do much of what�s key to object-oriented programming. As written here, the example demonstrates the power of multi-paradigm programming by also employing generic programming (concepts and templates), conventional programming (e.g. the free-standing standard-library function mem_fun()), and simple data abstraction (the function object returned by mem_fun()). Thus, this simple example opens the door to a host of elegant and efficient programming techniques.I hope that after looking a bit at this example, your reaction will be "How simple!" rather than "How clever! How advanced!" In my opinion, many people are trying too hard to be clever and advanced. The real aim of design and programming is to produce the simplest solution that does the job and express it in the clearest possible way. The aim of the C++0x design is to better support such simple solutions. 

Information Sources

My web pages (http://www.research.att.com/~bs) contain much useful information. There you will find information about my own work (books, articles, interviews, FAQs, etc.) and links to sources that I find most helpful, such as a list of interesting C++ applications, a list of C++ compilers, and links to useful libraries (e.g., BOOST). In connection with C++0x, you can find:
  • "Wish lists" for language features and library facilities
  • The Standard: ISO/IEC 14882 - International Standard for Information Systems - Programming Language C++
  • The Performance TR: ISO/IEC PDTR 18015—Technical Report on C++ Performance.
  • The Library TR: JTC1.22.19768 ISO/IEC TR 19768—C++ Library Extensions.
  • A link to the WG21 (ISO C++ Standards Committee) site, where you can find all the proposals being considered
  • A page with some of my proposals (including "concepts") to the committee. (Please remember that not all proposals are accepted and that essentially all proposals that are accepted incorporate major changes and improvements before acceptance.)

New Phone From Google: Nexus S




Nexus S is the first phone to run Gingerbread, the fastest version of Android yet.

Gingerbread builds on some of the most popular Android features like multi-tasking and Wi-Fi hotspot and adds a refreshed user interface, an improved keyboard, near field communication (NFC) support, and more.

Internet calling (VoIP/ SIP support)

Gingerbread allows Nexus S to place Internet calls with a SIP account. This allows for enhanced VoIP dialing to other SIP accounts and even phone numbers.




One-touch word selection and improved copy-paste
With Gingerbread, copying and pasting text is easier than ever. Use the new text selection tool to copy and paste when entering text or viewing a webpage.
