Monday, 28 April 2014

CLOUD COMPUTING




Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices to handle applications.
In cloud computing, the word cloud (also phrased as "the cloud") is used as a metaphor for "the Internet," so the phrase cloud computing means "a type of Internet-based computing," where different services, such as servers, storage and applications, are delivered to an organization's computers and devices through the Internet.
Cloud computing is comparable to grid computing, a type of computing where the unused processing cycles of all the computers in a network are harnessed to solve problems too intensive for any stand-alone machine.

The world of the cloud has lots of participants:
·         The end user, who doesn't have to know anything about the underlying technology.
·         Business management, who take responsibility for the governance of the data and services living in a cloud.
·         The cloud service provider, who is responsible for the IT assets and their maintenance, and who must deliver a predictable, guaranteed service level and security to all constituents.



Advantages of cloud computing

1.  Worldwide Access. Cloud computing increases mobility, as you can access your documents from any device in any part of the world. For businesses, this means that employees can work from home or on business trips without having to carry documents around. This increases productivity and allows faster exchange of information. Employees can also work on the same document without having to be in the same place.
2.  More Storage. In the past, memory was limited by the particular device in question. If you ran out of memory, you would need a USB drive to back up your current device. Cloud computing provides increased storage, so you won't have to worry about running out of space on your hard drive.
3.  Easy Set-Up. You can set up a cloud computing service in a matter of minutes. Adjusting your individual settings, such as choosing a password or selecting which devices you want to connect to the network, is similarly simple. After that, you can immediately start using the resources, software, or information in question.
4.  Automatic Updates. The cloud computing provider is responsible for making sure that updates are available; you just have to download them. This saves you time, and you don't need to be an expert to update your device: the cloud computing provider will automatically notify you and provide instructions.
5.  Reduced Cost. Cloud computing is often inexpensive. The software is already installed online, so you won't need to install it yourself. Numerous cloud computing applications, such as Dropbox, are available for free, and increasing storage size and memory is affordable. If you do need to pay for a cloud computing service, it is billed incrementally on a monthly or yearly basis. By choosing a plan with no contract, you can terminate your use of the services at any time, so you only pay for the services when you need them (a rough cost sketch follows this list).
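As a rough illustration of this pay-as-you-go idea, the tiny sketch below compares a no-contract monthly plan with an up-front purchase. The prices are invented for illustration and do not come from any real provider.

```python
# Hypothetical figures for illustration only -- not real provider pricing.
UPFRONT_SERVER_COST = 3000.0   # buying and hosting your own server (one-off)
CLOUD_MONTHLY_FEE = 40.0       # no-contract cloud subscription, per month

def cloud_cost(months_used: int) -> float:
    """Total spend on the cloud plan: you pay only for the months you need."""
    return CLOUD_MONTHLY_FEE * months_used

for months in (3, 12, 36):
    print(f"{months:>2} months: cloud = ${cloud_cost(months):8.2f}, "
          f"own server = ${UPFRONT_SERVER_COST:8.2f}")
```

The shorter your actual need, the more the incremental model pays off; for very long, steady workloads the comparison can tip the other way.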


The working principle of cloud computing

The working principle of cloud computing is much like that of any other computer; the difference is that in cloud computing your machine is coupled to other computers over a network. On a regular computer, the files produced by the software we use are stored on a hard disk or another local storage medium. In cloud computing, seen from the user's side, the files from the software we use live on another computer.
In other words, we are connected to multiple server computers on a network, but the data we store sits in a data center, so not only can we open the files we save, other users and computers can open them too, and vice versa (when shared publicly). There is also a large pool of server infrastructure we can draw on, and we pay only for what we need.

Characteristics of cloud computing

1. On-demand self-service. This means provisioning or de-provisioning computing resources as needed in an automated fashion without human intervention. An analogy to this is electricity as a utility where a consumer can turn on or off a switch on-demand to use as much electricity as required.
2. Ubiquitous network access. This means that computing facilities can be accessed from anywhere over the network using any sort of thin or thick clients (for example smartphones, tablets, laptops, personal computers and so on).
3. Resource pooling. This means that computing resources are pooled to meet the demand of the consumers so that resources (physical or virtual) can be dynamically assigned, reassigned or de-allocated as per the requirement. Generally the consumers are not aware of the exact location of computing resources. However, they may be able to specify location (country, city, region and the like) for their need. For example, I as a consumer might want to host my services with a cloud provider that has cloud data centers within the boundaries of Australia.
4. Rapid elasticity. Cloud computing provides an illusion of infinite computing resources to the users. In cloud models, resources can be elastically provisioned or released according to demand. For example, my cloud-based online services should be able to handle a sudden peak in traffic demand by expanding the resources elastically. When the peak subsides, unnecessary resources can be released automatically.
5. Measured service. This means that consumers only pay for the computing resources they have actually used. The concept is similar to utilities like water or electricity. (A small sketch of these ideas follows this list.)
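A minimal sketch of the "rapid elasticity" idea, in Python: capacity follows demand automatically. The per-instance capacity and the load figures are invented for illustration only.

```python
# Rapid elasticity sketch: scale out when load rises, release capacity when it falls.
# REQUESTS_PER_INSTANCE is an assumed, illustrative figure.
REQUESTS_PER_INSTANCE = 100   # requests/minute one instance can serve

def instances_needed(requests_per_minute: int) -> int:
    """Ceiling division: always keep at least one instance running."""
    return max(1, -(-requests_per_minute // REQUESTS_PER_INSTANCE))

for load in (50, 250, 1200, 300):   # a traffic peak arrives, then subsides
    print(f"load = {load:>4} req/min -> {instances_needed(load)} instance(s)")
```

When the peak subsides, the surplus instances are released, which is exactly what "measured service" then turns into a smaller bill.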

SECURITY

Security. When using a cloud computing service, you are essentially handing your data over to a third party. The fact that this entity, along with users from all over the world, accesses the same servers can raise security issues. Companies handling confidential information might be particularly concerned about using cloud computing, as data could be harmed by viruses and other malware. That said, some services, such as Google Cloud Connect, come with customizable spam filtering, email encryption, and SSL enforcement for secure HTTPS access, among other security measures.

The biggest question most people have about cloud computing is: will it be safe? The honest answer is no, not absolutely. Everything cloud computing is based on is ultimately physical machinery, even though it appears virtual, and the safety of the data (information) is only as strong as the will and determination of anyone who wants to get at it.

THE CONCEPT OF CLOUD COMPUTING


The first building block is the infrastructure on which the cloud will be implemented. Some people assume that this environment must be virtualized, but since cloud is a way to request resources on demand, if you have solutions that provision on bare metal, why not? The infrastructure supports the different types of cloud service (IaaS, PaaS, SaaS, BPaaS).
To provide these services you need Operational Support Services (OSS), which are in charge of deploying the requested service, and Business Support Services (BSS), used mainly to validate the request and create the invoice for the requested services. Any metric can be used to create the invoice (for example, number of users, number of CPUs, memory, usage hours per month); it is very flexible and depends on the service provider.
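As a sketch of how a BSS might turn such metrics into an invoice (all rates and consumption figures below are hypothetical, not taken from any provider):

```python
# Hypothetical BSS-style invoice: any metered dimension can be priced.
rates = {                      # dollars per unit, illustrative only
    "users": 2.00,             # per registered user per month
    "vcpus": 15.00,            # per virtual CPU per month
    "memory_gb": 3.00,         # per GB of RAM per month
    "usage_hours": 0.04,       # per metered hour of runtime
}
consumption = {"users": 25, "vcpus": 4, "memory_gb": 16, "usage_hours": 500}

total = 0.0
for metric, rate in rates.items():
    line = consumption[metric] * rate
    total += line
    print(f"{metric:<12} {consumption[metric]:>5} x {rate:>6.2f} = {line:8.2f}")
print(f"{'TOTAL':<12} {total:>29.2f}")
```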
A cloud computing environment will also need to provide interfaces and tools for the service creators and users. This is the role of the Cloud Service Creator and Cloud Service Consumer components.
Now, let’s see how it works in reality.
Generally, you log in to a portal (enterprise or public) and order your services through the Cloud Service Consumer. The service has been created by the cloud service provider and can be a simple virtual machine (VM) based on an image, some network components, an application environment such as a web app platform, or a service such as MongoDB; it depends on the provider and the type of resources and services.
The cloud provider validates your request through the BSS and, if the validation is okay (credit card, contract), provisions the request through the OSS.
You then receive, in one way or another, the credentials to access your requested services, and you will usually receive a monthly invoice for your consumption.
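The request flow just described (consumer portal, BSS validation, OSS provisioning, credentials returned) can be sketched as a toy Python program. Every function and field name here is hypothetical; real OSS/BSS stacks are far more involved.

```python
# Toy sketch of the flow above: Cloud Service Consumer -> BSS -> OSS.

def bss_validate(request: dict) -> bool:
    """Business support side: check payment / contract details."""
    return request.get("payment_method") in {"credit_card", "contract"}

def oss_provision(request: dict) -> dict:
    """Operational support side: deploy the requested service."""
    return {
        "service": request["service"],
        "host": "vm-001.example.internal",      # placeholder host name
        "credentials": "delivered out of band",  # e.g. by e-mail
    }

def order_service(request: dict) -> dict:
    if not bss_validate(request):
        raise PermissionError("request rejected by BSS validation")
    return oss_provision(request)

print(order_service({"service": "MongoDB", "payment_method": "credit_card"}))
```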





Wednesday, 02 April 2014

Computing Technology Developments

Ø Personal Computer.
Computers are tools used to process data according to instructions that have been formulated. The word "computer" was originally used to describe people whose job was to perform arithmetic calculations, with or without mechanical aids, but the meaning was later transferred to the machines themselves. Originally, information processing was almost exclusively related to arithmetical problems, but modern computers are used for many tasks unrelated to mathematics.

Broadly, a computer can be defined as an electronic device consisting of several components that work together to produce programs and information from existing data. These components include the monitor, the CPU, the keyboard, the mouse, and the printer (as a complement). Without a printer the computer can still do its job as a data processor, but the output is limited to what appears on the monitor screen rather than printed form (paper).

Under such a definition, devices from the slide rule and mechanical calculators of various kinds, starting with the abacus, up to all contemporary electronic computers would qualify. A term better suited to this broad sense of "computer" is "information processor" or "information processing system."

Nowadays computers have become increasingly sophisticated. They were not always as small, capable, and light as they are now, however; the history of computers is usually divided into five generations.

  • The first generation
With the onset of the Second World War, the countries involved sought to develop computers to exploit their strategic potential. This increased funding for computer development and accelerated technical progress. In 1941, Konrad Zuse, a German engineer, built a computer, the Z3, to design airplanes and missiles.

The Allies also made progress in the development of computing power. In 1943, the British completed a secret code-breaking computer called Colossus to decode German secret messages. Colossus did not greatly influence the development of the computer industry, for two reasons. First, Colossus was not a general-purpose computer; it was designed only to decode secret messages. Second, the machine's existence was kept secret until decades after the war ended.

Work done in America at that time produced a broader achievement. Howard H. Aiken (1900-1973), a Harvard engineer working with IBM, succeeded in producing an electronic calculator for the U.S. Navy. The calculator was about half the length of a football field and contained some 500 miles of wiring. The Harvard-IBM Automatic Sequence Controlled Calculator, or Mark I, was an electromechanical relay computer: it used electromagnetic signals to move mechanical components. The machine was slow (taking 3-5 seconds per calculation) and inflexible (the order of calculations could not be changed), but it could perform basic arithmetic as well as more complex equations.

Another development of the period was the Electronic Numerical Integrator and Computer (ENIAC), created through cooperation between the U.S. government and the University of Pennsylvania. Consisting of 18,000 vacuum tubes, 70,000 resistors, and 5 million soldered joints, it was an enormous machine that consumed 160 kW of power.

Designed by John Presper Eckert (1919-1995) and John W. Mauchly (1907-1980), ENIAC was a general-purpose computer that worked 1,000 times faster than the Mark I.
In the mid-1940s, John von Neumann (1903-1957) joined the University of Pennsylvania team, initiating concepts in computer design that remained in use in computer engineering for the next 40 years. In 1945 von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC), with a memory to hold both programs and data. This technique allows a computer to stop at some point and then resume its job later. The key to the von Neumann architecture is the central processing unit (CPU), which allows all computer functions to be coordinated through a single source. In 1951, the UNIVAC I (Universal Automatic Computer I), built by Remington Rand, became the first commercial computer to use the von Neumann architecture.

Both the U.S. Census Bureau and General Electric owned UNIVACs. One of the UNIVAC's impressive achievements was predicting the victory of Dwight D. Eisenhower in the 1952 presidential election.

First-generation computers were characterized by operating instructions that were made specifically for a single task. Each computer had a different binary program code called a "machine language." This made computers difficult to program and limited their speed. Other hallmarks of the first generation were the use of vacuum tubes (which made the computers of that era very large) and magnetic drums for data storage.

  • The second generation
In 1948, the invention of the transistor greatly influenced the development of the computer. The transistor replaced the vacuum tube in televisions, radios, and computers. As a result, the size of electronic machines shrank drastically.

Transistors came into use in computers beginning in 1956. Another invention of the time was magnetic-core memory, and together these developments made second-generation computers smaller, faster, more reliable, and more energy efficient than their predecessors. The first machines to use this new technology were supercomputers: IBM made a supercomputer named Stretch, and Sperry-Rand made one named LARC. These computers, developed for atomic energy laboratories, could handle large amounts of data, a capability much in demand by atomic scientists. The machines were very expensive and tended to be too complex for business computing needs, which limited their spread; only two LARCs were ever installed and used, one at the Lawrence Radiation Labs in Livermore, California, and the other at the U.S. Navy Research and Development Center in Washington, D.C. Second-generation computers replaced machine language with assembly language, a language that uses abbreviations in place of binary code.

In the early 1960s, successful second-generation computers began to appear in business, in universities, and in government. These second-generation computers used transistors and had components we associate with modern computers: printers, disk storage, memory, operating systems, and stored programs.

One important example from this period is the IBM 1401, which was widely accepted in industry. By 1965, almost all large businesses used second-generation computers to process financial information.

The program stored inside the computer, and the programming language that came with it, gave computers flexibility. That flexibility boosted performance at a reasonable price for business use. With this concept, a computer could print customer invoices one minute and design products or calculate paychecks the next. Several programming languages began to appear at that time; COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use. These languages replaced cryptic binary machine code with words, sentences, and mathematical formulas that are much easier for humans to understand, making it far simpler to program a computer. A wide variety of new careers emerged (programmer, systems analyst, and computer systems expert), and the software industry also began to appear and grow during this second generation of computers.

  • The third generation
Although transistors surpassed the vacuum tube in many respects, they generated considerable heat, which could damage a computer's internal parts. Quartz rock eliminated this problem. Jack Kilby, an engineer at Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small silicon disc made from quartz sand. Scientists later managed to fit more components onto a single chip, called a semiconductor. As a result, computers became ever smaller as more components were squeezed onto each chip. Another third-generation development was the use of the operating system, which allows a machine to run many different programs at once, with a central program monitoring and coordinating the computer's memory.



  • The fourth generation
After the IC, the only direction left was to shrink the size of circuits and electrical components. Large Scale Integration (LSI) could fit hundreds of components on a chip; by the 1980s, Very Large Scale Integration (VLSI) packed thousands of components onto a single chip.

Ultra Large Scale Integration (ULSI) increased that number into the millions. The ability to fit so many components onto a chip half the size of a coin drove down the price and size of computers, and it also increased their power, efficiency, and reliability. The Intel 4004 chip, made in 1971, advanced the IC by putting all the components of a computer (central processing unit, memory, and input/output control) on a single very small chip. Previously, ICs were made to perform one specific task; now a microprocessor could be manufactured and then programmed to meet any requirement. Soon afterwards, everyday household devices such as microwave ovens, televisions, and cars with electronic fuel injection (EFI) were equipped with microprocessors.

Such developments allowed ordinary people to use computers, which were no longer the preserve of large corporations or government agencies. In the mid-1970s, computer assemblers began offering their products to the general public. These computers, called minicomputers, were sold with software packages that were easy for the layperson to use; the most popular software at the time was word processing and spreadsheets. In the early 1980s, video game consoles such as the Atari 2600 sparked consumer interest in more sophisticated, programmable home computers.

In 1981, IBM introduced the Personal Computer (PC) for use in homes, offices, and schools. The number of PCs in use jumped from 2 million units in 1981 to 5.5 million units in 1982; ten years later, 65 million PCs were in use. Computers continued their trend toward smaller sizes, from the desktop computer to the laptop that fits in a bag and even the palmtop that can be held in one hand.

The IBM PC competed with Apple's Macintosh line, introduced in 1984. The Macintosh became famous for popularizing graphical computing while its rivals were still using text-based systems; it also popularized the use of the mouse.

Today we can trace the line of IBM-compatible machines through their CPUs: the IBM PC/486, Pentium, Pentium II, Pentium III, and Pentium IV (a series of CPUs made by Intel), as well as AMD's K6, Athlon, and others. All of these belong to the fourth generation of computers.

As computer use proliferated in the workplace, new ways to exploit its potential were developed. As small computers grew more powerful, they could be connected together in networks to share memory, software, and information, and to communicate with one another. Computer networks allow individual computers to collaborate electronically to complete a processing task. Using direct cabling (known as a Local Area Network, or LAN) or telephone lines, these networks can grow very large.

  • The fifth generation
Defining the fifth generation of computers is quite difficult because this stage is still very young. An imaginative example of a fifth-generation computer is the fictional HAL 9000 from Arthur C. Clarke's novel 2001: A Space Odyssey. HAL displays all the capabilities desired of a fifth-generation computer: with artificial intelligence (AI), it can reason well enough to hold conversations with humans, use visual input, and learn from its own experience.

Although a real HAL 9000 is still a long way off, many of its functions have already been built. Some computers can accept verbal instructions and imitate human reasoning, and translating foreign languages has become possible. These capabilities sound deceptively simple, but they turned out to be far more complicated than expected once programmers realized that human understanding relies heavily on context and meaning rather than on translating words directly.

Many advances in computer design and technology are making the manufacture of fifth-generation computers increasingly feasible. One such advance is parallel processing, which will replace the von Neumann model with a system able to coordinate many CPUs working in unison. Another is superconductor technology, which allows electricity to flow with no resistance and can therefore accelerate the speed at which information moves.

Japan is the country best known for its work on fifth-generation computers; the ICOT (Institute for New Generation Computer Technology) was set up to realize them. Much of the news reported the project as a failure, but other sources suggest that success in this fifth-generation project would bring a new paradigm of computerization to the world.

Ø 1G-4G Technology
Communication technology has developed very rapidly around the world, starting from 0G and moving through 0.5G, 1G, and 1.5G to the 2G and 3G systems in use today. To see what actually preceded 2G and 3G, let us discuss each generation in turn. Note, however, that the discussion here is not exhaustive, since the focus of this article is 2G and 3G technology.

  • 0G, 0.5G (Zero Generation)
0G technology is the communication technology that initiated the formation of the later generations of telecommunications. At the beginning, this technology was not called 0G (zero generation); it was simply known as mobile radio telephony.

This technology uses a dedicated radiotelephone network, meaning one separate and closed off from other similar networks, with limited coverage. Even so, such networks could connect to the ordinary telephone network. Some of the telecommunications standards used by this generation are:

PTT (Push-to-Talk or Press-to-Transmit)
A communication network technology that uses a half-duplex method (similar to a walkie-talkie, except that it is connected to the cellular network). PTT is still implemented on cellular networks today, but no Indonesian carrier supports it.

MTS (Mobile Telephone System)
A half-duplex radiotelephone technology developed by the Bell System and first implemented in St. Louis on June 17, 1946. At first there were only 3 communication channels, later increased to 32 channels over 3 frequencies to serve all customers. Its drawbacks were the handset's weight of around 80 pounds (36 kg) and a network confined to urban areas. By the 1980s, this technology was no longer used in America.

IMTS (Improved Mobile Telephone Service)
A full-duplex radiotelephone technology using the Low VHF (35-44 MHz, 9 channels), High VHF (152-158 MHz, 11 channels), and UHF (454-460 MHz, 12 channels) bands. It was introduced in 1969 as a replacement for MTS.

AMTS (Advanced Mobile Telephone System)
OLT (Offentlig Landmobil Telefoni, or Public Land Mobile Telephony)
MTD (Mobiltelefonisystem D, or Mobile Telephony System D)
Autotel / PALM (Automated Public Land Mobile)
ARP (Autoradiopuhelin or car radio phone)

B-Netz
In the 0G generation, mobile telephone systems can be distinguished from the earlier mobile radio telephone systems. The difference is that a mobile telephone system communicates through the commercial Public Switched Telephone Network (PSTN), which acts as the operator directing the call, whereas a radio telephone system does not need that network, because communication goes directly between the caller and the receiver over a closed network. Radiotelephone systems were commonly used for early police radio or taxi networks. They were known by the trade names WCCs (Wireline Common Carriers, i.e. telephone companies), RCCs (Radio Common Carriers), and two-way radio dealers.
Mobile telephone systems were generally installed in a car or truck, although some were shaped like a briefcase. Usually the transceiver (transmitter-receiver) was mounted in the trunk of the vehicle and connected to a "head" (dial, display, and handset) located near the driver's seat.
Table 1: Advantages and disadvantages of 0G/0.5G technology

Advantages:
- Served voice communication and was the earliest mobile communication technology to be implemented and commercialized

Disadvantages:
- Half-duplex transmission (although later developments supported full duplex)
- Limited number of subscribers
- Limited network coverage
- No support for data communication

  • 1G, 1.5G (First Generation)
1G technology is the first generation of wireless cellular telephone technology (cellphones, also called mobile phones). It is the analog cellular standard introduced around the 1980s. Communication devices of this generation were originally used for military purposes, but over time the general public came to use the technology.

The communication technique used in this generation is Frequency Division Multiple Access (FDMA). This technique divides a cell's frequency allocation among all the subscribers it serves, so that while talking, each subscriber has a distinct frequency of their own, different from those of the other subscribers in the same cell. The principle is similar to radio stations, where each station broadcasts on a frequency different from the others. A rough capacity sketch follows.
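The figures below are illustrative only, loosely modeled on 30 kHz analog channels; real deployments differ.

```python
# FDMA capacity sketch: one dedicated frequency channel per active call.
ALLOCATED_BANDWIDTH_KHZ = 12_500   # spectrum assigned to one direction of a cell
CHANNEL_WIDTH_KHZ = 30             # width of one analog voice channel

simultaneous_calls = ALLOCATED_BANDWIDTH_KHZ // CHANNEL_WIDTH_KHZ
print(f"At most {simultaneous_calls} simultaneous calls")   # -> 416
```

With that principle in mind, the most widely used 1G standards include: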

NMT (Nordisk MobilTelefoni or Nordic Mobile Telephony)
A 1G technology developed around the 1980s that is still in operation in some 30 countries, mostly in Europe. It consists of NMT450 (Nordic Mobile Telephones/450), developed by Ericsson and Nokia in 1981, which operates at 450 MHz using FDD (Frequency Division Duplex) over FDMA, and NMT-F, the French version of NMT900, introduced in 1986 and operating at 900 MHz.


AMPS (Advanced Mobile Phone System) or IS-136
A 1G technology developed by Bell Labs around 1970, used in the United States until about 2000, when it fell out of use. It uses the 800 MHz Cellular FM band, and its operation is similar to the 0G IMTS technology.

CDPD (Cellular Digital Packet Data)
A 1G technology introduced in 1992. Operating at 800 MHz and 900 MHz, it gave D-AMPS/AMPS networks the ability to carry both voice and data, using channels at speeds of up to 19.2 Kbps. Because data is carried in packets, the technology can run Internet Protocol (IP) applications and acts as an extension of the internet, letting users stay continuously online. In May 2000 AT&T introduced PocketNet, an HDML mobile internet service (similar to WAP) using CDPD, and handsets supporting the service were built with data, voice, and mobile internet capabilities. CDPD was a byproduct of AMPS intended for data services only, but it never took off because it was expensive and failed to compete.
Table 2: Advantages and disadvantages of 1G/1.5G technology

Advantages:
- Served voice communication and small amounts of data

Disadvantages:
- Cannot serve high-speed, high-volume data communication
- Small traffic capacity
- Only a small number of subscribers can be accommodated in one cell
- Wasteful use of the frequency spectrum, since each user occupies a dedicated frequency channel
- Intermodulation noise (unclear sound)

  • 2G (Second Generation)
2G is the second generation of communication technology, which emerged as the market demanded better quality and greater capacity. The 2G generation uses digital technology, along with the Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) communication techniques.

The TDMA-based 2G standards are:
D-AMPS (Digital AMPS), or IS-54 / IS-136, in the United States and Canada
A TDMA-based 2G technology that is a development of AMPS (Advanced Mobile Phone System). It operates at:
  1. 800 MHz (based on the IS-54 standard, frequency ranges 824-849 MHz and 869-894 MHz)
  2. 1900 MHz (based on the IS-136 standard, with dual-band support for 800 MHz and 1900 MHz)
D-AMPS phones are already digital, but the network still supports the analog AMPS network.
GSM (Global System for Mobile Communications) in Europe and Asia
A TDMA-based 2G technology developed by a study group called the Groupe Spécial Mobile (GSM), formed to study and develop a public telecommunication system for Europe. In 1989 this task was handed over to the European Telecommunications Standards Institute (ETSI), and GSM Phase I was launched in mid-1991.

GSM emerged from the requirement for a new network system that could serve as a standard applicable across the whole of Europe, capable of anticipating user mobility and of serving more users as new subscribers were added.

GSM is the most widely used network in the world. In 1993 there were 36 GSM networks in 22 countries; by the end of 1993 it had spread to 48 countries, with 70 operators and around 1 million subscribers. GSM is now used in 212 countries, with some 2 billion subscribers worldwide.

GSM also supports data communication at 14.4 Kbps (just enough for SMS, downloading an image, or a MIDI ringtone).
Table 3: Frequencies used by GSM networks (per GSM/ETSI specification 05.05)

System            | Band (MHz) | Uplink (MHz)    | Downlink (MHz)  | Channel numbers
GSM 400           | 450        | 450.4-457.6     | 460.4-467.6     | 259-293
GSM 400           | 480        | 478.8-486.0     | 488.8-496.0     | 306-340
GSM 850           | 850        | 824.0-849.0     | 869.0-894.0     | 128-251
GSM 900 (P-GSM)   | 900        | 890.0-915.0     | 935.0-960.0     | 1-124
GSM 900 (E-GSM)   | 900        | 880.0-915.0     | 925.0-960.0     | 0-124 & 975-1023
GSM-R (R-GSM)     | 900        | 876.0-880.0     | 921.0-925.0     | 955-973
DCS 1800          | 1800       | 1710.0-1785.0   | 1805.0-1880.0   | 512-885
PCS 1900          | 1900       | 1850.0-1910.0   | 1930.0-1990.0   | 512-810

GSM goes by other names in some countries:
1. A1-Net (GSM 900 MHz) in Austria
2. E-Netz (GSM 1800 MHz) in Germany
3. DCS (Digital Communications Systems) in the United States
4. PCS (Personal Communications Service) in the United States (covering standards similar to N-CDMA and GSM 1900, operating at 1850-1990 MHz)
PDC (Personal Digital Cellular) in Japan
A TDMA-based 2G technology first launched in March 1993. It is a TDMA telecommunications network developed in Japan and used only there; its basic technology is the same as GSM's. It is operated by NTT DoCoMo on the frequencies:
1. 800 MHz (downlink 810-888 MHz, 893-958 MHz uplink)
2. 1500 MHz (downlink 1477-1501 MHz, 1429-1453 MHz uplink)

PHS (Personal Handy System) or PAS (Personal Access System) in China, Japan, Taiwan, and several Asian countries
A TDMA-based 2G technology with two-way calling, roaming, high-speed data services, clear voice, and handover. In Japan, PHS is operated by J-Phone in the 1895-1918 MHz frequency range.

CSD (Circuit Switched Data) in the United States
A TDMA-based 2G technology that uses a single radio timeslot to transmit data at 9.6 kbps through the GSM Network and Switching Subsystem. It can also be connected via modem to the regular telephone network (PSTN) and dial-up services.

HSCSD (High Speed Circuit Switched Data)
A TDMA-based 2G technology with a circuit-switched data transfer mechanism (like GSM's), but with the advantage of being able to bundle more than one of GSM's 8 timeslots into a single data connection (plain GSM can use only one timeslot per connection). This lets HSCSD reach data transfer speeds of up to 57.6 kbps. HSCSD brought data support to GSM networks, but it was not widely commercialized because it wastes timeslots, and it was replaced by the better GPRS.
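The 57.6 kbps figure is simple arithmetic: bundled timeslots times the per-timeslot rate. A quick sketch, assuming the commonly quoted 14.4 kbps enhanced per-slot rate:

```python
# HSCSD throughput sketch: bundle several GSM circuit-switched timeslots.
PER_TIMESLOT_KBPS = 14.4   # enhanced coding; the original GSM rate was 9.6 kbps

for slots in (1, 2, 4):
    print(f"{slots} timeslot(s): {slots * PER_TIMESLOT_KBPS:.1f} kbps")
# 4 timeslots -> 57.6 kbps, the figure usually quoted for HSCSD
```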

iDEN (Integrated Digital Enhanced Network) in the United States, Canada, Argentina, Brazil, Chile, China, Colombia, El Salvador, Ecuador, Guam, Israel, Japan, Jordan, South Korea, Mexico, Peru, the Philippines, Puerto Rico, Saudi Arabia, and Singapore
A TDMA-based 2G technology developed by Motorola, with networks in 20 countries. It operates on 25 kHz channels and is used for trunked radio and mobile phones.
The CDMA-based 2G standards are:
cdmaOne, or IS-95 (Interim Standard 95), IS-95 CDMA, or TIA/EIA-95, in the USA, South Korea, Canada, Mexico, India, Israel, Australia, Sri Lanka, Venezuela, Brazil, and China
A CDMA-based 2G technology operating in two band classes: Band Class 0 (800 MHz) and Band Class 1 (1900 MHz). It was introduced by Qualcomm in the mid-1990s and supported by AT&T, Motorola, Lucent, ALPS, GSIC, PrimeCo, Samsung, Sony, U.S. West, Sprint, Bell Atlantic, and Time Warner.


Table 4: Comparison of AMPS, GSM, and cdmaOne

                          | AMPS        | GSM         | CDMA/IS-95
Multiple access           | FDMA        | TDMA        | DS-CDMA
Modulation                | FM          | GMSK        | QPSK
RF bandwidth              | 30 kHz      | 200 kHz     | 1.25 MHz
Channels per RF carrier   | 1           | 8           | 20-30
Uplink frequency          | 824-849 MHz | 890-915 MHz | 824-849 MHz
Downlink frequency        | 869-894 MHz | 935-960 MHz | 869-894 MHz

The three main advantages of 2G networks over their predecessors are that telephone conversations are digitally encrypted, that 2G systems are significantly more spectrum-efficient, allowing much greater penetration, and that 2G introduced data services for mobile devices, starting with the short message service (SMS).
Table 5: Advantages and disadvantages of 2G/2.5G/2.75G technology

Advantages:
- More services, such as voice communication, SMS (Short Message Service, a bidirectional service for sending short messages of up to 160 characters), voice mail, call waiting, and data transfer at up to 9,600 bps (enough for SMS, image downloads, or MIDI ringtones)
- Greater user capacity
- Clearer sound, because the system is digital (the analog voice signal is converted to digital before transmission, so damage caused by noise or interference from other frequencies can be repaired at the receiver before the signal is converted back to analog)
- Improved spectrum/frequency efficiency
- System optimization through digital compression and coding of data
- Lower transmit power, which saves battery, lets handsets run longer, and allows smaller batteries

Disadvantages:
- Low data transfer rate
- Not efficient for low traffic
- Network coverage still limited and highly dependent on the presence of base stations (cell towers)

  • 2.5G, 2.75G (Second and a Half Generation)
The terms 2G and 3G are officially defined, but 2.5G is not; the name 2.5G is used for marketing purposes only.

The technology referred to as 2.5G is a communication technology that improves on 2G, particularly on the GSM platform, which was enhanced mainly for data applications. On the GSM (TDMA) side, 2.5G is implemented as GPRS (General Packet Radio Services) and WiDEN (Wideband Integrated Digital Enhanced Network); on the cdmaOne (CDMA) side, it is implemented as CDMA2000 1xRTT Release 0 (1 Times Radio Transmission Technology), also called IS-2000 (based on the ITU standard) or CDMA2000 (based on the 3GPP2 standard).

2.5G provides some of the advantages of 3G (such as packet switching) and can reuse some of the existing 2G infrastructure in GSM and CDMA networks. GPRS is the 2.5G technology used by GSM operators. Some protocols, such as EDGE for GSM and CDMA2000 1xRTT for CDMA, could qualify as 3G services (because they offer data rates above 144 Kbps), but they are usually labelled 2.5G (or 2.75G, which sounds more sophisticated) because they are several times slower than "true" 3G services.

GPRS (General Packet Radio Services)
A 2.5G technology that is inserted (overlaid) on top of the GSM network to handle data communication. In other words, with a GPRS handset, data communication still takes place over the GSM network: GSM continues to handle voice while data transfer is handled by GPRS. GPRS can be deployed on top of GSM effectively without discarding the old infrastructure, by adding some new hardware and upgrading software at the GSM stations and servers. GPRS data transfer speeds can reach roughly 160 Kbps (a throughput sketch follows the feature list below). GPRS technology has three notable features:
  1. Always online. GPRS eliminates the dial-up step when users want to access data, so GPRS is said to be always online: data is transferred in packets and does not depend on connection time.
  2. An upgrade to existing networks (GSM and TDMA). Adopting GPRS does not require discarding the old system, because GPRS runs on top of the existing infrastructure.
  3. An integral part of EDGE and WCDMA. GPRS is the core packet data delivery mechanism for the 3G technologies that follow.
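GPRS throughput depends on the coding scheme and the number of timeslots a device can bundle; the sketch below uses the commonly quoted per-timeslot rates for CS-1 through CS-4, which is why the headline figure is sometimes given as roughly 160 Kbps and sometimes as 171.2 Kbps.

```python
# GPRS throughput sketch: rate = per-timeslot coding-scheme rate x timeslots.
# CS-1..CS-4 per-timeslot rates in kbps (commonly quoted figures).
CODING_SCHEMES = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

def gprs_rate_kbps(scheme: str, timeslots: int) -> float:
    return CODING_SCHEMES[scheme] * timeslots

print(f"CS-2, 3 slots: {gprs_rate_kbps('CS-2', 3):.1f} kbps")  # a typical handset
print(f"CS-4, 8 slots: {gprs_rate_kbps('CS-4', 8):.1f} kbps")  # theoretical maximum
```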

GPRS devices are divided into three classes based on their capabilities:
  1. Class A
Can be connected to GPRS and GSM services (voice and SMS) simultaneously. Devices supporting Class A are still available today.
  2. Class B
Can be connected to GPRS and GSM services (voice and SMS), but only one can be used at a time. When a GSM service (call or SMS) is in use, GPRS has to wait and automatically resumes after the GSM service (call or SMS) ends. Most GPRS devices fall into Class B.
  3. Class C
To use either GPRS or GSM service (voice and SMS), the user must switch between the two manually (much like Class B, except that switching the active network is not automatic).

The benefits of GPRS technology:
  1. Client-server services that allow access to data stored in a database. An example of this kind of application is accessing the web through a browser.
  2. Messaging services intended for communication between individual users, using a storage server for message handling as temporary/intermediate storage before the message is received. An example is the Multimedia Message Service (MMS), used to send multimedia over the GSM network with a mobile phone.
  3. Real-time conversational services that provide two-way communication in real time. Examples include internet and multimedia applications such as Voice over IP (VoIP) and video conferencing.
  4. Tele-action services for transmitting and receiving small volumes of data. Examples include credit card validation, lottery transactions, and indoor surveillance camera systems.

WiDEN (Wideband Integrated Digital Enhanced Network)
A 2.5G technology that is a software evolution of iDEN (the 2G technology developed by Motorola and introduced in 1993). WiDEN can transfer data at speeds of up to 100 Kbps and has been used in about 20 countries.
CDMA2000 1xRTT Release 0 (1 Times Radio Transmission Technology), also called IS-2000 (ITU standard) or CDMA2000 (3GPP2 standard)
A 2.5G technology that evolved from cdmaOne by adding traffic channels for data services. It operates at 400 MHz, 800 MHz, 900 MHz, 1700 MHz, 1800 MHz, 1900 MHz, and 2100 MHz (depending on each country's frequency regulations).
Table 6: Regional implementation of CDMA2000

Region         | Operators
United States  | Verizon Wireless, Sprint PCS, Alltel, MetroPCS, Cellular South, U.S. Cellular, Cellcom, and Cricket Communications (Leap Wireless)
South Africa   | Neotel (800 MHz)
Bangladesh     | Pacific Telecom's CityCell
Brazil         | VIVO
China          | China Unicom
Estonia        | Eesti Energia (450 MHz)
India          | BSNL, Reliance Communications, and Tata Teleservices
Indonesia      | Mobile-8, Bakrie Telecom, Telkom Flexi, and Indosat StarOne
Canada         | SaskTel, Manitoba Telecom Services, Bell Mobility, Aliant, and TELUS Mobility
Kenya          | Telkom Kenya, Flashcom Ltd, and E.M. Communications Ltd
Latvia         | Triatel
Morocco        | Wana
Mexico         | Iusacell and Unefon
Moldova        | Unite
Nepal          | Nepal Telecom and United Telecom Limited
Pakistan       | PTCL, WorldCall, and GoCDMA
New Zealand    | Telecom New Zealand
Sri Lanka      | Sri Lanka Telecom (SLT), Suntel, Lanka Bell (800 MHz), and DBN and Tritel (450 MHz)
Ukraine        | PEOPLEnet
Venezuela      | Movilnet and Movistar

  • 3G (Third Generation)
3G is the third generation of communication technology, which became the standard for mobile phones, replacing 2.5G. It is based on the ITU (International Telecommunication Union) IMT-2000 standard.

3G networks allow operators to offer a wider range of advanced facilities while achieving greater network capacity through improved spectrum efficiency. Its capabilities include wide-area wireless voice telephony, video calls, and broadband wireless data, all on mobile devices. Additional facilities include HSPA data transmission, capable of speeds up to 14.4 Mbps on the downlink and 5.8 Mbps on the uplink.
The ITU defines a 3G technology as one that:
1. Provides a data transfer speed of 144 Kbps for users moving at up to 100 km/h.
2. Provides a data transfer speed of 384 Kbps for pedestrian users.
3. Provides a data transfer speed of 2 Mbps for stationary users.
3G technology was originally introduced for the following purposes:
1. To add network efficiency and capacity.
2. To add roaming capability.
3. To achieve higher data transfer speeds.
4. To improve quality of service (QoS).
5. To support mobile internet needs.
The frequencies used by 3G technology are:
1. Uplink: 1920-1980 MHz
2. Downlink: 2110-2170 MHz

3G technologies include:
EDGE (Enhanced Data Rates for GSM Evolution), or E-GPRS (Enhanced GPRS)
A 3G technology that is one of the standards for wireless data implemented on GSM cellular networks. It was first introduced in 2003 and is a further step in the evolution towards mobile multimedia communication.
EDGE was originally classed as 2.75G. However, since the mid-2000s the GERAN (GSM EDGE Radio Access Network) platform has been fully adopted into the 3GPP specifications (one of which requires the same data transfer speed as 3G), placing EDGE in the group of third-generation UMTS technologies.

With EDGE, service providers can offer data communication at higher speeds than with GPRS, which can only send data at around 25 Kbps. Compared with other platforms, EDGE reaches 3-4 times the access speed of a wired telephone line (usually about 30-40 Kbps) and nearly twice the speed of CDMA2000 1x, which manages only about 70-80 Kbps. EDGE data transfer speeds can even reach 236.8 Kbps using 4 timeslots and 473.6 Kbps using 8 timeslots.

EDGE-based services can deliver a wide range of third-generation applications: high-quality audio streaming, video streaming, online gaming, high-speed downloads, high-speed network connections, push to talk, and more. As of November 2006, EDGE had been implemented by 156 GSM network operators in 92 countries, with deployment expected to grow to 213 GSM operators in 118 countries.

W-CDMA (Wideband Code Division Multiple Access), or UMTS (Universal Mobile Telecommunications System)
A 3G technology developed in Europe and introduced starting in 2004. UMTS is standardized by ETSI (the European Telecommunications Standards Institute), while the ITU-T (International Telecommunication Union Telecommunication Standardization Sector) works on a similar system called IMT-2000 (International Mobile Telecommunications 2000). The two standardization bodies cooperate to shape a system for the future.

UMTS is designed to provide a bandwidth of up to 2 Mbps. UMTS services aim to follow users wherever they are, which means coverage as wide as possible; where there is no UMTS cell in an area, the connection can be routed via satellite.

UMTS can be used in offices, homes, and vehicles. The same service can be provided to users indoors and outdoors, in public and private areas, and in urban and rural settings.

The radio frequencies allocated for UMTS are 1885-2025 MHz and 2110-2200 MHz. These bands are used with small cells (pico cells) so as to provide large capacity. Multiple access is used to allocate bandwidth dynamically according to customer demand. RACE (Research and technology development in Advanced Communications technologies in Europe) developed two candidate multiple access schemes, CDMA and TDMA; which of the two would be used had not yet been decided.

W-CDMA has been implemented in Japan, Europe, and Asia, and was set to be deployed in 55 countries by 2006. UMTS frequencies differ by region:
1. Asia and most of Europe: 2100 MHz (downlink) and 1900 MHz (uplink)
2. United States (AT&T Mobility): 1900/850 MHz
3. The Americas: 2100 MHz (downlink) and 1700 MHz (uplink)
4. Europe: 900 MHz
5. Australia and Japan: 800 MHz

CDMA2000 1xEV-DV (Evolution - Data/Voice) and CDMA2000 1xEV-DO (Evolution - Data Only / Data Optimized), or IS-856
These 3G technologies are supported by the North American CDMA community, led by the CDG (CDMA Development Group). CDMA2000 1xEV is the evolution of CDMA2000 1x (CDMA2000 Release 0/1xRTT, a 2.5G technology). Initially, CDMA2000 1xEV-DO (Revision 0) could only send data at up to 2.4 Mbps, but it then evolved, so that CDMA2000 1xEV-DO (Data Only) reached the speeds shown in the table below.
Table 7: CDMA2000-1x variants and speeds

Variant | Speed | Supported applications
CDMA2000-1x EV-DO Revision A (T-1 speeds) | 2.45-3.1 Mbps | Video conferencing
CDMA2000-1x EV-DO Revision B | Average 300 Kbps, maximum 73.5 Mbps | Data transmission
CDMA2000-1x EV-DV | Average 300 Kbps, maximum 3.09 Mbps | Simultaneous voice and high-speed multimedia packet data services
CDMA2000-1x EV-DO Revision C, or UMB (Ultra Mobile Broadband) | Maximum 280 Mbps at peak (275 Mbps downstream, 75 Mbps upstream, so it can be categorized as 4G) | Voice over IP (VoIP), multimedia, broadband, information, entertainment, and commercial electronic services; supports full wireless network services in a mobile environment (comparable to Wi-Fi, WiMAX, and UWB)

TD-CDMA (Time Division Code Division Multiple Access), or UMTS-TDD (Universal Mobile Telecommunications System - Time Division Duplexing), in Europe
A 3G data network technology built on the UMTS/WCDMA mobile standard, although UMTS/WCDMA and TD-CDMA/UMTS-TDD do not interoperate because of differences in how they work, their design, technology, and the frequencies used. In Europe, UMTS-TDD uses the 2010-2020 MHz band and can transfer data at up to 16 Mbps (combined maximum downlink and uplink speed).

GAN (Generic Access Network), or UMA (Unlicensed Mobile Access)
A 3G technology intended to allow roaming between telecommunications systems, able to handle wireless LAN (WLAN) and wireless WAN telephony simultaneously (adopted by 3GPP).

HSPA (High Speed Packet Access)
A 3G technology that unifies earlier mobile protocols, extending and adding capabilities (especially data transfer speed) to the existing UMTS protocols. Because downlink and uplink traffic differ, the HSPA standard is divided in two:
  1. HSDPA (High Speed Downlink Packet Access)
An HSPA standard for downlink transfer speed (from the network to the handset). HSDPA can reach downlink speeds of 7.2 Mbps and in theory can be increased to 14.4 Mbps, with a maximum uplink speed of 384 kbps. HSDPA can be used not only by mobile phones but also by notebooks to access data at high speed.
  2. HSUPA (High Speed Uplink Packet Access)
An HSPA standard for uplink transfer speed (from the handset to the network). HSUPA can theoretically reach uplink speeds of up to 5.76 Mbps, but it has not been widely implemented (commercialized), and few handsets were built for it.
HSPA+ (HSPA Evolution)
A 3G technology developed from HSPA. It supports data transfer rates of up to 42 Mbps on the downlink and 11 Mbps on the uplink.
FOMA (Freedom of Mobile Multimedia Access)
The world's first 3G technology to implement WCDMA. FOMA is the name given to the 3G service by the operator NTT DoCoMo in Japan.
HSOPA (High Speed OFDM Packet Access)
A 3G technology developed primarily from UMTS that uses OFDM (Orthogonal Frequency Division Multiplexing) and MIMO (Multiple-Input Multiple-Output) antenna technology. Also known as Super 3G, HSOPA can reach data transfer speeds of up to 100 Mbps on the downlink and 50 Mbps on the uplink.


TD-SCDMA (Time Division Synchronous Code Division Multiple Access)
A 3G technology developed in China by CATT (the Chinese Academy of Telecommunications Technology), Datang, and Siemens AG, on a proposal from the CWTS group (China Wireless Telecommunication Standards) to the ITU in 1999. The technology was developed to reduce dependence on Western technology, but it has not attracted much interest from operators in Asia because it requires completely new equipment and cannot reuse the previous technology (CDMA2000 1x). TD-SCDMA uses the 2010-2025 MHz band, with data transfer speeds of 9.6 Kbps to 2048 Kbps.

Table 8: Advantages and disadvantages of 3G/3.5G/3.75G technology

Advantages:
- Fast data transfer speeds (144 Kbps - 2 Mbps): 2 Mbps for local/indoor/slow-moving access, 384 Kbps for wide-area access
- Broadband data services such as internet, video conferencing, video streaming, video on demand, music on demand, and games on demand
- Better sound quality
- Assured security
- Support for multiple simultaneous connections (users can browse the internet while on a call)
- Shared infrastructure can support many operators in the same location
- Interconnection to other mobile and fixed users
- National and international roaming
- Can handle packet- and circuit-switched services, including internet (IP) and video conferencing, as well as high-rate and asymmetric data transmission
- Good spectrum efficiency, making the most of limited bandwidth
- Support for multiple cell layers
- Co-existence and interconnection with satellite-based services
- New billing mechanisms based on data volume, quality of service, and time

Disadvantages:
- Requires "ideal" power control
- Data transfer speed is still insufficient for multimedia services that demand very high rates

  • 3.5G, 3.75G (Third and a Half Generation)
3.5G, also known as Beyond 3G, is an improvement on 3G technology, particularly in data transfer speed (above 2 Mbps), so that it can serve multimedia communication such as internet access and video sharing. The technologies included here are:

HSDPA (High Speed Downlink Packet Access)
A 3.5G technology that is an evolution of Ericsson's WCDMA. HSDPA is a protocol added to the WCDMA (Wideband CDMA) system that is capable of transmitting data at high speed.
The first phase of HSDPA had a capacity of 4.1 Mbps, followed by a second phase with a capacity of 11 Mbps and a maximum downlink peak data rate of 14 Mbps.
In a residential area, an HSDPA network can download data at 3.7 Mbps. Someone driving on a motorway at 100 km/h can access the internet at 1.2 Mbps, and users in a dense office environment can still enjoy streaming video, albeit at only around 300 Kbps.
The advantages of HSDPA are reduced delay and a faster response when the user runs interactive applications such as a mobile office or high-speed internet access, possibly accompanied by gaming or audio and video downloads. Another advantage is increased system capacity without additional frequency spectrum, which reduces the cost of mobile data services significantly.

WiBro (Wireless Broadband)
A 3.5G technology developed by Samsung together with ETRI (the Electronics and Telecommunications Research Institute) and certified by the WiMAX Forum. WiBro is part of South Korea's information technology policy known as 839. It can transmit data at speeds of up to 50 Mbps, surpassing the HSDPA platform's top speed of 14 Mbps.

  • 4G (Fourth Generation)
4G technology (also known as Beyond 3G) is the term used to describe the next evolution in wireless communication. According to the 4G working groups, 4G infrastructure and terminals will support almost all the standards applied from 2G through 3G. 4G systems will also act as an open platform on which new innovations can flourish. 4G will be able to provide a comprehensive Internet Protocol (IP) solution in which voice, data, and streamed multimedia can be delivered to users "anytime, anywhere," at data rates higher than previous generations.
Many companies have defined 4G in their own way in order to claim that they already offer it, for example through experimental WiMAX launches, and some companies even claim to have built prototype systems they call 4G. Although some of today's technologies may become part of 4G, until the 4G standard is actually defined, no company can guarantee that its wireless solution qualifies as a 4G mobile network under the relevant international standards. Such claims have muddied the picture of 4G "availability" to the point of confusing investors and analysts in the wireless industry. Most of the standards paving the way for 4G include:

3GPP LTE (Third Generation Partnership Project Long Term Evolution), or UMTS Revision 8
A 4G technology still under development by the 3GPP (Third Generation Partnership Project). It is planned to have an average download speed of 100 Mbps and an average upload speed of 50 Mbps, and to support fully Internet Protocol (IP) based networking.

WiMAX (Worldwide Interoperability for Microwave Access)
A 4G technology with the ability to transfer data wirelessly over long distances, both point to point and for full mobile cellular access, making it an alternative to wired broadband and DSL networks. WiMAX deployments use frequencies around 2.3 GHz, 2.5 GHz, 3.3 GHz, 3.5 GHz, and 5 GHz (depending on each country's frequency regulations). WiMAX can theoretically transmit data at up to 70 Mbps over a range of 48 km, but in practice it achieves about 10 Mbps over 10 km in interference-free (suburban) areas and 10 Mbps over 2 km in urban areas.

UMB (Ultra Mobile Broadband) or CDMA2000-1x EV-DO Revision C
Table 9 Advantages of 4G Technology
·         Supports interactive multimedia services, teleconferencing and wireless internet
·         Large bandwidth to support multimedia services
·         Bit rates higher than 3G
·         Global mobility (scalability across mobile networks), service portability, and low-cost service (up to 100 Mbps at low cost)
·         Fully packet-switched network
·         Strong network and data security
Ø Broadband Technology
Broadband technology is generally defined as a network or internet service that offers high transfer speeds because of its wide data path. Although the data path provided is very wide, broadband capacity is usually shared with surrounding users; if no one else is using it, a single user can use the full broadband capacity.
Broadband, or wideband, technology is a transmission medium that supports many frequencies, ranging from voice up to video. It can carry multiple signals by dividing its (very large) capacity into several bandwidth channels, each operating at a specific frequency. Simply put, the term broadband is used to describe a connection speed of 500 Kbps or more, although the FCC defines broadband as a minimum speed of 200 Kbps. The two most common types of broadband, DSL and cable modems, can transfer 512 Kbps or more, roughly 9 times faster than modems that use a standard telephone line. Today, wireless broadband is the ultimate goal of the evolution of telecommunication technology.
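The "roughly 9 times faster" figure can be checked with a couple of lines of Python; the 56 Kbps dial-up baseline is an assumption (a typical telephone modem of that era), not a number stated in the text.

    # Sanity-check the speed-up figure quoted above.
    # The 56 Kbps dial-up baseline is an assumption.
    dialup_kbps = 56
    broadband_kbps = 512

    print(f"{broadband_kbps} Kbps is about {broadband_kbps / dialup_kbps:.1f}x "
          f"faster than a {dialup_kbps} Kbps modem")  # roughly 9x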
What does a broadband service offer? High-speed access to multimedia services in the form of images, audio and video, including video streaming, video downloading, video telephony and video messaging. With devices that support this technology, users can also access mobile TV entertainment, download music, and communicate in real time using fixed-mobile technology, for example via a webcam on a mobile phone.
Broadband is a high-speed connection that allows fast, always-connected ("always on") access to the internet. Historically, broadband can be traced back to fiber optic cable in the 1950s, at a time when high-speed data communication was not yet needed. Only in the 1990s did a great need for high-speed data transfer emerge and the broadband era begin. At that time, fiber optic cable was the flagship medium.

In 1999, large-capacity, high-speed data transfer became more common, especially with the rise of cable TV services that required cable modems. At that time, no fewer than 1.5 million cable TV subscribers helped usher in the new broadband era. However, because fiber optic cable was quite expensive, broadband development remained relatively slow and the number of users was limited.
Later, although cable TV already had many subscribers, further development was triggered by the advent of ADSL (asymmetric digital subscriber line) technology. ADSL can push millions of bits of information per second over the ordinary telephone network. ADSL broadband operates at two different speeds, one for receiving and one for sending data, which makes it well suited to browsing and sending or receiving e-mail; the sending speed is slower than the receiving speed. A standard ADSL line receives data at 2 Mbps (about 35 times faster than a standard modem) and sends data at 256 Kbps (about five times faster). In general, broadband capacity ranges between 256 Kbps and 10 Mbps.
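To see why ADSL's asymmetry suits browsing and e-mail, the minimal Python sketch below compares downloading and uploading the same file at the quoted 2 Mbps and 256 Kbps rates; the 10 MB attachment size is an illustrative assumption.

    # Illustrate ADSL asymmetry with the rates quoted above:
    # 2 Mbps downstream, 256 Kbps upstream.
    # The 10 MB attachment size is an illustrative assumption.
    attachment_bits = 10 * 8 * 1_000_000

    down_seconds = attachment_bits / 2_000_000   # 2 Mbps downstream
    up_seconds = attachment_bits / 256_000       # 256 Kbps upstream

    print(f"Download: about {down_seconds:.0f} s")  # ~40 s
    print(f"Upload:   about {up_seconds:.0f} s")    # ~313 s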
In addition to ADSL, there is SHDSL (symmetric high bit rate DSL) broadband, which can send and receive data at the same speed of up to 2 Mbps. SHDSL is therefore suitable for businesses that need to move large amounts of data at high speed, such as sending and receiving e-mails with large attachments or audio and video files. Broadband has developed rapidly: by the end of 2004 the number of subscribers had reached 140 million and was still growing very fast.
Yankee Group research estimated that by 2008 there would be 325 million subscribers. Broadband is therefore arguably the fastest-growing technology in history: whereas the mobile phone took 5.5 years to grow from 10 million to 100 million users worldwide, broadband achieved the same in just 3.5 years.
This rapid growth was largely driven by developments in the Asia Pacific region, particularly Japan and South Korea. With a population of 48.6 million people, 10 million of whom live in Seoul, South Korea already had 35.7 million internet users in 2004. Of that number, 84 percent (about 30 million) were broadband subscribers, using either DSL or cable modems. By 2008, Korea was targeting 100 percent broadband coverage.
On the other hand, although a variety of technologies can be used, no single operator can provide every type of technology, and conversely no single technology suits every broadband service. The wide range of technical options and the business considerations driven by development needs must therefore be weighed strategically, so as to give optimal results in both service and business acquisition.
Future developments no longer seem to be stuck in the contrast between DSL and cable modems, or between fixed-line and wireless, even though the development of wireless 3G and 4G services is equally exciting. From now on there will be plenty of options, from wired to wireless connections: ADSL, ADSL2+, VDSL, VDSL2, Ethernet, Wi-Fi, 802.16 (WiMAX), and FTTH (fiber-to-the-home) or FTTB (fiber-to-the-building). Later, MBWA (Mobile Broadband Wireless Access) will also evolve. A hybrid approach that combines several of these capabilities is referred to by John Giametto, President of Nortel Networks Asia, as "ultrabroadband". This is a logical approach to serving diverse broadband needs, with ultrabroadband referring to the various combinations of capabilities that service providers require.

For countries such as Indonesia and Thailand, where building wired networks is not only difficult but also expensive, the wireless alternative becomes more logical. This is evidenced by Telkom's effort to provide ADSL services under the TelkomLink Multi Media Access (MMA) brand, later followed by its Telkom Speedy product.

India is another example. In the land of Bollywood there are 40 million landlines but only about 4 million computers. In a market where only one in ten households with a phone also has a PC, the sensible move is not simply to develop high-speed internet access but to go straight to video services, because almost every house has a TV. Broadband development should therefore support so-called value-added broadband, which can deliver a new experience as easily as turning on the TV, regardless of the device used.
The challenge does not stop there, however, because providing such services requires multi-access technology and a high level of interoperability, so that the network becomes easier to manage for operators and customers alike. Another challenge is how operators can cooperate with content and service providers to further enrich their content.
The challenge of providing customer-driven broadband services should therefore be pursued. The flagship this time lies not only in wired networks but also in wireless ones. Looking ahead, there are at least several prospective technologies regarded as the next step in broadband development, among them Metro Ethernet, VDSL/ADSL2+, FTTH, IP Wireless, CDMA 1x EV-DO and WiMAX.

Sources:
http://id.wikipedia.org/wiki/Sejarah_perkembangan_komputer
http://tips-watan.blogspot.com/2012/11/teknologi-0g-1g-2g-25g-3g-35g-dan-4g.html
http://hamam21.blogspot.com/2009/03/apa-itu-broadband.html