Tuesday, July 19, 2011

Development of the EISA, ISA and PCI Slots

A computer cannot be separated from its motherboard. Besides providing a place to install the processor and memory, the motherboard also holds other expansion cards such as the VGA card, sound card, modem card, LAN card, and so on.

Expansion slots have evolved along with the computer. The VGA slot, for example, progressed from the EISA slot with monochrome VGA cards to color VGA cards in the ISA slot; VGA then moved on from ISA to the AGP 2x, AGP 4x, and AGP 8x slots, and development has continued up to what we now know as the PCI Express slot for VGA cards.

For other cards such as the sound card and LAN card, today's motherboards already provide onboard facilities, often including onboard VGA as well. Motherboards have therefore become smaller: the EISA slot is gone, and typically at most two ISA slots remain.

To trace the development of card slots on the computer, let's discuss them one by one:

EISA

The EISA (Extended/Enhanced Industry Standard Architecture) bus is an I/O bus introduced in September 1988 as a response to IBM's launch of the MCA bus, given that IBM wanted to "monopolize" MCA by requiring others to pay royalties to license it. The standard was developed by several IBM PC compatible vendors other than IBM, with Compaq Computer Corporation contributing the most. Compaq also formed the EISA Committee, a nonprofit organization set up specifically to govern the development of the EISA bus. Besides Compaq, several other companies developed EISA; sorted, the group of companies can be remembered by the acronym WATCHZONE:

    Wyse
    AT&T
    Tandy Corporation
    Compaq Computer Corporation
    Hewlett-Packard
    Zenith
    Olivetti
    NEC
    Epson

Although it offered a significant improvement over 16-bit ISA, only a few EISA-based cards ever reached the market (or were developed) — mainly hard disk array controller cards (SCSI/RAID) and server network cards.

The EISA bus is basically a 32-bit version of the ordinary ISA bus. Unlike IBM's MCA, which was genuinely new (in both architecture and slot design), EISA let users keep their old 8-bit or 16-bit ISA cards in the EISA slots, giving it an added value: backward compatibility. Like the MCA bus, EISA also allowed EISA cards to be configured automatically through software, so EISA and MCA can be called the pioneers of "plug-and-play", though still in primitive form.

ISA
ISA (Industry Standard Architecture) is a bus architecture with an 8-bit data bus, introduced in the IBM PC 5150 on 12 August 1981. The ISA bus was updated by widening the data bus to 16 bits in the IBM PC/AT in 1984, so the ISA buses in circulation come in two variants: 8-bit ISA and 16-bit ISA. ISA was the basic and most common bus in IBM PCs until about 1995, before being displaced by the PCI bus, launched in 1992.

8-bit ISA

The 8-bit ISA bus is the variant of the ISA bus with an 8-bit data width, used in the IBM PC 5150 (the initial PC model). It has been abandoned in modern systems, though Intel 286/386-era systems still carried it. The bus speed was 4.77 MHz (the same as the Intel 8088 processor in the IBM PC), later upgraded to 8.33 MHz in the IBM PC/AT. With an 8-bit width, the maximum transfer rate is 4.77 MB/s or 8.33 MB/s respectively. Despite the slow transfer rate, the bus was sufficient at the time, because I/O devices such as serial ports, parallel ports, the floppy disk controller, and the keyboard controller were far slower still. The slot has 62 contacts.
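The peak rates quoted above follow directly from bus width times clock: an 8-bit (1-byte) bus moving one byte per cycle transfers as many megabytes per second as it has megahertz. A quick sketch of the arithmetic (illustrative only — it assumes one transfer per clock cycle and ignores wait states):

```python
def isa_bandwidth_mb_s(bus_width_bits, clock_mhz):
    """Peak transfer rate, assuming one transfer per clock cycle."""
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * clock_mhz  # MB/s, since 1 MHz = 10^6 cycles/s

print(isa_bandwidth_mb_s(8, 4.77))   # 8-bit ISA at 4.77 MHz -> 4.77 MB/s
print(isa_bandwidth_mb_s(8, 8.33))   # 8-bit ISA at 8.33 MHz -> 8.33 MB/s
print(isa_bandwidth_mb_s(16, 8.33))  # 16-bit ISA: twice the rate at the same clock
```

The same formula explains why 16-bit ISA (below) doubles the rate at an unchanged clock speed.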

Despite the simple design, IBM did not immediately publish the specification when the bus launched in 1981; the industry had to wait until 1987, which made things a little inconvenient for manufacturers of 8-bit ISA-based peripherals.

16-bit ISA
The 16-bit ISA bus is an ISA bus with a 16-bit data width, allowing a transfer rate twice that of 8-bit ISA at the same clock speed. It was introduced in 1984, when IBM released the IBM PC/AT with the Intel 80286 microprocessor inside. IBM widened ISA to 16 bits because the Intel 80286 has a 16-bit data bus, so communication between the processor, memory, and motherboard had to be done 16 bits at a time. Although the processor could be installed on a motherboard with an 8-bit I/O bus, that would cause a bottleneck on the system bus in question.

Rather than creating an entirely new I/O bus, IBM made only a small revision to the old 8-bit ISA design: it added a 16-bit extension connector (adding 36 contacts, for 98 in total), first launched in August 1984, the same year the IBM PC/AT appeared. This is also why 16-bit ISA is referred to as the AT bus. The extension did interfere with some 8-bit ISA cards, so IBM abandoned that design in favor of one in which the two slots are combined into a single slot.

PCI

PCI (Peripheral Component Interconnect) is a bus designed to handle several hardware devices. The PCI bus standard was developed by the PCI Special Interest Group, a consortium formed by Intel Corporation and several other companies in 1992. The purpose of establishing this bus was to replace the ISA/EISA buses previously used in IBM PCs and compatibles.

Older computers used the ISA slot, a slow bus. Since its appearance around 1992, the PCI bus has remained in use to this day, and its latest descendant, PCI Express, has since been released.



Enhanced by Zemanta

Monday, July 18, 2011

Functions and Benefits of Computers

These days, someone who does not understand computers is practically considered obsolete, old-fashioned, and clueless. In every field of work, people are expected to know how to use a computer, because the computer is a tool that simplifies and speeds up work.

Everyone has a different answer when asked what a computer is for; their answers include:

    Making work easier
    As a communication tool
    As a tool for entertainment

Most people will give the three answers above. There are also many parents born in earlier times who do not understand the benefits of computers at all, so they do not consider computers important, and as a result do not encourage or teach their children, the next generation, to learn and master the computer field.

In the computer world, every subject — school material, business, education, and science — is often more complete when obtained from a computer with Internet access than from lessons at school.

Now let's discuss the functions and benefits of computers for us:

1. The Computer as a Tool to Make Work Easier

With a computer, a great deal of work can be finished easily. In the past, a person typing a letter had to use a typewriter; if there was a mistake, the paper was torn up and the letter retyped, and a typed document could not be edited again. Using a computer, we can type a document, edit it, and save it to be edited over and over.

2. The Computer as a Means of Communication

In the old days, to communicate with someone far away we could use the telephone, but then we could only hear our friends or relatives. With a computer we can:

    Talk with our friends or relatives
    See them while talking, using a webcam
    Write our words to them (chat)
    Write letters to them (email)
    Send them pictures or files, and so on

3. The Computer as a Tool for Entertainment

In the past our only entertainment was radio, tape players, television, and going out to see the conditions of a region. With a computer we can entertain ourselves with the various facilities it offers, including:

    Listening to songs or music via CD/DVD or over the Internet
    Watching videos via CD/DVD or over the Internet
    Playing games, with game applications installed locally or played online over the Internet
    Keeping in touch with friends through chat facilities or a webcam
    Watching TV, via a TV receiver card installed in the computer or through an online TV channel on the Internet

4. The Computer as an Educational Tool

Formerly we received education only through school, and beyond school, educational information could be obtained through radio, television, newspapers, and course centers. Using a computer, whether connected to the Internet or not, we can obtain education and knowledge, including:

    From the applications we install. Every application comes with a help menu (Help), a tutorial on how to use the program and practice with it.
    Applications we install can make us experts in several fields. For example: Adobe Photoshop can make us proficient in graphics; PowerPoint, in presentations; AutoCAD, in architectural design; and so on.
    Beyond installed applications, when connected to the Internet we can obtain education or knowledge in subjects such as history, culture, mathematics, social sciences, medicine or health, economics, politics, website design, languages (any language), religious knowledge, etc.

5. The Computer as an Information Tool
With a computer we can view or obtain the information we need, such as:

    Educational information: finding places of education
    Entertainment information: looking for entertainment venues
    Travel information: finding and booking transportation tickets
    Product information: finding the products you want
    Job information: looking for work
    News: following events and news at home and abroad
    Weather information: finding out current weather conditions
    Traffic information: knowing the traffic situation
    Health information: health tips and finding places for treatment
    Political information
    Trade information
    Business information: looking for opportunities to open a business
    and much more information, obtainable from a computer connected to the Internet.

6. The Computer as a Business Tool

Besides communication, easing work, and entertainment, computers can also be used as a tool for many businesses that bring in income for us, among others:

    Opening a computer rental
    Opening an Internet café (warnet)
    Opening a printing business
    Opening a video editing business
    Opening a ringtone and wallpaper business for mobile phones
    Opening a screen-printing business
    Opening a computer service and software installation business
    Opening an architectural design business
    Opening a graphic design business for advertising
    Opening a website design business
    Opening an accounting and financial programming business
    Opening a computer course
    Building web services as an information medium
    Writing and selling online books
    etc.

7. The Computer as a Control Tool

In some factories, hotels, and companies, computers are used as a means to control or operate systems such as:

    Controlling security cameras
    Controlling the operation of factory machine robots
    Controlling escalators
    Controlling lighting in a recording studio
    Controlling video editing equipment
    Controlling road traffic lights
    Controlling computer networks
    etc.

But alongside all the positive things that can be obtained from the computer, there are many negative sides caused by computer users, including:

    With computer entertainment facilities such as chat and games, many people get so absorbed that they forget their duties and responsibilities, such as studying or working.
    Accessing sites that display pornographic images and videos can damage a person's character.
    There are many acts of fraud, using websites to make money dishonestly.
    Programmers known as hackers can steal a person's data to be sold, and can damage systems.

In the end, whether a computer's functions are used for good or bad depends on the user, and I am sure the computer was not created for bad things, but to help humans by easing work in every field.

That concludes my first exposition on the functions and benefits of computers.

The Dangers of Antivirus Programs

Since its first appearance, the virus has undergone considerable technological development, and so have antivirus programs. Unfortunately, antivirus development usually only chases virus development rather than anticipating it. An antivirus that is out of date (technologically) can actually invite danger for its user.

By the time one virus is detected, new viruses keep appearing with more advanced technology that renders the antivirus helpless. An old antivirus, for example, can be fooled by stealth technology: while the antivirus is busy checking other files, the stealth virus actively spreads itself to every file being checked.

In various magazines you often see specific antivirus programs whose aim is to detect a single type of virus. Usually the antivirus makers do not explain the correct way to use these programs, even though a specific antivirus carries great risk if not used correctly.

A specific antivirus can detect only one type of virus (and possibly some of its variants) and is usually able to disable that virus in memory. If you find a virus and are sure of its name, you can use this kind of antivirus; if you are not sure, you should not try. If the active virus you found is actually a different virus, one the antivirus certainly cannot detect, the antivirus itself can end up spreading the existing virus to every program file it examines.

An even scarier danger arises when an antivirus misidentifies a virus and "cleans" it wrongly, leaving the program file it tried to fix damaged. This happened, for example, in the case of the DenHard virus: it closely resembled Die Hard, but used a different technique for restoring the original file header, so some antivirus programs that tried to clean it destroyed the program files it infected instead. Besides the DenHard case, this has happened (and probably will keep happening) with other viruses as well. One reason virus writers create look-alike viruses is precisely to make them hard to clean, because virus makers do not like their viruses being easily cleaned by users.

SOURCES OF ANTIVIRUS DANGER

Antivirus programs can be dangerous because of the following reasons:

    Some antivirus programs use only simple techniques that a virus can easily outwit. For example, an antivirus that checks only a few bytes at the beginning of a virus can be fooled by another version of the virus with the same beginning but different important parts, such as the encryption/decryption routine for the original file header. That turns the antivirus into a destroyer of program files rather than their savior. Some antivirus programs can also be tricked by varying the signature file. The signature file contains the IDs of all viruses the antivirus knows; if an ID is altered, the antivirus no longer recognizes the virus. A good antivirus should be able to check whether its signature file has been changed.
    Some antivirus programs do not back up the files they clean. Often an antivirus (mainly the specific kind) provides no means of backing up the files it cleans, even though this is very important in case the cleaning process fails.
    Some antivirus programs do not perform a self check. A self check is necessary because an antivirus program may be altered by someone else before it reaches the user's hands. Commercial antivirus programs usually self-check to make sure they have not been modified, but some do not, and that is dangerous. Local antivirus programs, often included with articles in computer magazines, typically come with source code; you should compile the source yourself if you doubt the authenticity of the exe file.
    Some resident antivirus programs can be switched off easily. A good resident antivirus should not be easy to detect and uninstall. An example of a poor resident antivirus is VSAFE (in the DOS package): VSAFE can be detected and disabled through an interrupt call (try studying/debugging the VSAFE program in DOS to see how). Users get a false sense of security from this kind of antivirus, and no sense of security is better than a false one.
    Some antivirus programs give no expiration warning. As time passes, new viruses appear with ever more sophisticated techniques. A good antivirus should warn the user when it is too far out of date, so that incidents of an antivirus spreading a virus do not recur.
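The self check and signature-file check described above both boil down to comparing a digest recorded at release time against one recomputed at startup. A minimal sketch (the file name and contents here are made up for illustration; a real product would protect the reference digest itself as well):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Recompute the SHA-256 digest of a file on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def self_check(path, known_good_digest):
    """True if the file still matches the digest recorded at release time."""
    return file_digest(path) == known_good_digest

# Demo: record the digest of a fake signature file, then tamper with it.
sig = os.path.join(tempfile.mkdtemp(), "signatures.dat")
with open(sig, "wb") as f:
    f.write(b"VIRUS-ID-0001:DEADBEEF\n")
good = file_digest(sig)
print(self_check(sig, good))   # True: file untouched

with open(sig, "ab") as f:
    f.write(b"VIRUS-ID-0002:TAMPERED\n")  # simulate a modified signature file
print(self_check(sig, good))   # False: warn the user and refuse to run
```

The same routine applied to the antivirus's own executable is the "self check" of point 3.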

HERE IS WHAT YOU NEED TO DO AS A USER

As an antivirus user, there are several things you can do to minimize the risks of using an antivirus:

    Look for a good antivirus — "good" here meaning a program that can be trusted to detect and eradicate the viruses that exist. Do not be seduced by vendors' promises, and do not be lulled by a famous brand name. Try to find comparisons between the various antivirus products in magazines and on websites.
    Always use the latest antivirus; you can get updates from the Internet or from magazines. An old antivirus is very risky to use (more than 6 months old is already dangerous).
    Make backups of your important data and programs.
    Carry out the virus-cleaning process correctly if you find a virus.
    Make sure the antivirus program you use is the original; someone may have altered the antivirus, or perhaps infected it with a virus.
    Call an expert if you feel unable to cope with a virus on your computer or network.

A good cleaning process goes as follows:

If you run a personal computer:

    Boot your computer with a startup floppy disk that is clean of viruses (and write-protected)
    Run the virus scanner/cleaner program on an infected file
    Try running the file; if the file turns out corrupted, do not go any further
    If the program runs smoothly, try it again on a few more files (choose small, medium, and large ones). Very large files need checking because they usually contain internal overlays, which leave the file damaged if it is hit by a virus.

If you are a network administrator, take a sample of the virus on a floppy disk and try cleaning it on another computer, so as not to disturb work that others may be doing. This also anticipates the possibility that the new virus merely resembles another virus (imagine what would happen if the cleaning went wrong and every program on the network became unusable!). If cleaning fails, call in an expert to handle it, or seek further information on the Internet. The trial on several files is meant to prevent false detection and incorrect repair by the antivirus. If the virus is considered dangerous and activities using the network can be postponed, consider shutting the network down temporarily.

IF YOU ARE A PROGRAMMER, HERE IS WHAT YOU NEED TO DO

Nowadays, being a good antivirus programmer is not easy. You need to know virus programming techniques, which grow more difficult every day, and the antivirus you write must keep up with virus technology. Making a good antivirus is not easy, but there are some things to remember as an antivirus author if you want your program to be used by others without endangering them:

    Your program should be able to turn off the virus in memory, and warn the user if something strange is present in the computer's memory (for example, base memory dropping below 640 KB).
    When taking virus IDs, pick several locations; good locations are the beginning of the virus and its important parts (for example the decryption header for the original program), to make sure nothing has changed in those locations or in the encryption scheme (if any) of the original program header.
    If the data/header is encrypted, verify the data obtained from your calculations; for instance, check whether the original CS and IP you derive still fall within the limits of the file, or whether the first jmp instruction in a COM file is reasonable (less than the length of the file).
    Create a backup of the file if cleaning could leave it damaged.
    Perform a self check at the start of the program. If not all parts of the program can be self-checked, at least verify that the virus IDs have not been changed (e.g. via a checksum).
    Provide a clear explanation of how to use the antivirus.
    If the program can only run under DOS, always check at startup that it is actually running under DOS.
    If you make a resident antivirus, do not keep virus IDs unencrypted in memory; another antivirus that does not recognize yours may take them for one (or several) active viruses in memory. This can happen because some antivirus programs scan all of memory for virus IDs.
    For a non-resident antivirus, technique no. 8 is also needed, so that other antivirus programs do not think your program is infected. Programs can also leave traces in memory that another antivirus may suspect as a virus; if you do not want to apply this technique, you can erase the ID variables from memory after use.
    If possible, for polymorphic viruses use heuristic methods (or emulation) to scan, and emulation techniques to decrypt or restore the original program.
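The advice in points 8 and 9 — never keep raw virus IDs sitting in memory — can be illustrated with a trivial XOR masking, comparing candidates by re-masking them rather than ever unmasking the stored ID (a toy sketch with a made-up ID string; real products use stronger schemes):

```python
KEY = 0x5A  # arbitrary single-byte mask for this sketch

def obfuscate(data: bytes) -> bytes:
    """XOR-mask a byte string so the plain signature never sits in memory."""
    return bytes(b ^ KEY for b in data)

# Stored form: only ever the masked bytes, never the plain signature.
stored_id = obfuscate(b"VIRUS-ID-0001")

def matches(candidate: bytes) -> bool:
    """Compare by masking the candidate instead of unmasking the stored ID."""
    return obfuscate(candidate) == stored_id

print(matches(b"VIRUS-ID-0001"))   # True: the scanned bytes are this virus
print(matches(b"SOMETHING-ELSE"))  # False: no match
```

Because XOR is its own inverse, masking the candidate and comparing is equivalent to unmasking the stored ID, but a memory scanner sweeping for plain signatures finds nothing.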

Those 10 points should be enough; you can add to them if necessary, for example on scanning speed and other matters.

Hopefully, after reading the article above, antivirus users and programmers will have gained new knowledge about antivirus software. As a user, you should be more careful and diligent about updating your antivirus. This is essential, especially for those connected to the Internet: many viruses spread themselves via email, and by exploiting bugs in your email client some viruses can spread without your knowing it (at the time this article was written, reliable sources reported a bug in Outlook that allowed an attachment to be executed without the user's knowledge).

As for antivirus programmers, hopefully you are moved to learn more about virus techniques and the techniques to eradicate them. Today there are not yet many virus writers in Indonesia, but when a variety of high-tech home-grown viruses eventually emerges, surely we should be able to remove them (properly, of course) — wouldn't we be ashamed to have to rely on foreign-made antivirus software?

This article is neither a complete article on writing a good antivirus nor a complete tutorial on using antivirus software correctly; it is only a short piece meant to make users and programmers more aware of viruses by paying more attention to the antivirus side.




Sunday, July 17, 2011

Power Supply

Besides its main components — motherboard, processor, memory, and others — a computer needs a power supply. The power supply is an essential component of a computer. It has several connectors for distributing power to the computer's components, among others the motherboard, the CD/DVD drive, the HDD, and so on.

The power supply has developed along with the motherboard and the processor. There are several types of power supply, among others:

Power To Motherboard

1. AT Power Supply
Has a 12-pin motherboard connector, used in older computers from the Pentium II era down. Power reaches the motherboard through two cables of 6 pins each: one part is called Power 8 (P8) and the other Power 9 (P9). When attaching P8 and P9, align them so the black wires sit together in the middle. Besides the motherboard power cables, it also comes with Molex and Berg connectors.

2. 20-Pin ATX Power Supply
Used for Pentium II and Pentium III computers. Power is distributed to the motherboard through a 20-pin ATX connector, accompanied by floppy disk connectors (Berg) and connectors for hard drives, DVD drives, etc. (Molex), which carry two voltages: 12 volts and 5 volts.

3. ATX 20+4-Pin Power Supply
With the development of the motherboard and processor, as used for the Pentium 4 and above, the power connection to the motherboard uses two connectors — one with 20 pins and one with 4 pins — and also comes with several Molex and Berg connectors.

Now let us go through the cable connectors on the power supply one by one, apart from the motherboard power:

1. Berg connector: a small connector used to power the floppy disk drive, usually found on older computers from the Pentium 4 era down. Many power supplies today omit the Berg connector, since floppy disks are no longer used, having been replaced by flash disks. This connector carries 12 V (black (−) and yellow (+)) and 5 V (black (−) and red (+)).

2. Molex connector: consists of 4 pins carrying 12 volts (black (−) and yellow (+)) and 5 volts (black (−) and red (+)), used to power the hard drive and the CD/DVD drive.

3. SATA power connector: as technology evolved and computer equipment moved to SATA, the Molex connector gradually stopped being the only option on power supplies; the SATA connector is now also used to power the HDD and CD/DVD drive. Some power supplies still use Molex connectors together with a Molex-to-SATA adapter cable.

4. 4-Pin 12 V Intel connector
This connector supplies power to motherboards using Intel processors (and some AMD processors), starting from part of the Pentium 4 line and from the Pentium Dual-Core upward. Its function is to provide an additional 12-volt supply for the processor.

5. 6-pin PCI-E connector: rarely found on ordinary PCs. This connector serves to add power to a PCI-E graphics card, which will be used for heavy graphics work, video editing, and 3D games. Not every power supply has this connector — only certain ones sold with a 6-pin PCI-E lead — and it is needed only if your PCI-E VGA card has a socket for extra power.
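The wire-color coding repeated in the Berg and Molex descriptions above can be summed up in a small lookup table (standard ATX peripheral-connector colors; a reference sketch only):

```python
# Standard ATX peripheral-connector wire colors and their nominal voltages.
WIRE_VOLTAGE = {
    "yellow": 12.0,  # +12 V rail (drive motors)
    "red":    5.0,   # +5 V rail (drive logic)
    "black":  0.0,   # ground return
}

def rail_voltage(color):
    """Look up the nominal voltage for a wire color on a Molex/Berg connector."""
    return WIRE_VOLTAGE[color.lower()]

print(rail_voltage("yellow"))  # 12.0
print(rail_voltage("red"))     # 5.0
```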

dunovteck

Saturday, July 16, 2011

Controlling Electrical Appliances with Computers

Computers have many uses — not only typing, playing games, the Internet, and entertainment; a computer can also serve as a tool to control household appliances that run on electricity.

Here is a circuit that uses the printer port of a PC for control applications, using software plus some interface hardware. The interface circuit, with the software given, can be used with the printer port of any PC to control up to eight electrical appliances.

The interface circuit shown in the figure covers just one device, controlled by bit D0 at pin 2 of the 25-pin parallel port. Identical circuits for the remaining data bits D1 through D7 (available at pins 3 through 9) should be wired the same way. The use of an opto-coupler ensures that the relay driver circuit is completely isolated from the PC.

The control software can be implemented in many ways. In C/C++ one can use the outportb(portno, value) function, where portno is the parallel port address (usually 378hex for LPT1) and value is the data byte to send to the port. For value = 0 all outputs (D0–D7) are off. For value = 1, D0 is on; value = 2, D1 on; value = 4, D2 on; and so on. E.g. value = 29 (decimal) = 00011101 (binary) → D0, D2, D3, D4 are on and the rest off.
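The value byte is just a bitmask with one bit per output. A small sketch of the bit arithmetic (pure Python, no port access — on a real machine the byte would be written to 0x378 with outportb or a similar low-level call; the helper names here are our own):

```python
def value_for_outputs(on_bits):
    """Build the parallel-port data byte from a list of outputs (0-7) to switch on."""
    value = 0
    for bit in on_bits:
        value |= 1 << bit  # set bit Dn to turn output n on
    return value

def outputs_for_value(value):
    """Decode a data byte back into the list of outputs that are on."""
    return [bit for bit in range(8) if value & (1 << bit)]

print(value_for_outputs([0, 2, 3, 4]))  # 29, i.e. binary 00011101
print(outputs_for_value(29))            # [0, 2, 3, 4]: D0, D2, D3, D4 on
```

This reproduces the worked example above: switching on D0, D2, D3, and D4 means sending the decimal value 29 to the port.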

Friday, July 15, 2011

How Ink Cartridges Work

The ink cartridge is an important component used to print various documents. The cartridge can be refilled, or replaced with a new one. Printing is not possible without ink cartridges. This article looks at ink cartridges and how they help produce print-outs of superb quality.

In an inkjet printer, the cartridge works roughly like this: inside the cartridge is a partitioned reservoir of ink placed next to a small piece of metal. When the computer signals the printer, current flows across the metal plate, heating it. The heated metal vaporizes a little of the ink, which forms tiny bubbles; the bubbles push ink droplets out through the nozzle one by one, and each droplet lands on the paper. The process is very fast — it takes only a few milliseconds — so printing happens in a very short time.

Printing through an ink cartridge can proceed smoothly and quickly only if the ink flows along a smooth path, so the ink must be stored in liquid form. Sometimes there are complications in the printing process, and blocked or dried ink is one of the main causes. If a problem does arise because the ink has dried, the dried ink can be removed by gently rubbing the head with isopropyl alcohol.

Cartridges are filled with separate colors: one cartridge holds black ink and another holds the three primary printing colors — cyan, magenta, and yellow — and the presence of each color in the cartridge is a must. In addition, there are certain other cartridges used mainly in photo printers.

Some of the best cartridges on the market are Epson ink cartridges, a very popular choice. The ink in them is high quality and dries quickly on paper, which has made them a great success among other products. These cartridges can be obtained from office supply stores and also on the web.

Each ink cartridge contains a number of separate ink tubes. High-end cartridge manufacturers add electronic components to them, which makes it easy for the cartridge to communicate with the printer. The popular printer maker HP uses a thermal sensor in its cartridges, which lets the nozzle spread ink evenly over the paper in response to signals sent by the printer. When ink begins to dry on the print head, the cartridge needs to be changed or refilled immediately. Delaying the refill can damage the print head, because the ink in the cartridge acts as a coolant protecting the heating element inside; once it dries out, the thermal sensor runs hot and the heating element burns, causing permanent damage to the print head.

Original ink cartridges can usually be very expensive, so people often turn to compatible ink cartridges — an attractive alternative to originals, available at various online stores.
dunovteck 

Thursday, July 14, 2011

Processor


Technology, especially computer technology, develops very rapidly: before we are even satisfied with the product we just bought, new products have appeared and the old ones have been discontinued or have disappeared from the market. This is especially true of processors. Their rapid development has driven the development of other computer components such as memory and motherboards, and not just hardware: software development accelerates as well, so the computer we bought two years ago already feels outdated.

The processor is the hardware component that serves as the central regulator, controlling and processing all data activity in the computer.

Parts Inside the Processor

1. Register
The part of the processor with a very high transfer rate that serves as temporary storage while the processor is working on data. Specifically, registers store the location of the next instruction to be fetched, hold the instruction while it is decoded, hold the operands the ALU is processing, and store the ALU's results. Registers come in 16-, 32-, and 64-bit sizes.

2. ALU (Arithmetic Logic Unit)
The part of the processor that performs arithmetic calculations and logic operations while the processor is working.

3. CU (Control Unit)
Serves to translate instructions into commands and to execute those commands.

What to Look For in a Processor

FSB (Front Side Bus)

The FSB serves as the data transport path to the processor: the higher the FSB, the more data can be transferred. Intel processors use a technology called quad pumping, with the FSB quoted in Hz (hertz). The quad-pumped FSB is 4 times the native FSB. For example, if a brochure lists an FSB of 800 MHz, the processor's native FSB is only 200 MHz.

AMD processors, meanwhile, use an FSB technology called HyperTransport, quoted in T/s (transfers per second), which doubles the effective bus rate. It works like DDR (double data rate) technology, transferring on both the rising and falling edges of the signal. For example, if the native bus runs at 1000 MHz, HyperTransport effectively runs at 2000 MT/s.
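The effective-rate arithmetic above can be sketched in Python (the function names and sample clock values are illustrative, not vendor specifications):

```python
# Sketch of the effective-bus-rate arithmetic described above.
# Function names and example clocks are illustrative assumptions.

def intel_effective_fsb(native_mhz: int) -> int:
    """Quad pumping transfers 4 times per clock cycle."""
    return native_mhz * 4

def amd_effective_rate(native_mhz: int) -> int:
    """HyperTransport is double data rate: 2 transfers per clock."""
    return native_mhz * 2

print(intel_effective_fsb(200))   # an "800 MHz" FSB has a 200 MHz native clock
print(amd_effective_rate(1000))   # 1000 MHz native -> 2000 MT/s effective
```

Either way, the advertised figure is the native clock multiplied by the number of transfers squeezed into each cycle.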

Cache Memory

Cache memory is the part of the processor that stores data about to be processed; the cache size determines how much data the processor can work on in one go. Cache memory is measured in bytes. The cache levels, from fastest to slowest, are:
L1 Cache: fastest (currently around 30 GB/s)
L2 Cache: slower than L1 cache (currently around 12 GB/s)
L3 Cache: slower than L2 cache (not present on most processors)

Core / Clock Speed

Core/clock speed is the speed at which the processor processes data, measured in Hz.

Pin
Pins are the legs that mount the processor directly into the socket on the motherboard. On LGA processors, however, the pins are located not on the processor but on the motherboard.




Wednesday, July 13, 2011

Parts of the Disk

As we know, the hard drive stores data and documents, as well as the installed OS and application programs. A hard disk is actually in the same class as memory, but it is permanent memory: stored data and documents are not lost when the system shuts down.
An HDD contains several critical components. By knowing these components we can better care for our hard drives and keep the documents and data stored on them safe. If your hard drive is damaged, the important data on it is damaged with it; but if the motherboard or another component fails while the hard drive survives, you can replace the failed component, reinstall your hard drive, and the data on it remains intact.

Here are some important components of the HDD:



Platter

A platter is shaped like a plate or dish, similar to a compact disc, and serves as the data store, with a magnetic pattern on both of its surfaces. Platters are made of metal containing millions of tiny magnetic regions called magnetic domains. Each domain is set in one of two directions to represent a binary "1" or "0".

Each platter is divided into tracks and sectors, and these tracks and sectors are where data and the file system are stored. For example, a hard drive sold as 40 GB yields somewhat less than 40 GB after formatting, because some tracks and sectors must be used to store the identification data written when the drive is formatted.

The number of platters differs from disk to disk, depending on the technology used and the capacity of the drive. On recent drives, a single platter usually holds 10 to 20 gigabytes; a 40 gigabyte hard drive, for example, usually consists of two platters of 20 gigabytes each.

Spindle

The spindle is the shaft on which the platters are mounted. It is turned by a drive called the spindle motor. The spindle plays a part in determining a hard drive's quality, because the faster the rotation, the better the drive performs. Rotational speed is measured in rotations per minute (RPM); the speeds we often hear of are 5400 RPM, 7200 RPM, and 10,000 RPM.
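Spindle speed translates directly into average rotational latency: on average the head must wait half a revolution before the wanted sector passes beneath it. A small illustrative calculation:

```python
# Average rotational latency: on average the head waits half a revolution.

def avg_rotational_latency_ms(rpm: int) -> float:
    ms_per_revolution = 60_000 / rpm  # 60,000 ms in one minute
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10_000):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 RPM -> 5.56 ms, 7200 RPM -> 4.17 ms, 10000 RPM -> 3.00 ms
```

This is why a 7200 RPM drive feels noticeably snappier than a 5400 RPM one for random access.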

Head

The head reads data from the platter surface and records information onto it. Each platter has two heads: one above the surface and one below it.

The head is an electromagnetic device positioned over the platter surface and attached to a slider. The slider is attached to an arm, the arms are mounted on the actuator shaft, and the actuator is fixed in place and controlled by a board called the logic board.

For this reason, the hard drive must not be subjected to shocks or vibration while it is working: the head can scrape the disk surface, which leads to bad sectors, and the head itself can be damaged so that it can no longer read the tracks and sectors of the disk.

Logic Board

The logic board is the circuit board that operates the hard disk. It holds the drive's BIOS, so that when the drive is connected to the motherboard it is automatically recognized (as a Maxtor, Seagate, etc.). Besides the BIOS, the logic board also switches the power supply and routes data between the head and the motherboard for processing by the processor.

Actuator Axis

This is the shaft that holds the actuator arm, acting as the robotic arm that lets the head read the sectors of the hard drive.

Ribbon Cable

The ribbon cable connects the head to the logic board: every document or piece of data read by the head is sent to the logic board and then on to the motherboard, so the processor can process the data according to the input received.

IDE connector

This is the cable that connects the hard drive to the motherboard to send and receive data.
Nowadays the average hard drive already uses SATA, so it no longer requires the IDE ribbon cable.

Jumper Settings

Every hard disk has jumper settings; their function is to determine the role of the drive.

When we install two hard drives in a computer, the jumper settings determine which drive is primary and which is secondary, usually called Master and Slave.

The Master is the main drive where the system is installed, while the Slave is the second drive, usually used for storing documents and data. If the jumpers are not set, the hard drives will not work properly.

Power connector

This is the power feed coming directly from the power supply. The supply to a hard disk has two parts:
12 V, which drives the mechanics such as the platters and heads.
5 V, which supplies the logic board so it can send and receive data.

That concludes this discussion of the hard drive; I hope this article is useful to you.




Tuesday, July 12, 2011

SDRAM, DDR and RDRAM

• SDRAM (Synchronous Dynamic RAM)

- A type of RAM introduced in 1996. SDRAM is legendary RAM that survived a long time through the development of computer systems. As the name implies, Synchronous Dynamic RAM has the ability to synchronize its clock with the processor's clock. If RAM and processor run on the same clock, the system is in balance because data flows smoothly between them. Technically, SDRAM has 168 pins, runs at 3.3 V, and supports a 100/133 MHz FSB. SDRAM is no longer used on current platforms; it was last used on first-generation Pentium 4 systems. SDRAM types: 32, 64, 128, 256, and 512 MB PC100/133.

• DDR (Double Data Rate)

- A type of RAM that is a further development of SDRAM technology, introduced in 2000. DDR was first created as a major competitor to RDRAM, the Rambus memory Intel adopted for the early Pentium 4 generations, and it became the mainstream for computer platforms. Technically, DDR has 184 pins, runs at 2.5 V, and supports a 266/333/400 MHz FSB. In theory DDR has twice the throughput of SDRAM, carrying 2 bits per clock where SDRAM carries only 1. DDR is still used on various platforms, such as the Pentium 4 and Celeron D, and will soon be replaced by DDR2 technology. DDR types: 128, 256, 512, and 1024 MB PC2100/2700/3200.

• DDR2 (Double Data Rate Generation 2)

DDR2 is the next generation of DDR with improvements in various features, such as BGA (Ball Grid Array) IC packaging, which is heat resistant and high density, and a higher FSB. Technically, DDR2 has 240 pins, runs at 1.8 V, and supports a 400/533/667/800 MHz FSB. DDR2 has a greater capacity than DDR, reaching up to 2 GB per module, and is becoming the standard for all Intel platforms from 2006 onward. DDR2 types: 256, 512, and 1024 MB PC3200/4300/5300/6400.

• RDRAM (Rambus Dynamic RAM)

A type of RAM first created in 1999. RDRAM uses a technology developed by a company called Rambus. RDRAM's bandwidth was able to match the bandwidth requirements of the Intel Pentium 4 processor, and dual-channel technology was first introduced with RDRAM. Unlike DDR and SDRAM, which use parallel transfers, RDRAM uses serial transfers. Technically, RDRAM has 184 pins, runs at 2.5 V, and supports an 800 or 1066 FSB with a 16-bit (2-byte) architecture. RDRAM is no longer used in computers because its price was too high and its performance has been equaled by DDR/DDR2. RDRAM types: 64, 128, 256, and 512 MB PC800/1066.
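The module names above (PC133, PC3200, PC800, and so on) encode peak bandwidth, which follows from the clock, the transfers per clock, and the bus width. A small sketch, using the common textbook parameter values as assumptions:

```python
# Peak transfer rate = clock x transfers per clock x bus width in bytes.

def peak_mb_per_s(clock_mhz: int, transfers_per_clock: int, bus_bits: int) -> int:
    return clock_mhz * transfers_per_clock * (bus_bits // 8)

print(peak_mb_per_s(133, 1, 64))  # SDRAM PC133: ~1064 MB/s
print(peak_mb_per_s(200, 2, 64))  # DDR PC3200: 3200 MB/s (hence the name)
print(peak_mb_per_s(400, 2, 16))  # RDRAM PC800: 1600 MB/s per 16-bit channel
```

Note how RDRAM's narrow 16-bit serial channel trades bus width for a much higher clock, while DDR keeps the wide 64-bit bus and doubles the transfers per clock.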

Virtual Memory vs. RAM
Virtual memory is temporary storage space used to run programs that require more memory than the physical memory provides. In other words, virtual memory holds programs and data that do not fit in physical memory.
Virtual memory is slower than physical memory.
Using too much virtual memory can decrease system performance. Accordingly, Windows moves rarely used processes to virtual memory and keeps frequently used processes in physical memory, which is very efficient.
The size of virtual memory can be changed. Windows recommends a minimum virtual memory size of 1.5 times physical memory. If you have multiple hard disks, for example a first disk C: and a second disk D: that you rarely use, you can move the virtual memory to the D: drive. Moving virtual memory to a rarely used disk will slightly improve performance, because the head of the first disk is usually very busy opening programs and documents, saving files, and much more. But remember, this trick is useless if both drives are on the same physical disk, in other words merely separate partitions.
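The 1.5x rule of thumb above is easy to express directly (a sketch; the factor is Windows' recommended minimum, not a hard requirement):

```python
# Windows' recommended minimum page-file size: 1.5 x physical RAM.

def recommended_virtual_memory_mb(physical_mb: int, factor: float = 1.5) -> int:
    return int(physical_mb * factor)

print(recommended_virtual_memory_mb(512))   # 768 MB
print(recommended_virtual_memory_mb(2048))  # 3072 MB
```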
A process's address space can be logically sequential even though its pages are spread across physical memory (mapped via the MMU).
The gains from keeping only part of a program in physical (main) memory are:
Less I/O is needed (I/O traffic is low)
Memory use becomes more flexible because less physical memory is consumed
Response improves because the I/O and memory load is reduced
More users can be served, since the memory still available lets the computer accept more requests



Monday, July 11, 2011

Universal Serial Bus

Universal Serial Bus (USB) is a serial bus standard for connecting devices, usually to computers but also used in other devices such as game consoles, cell phones and PDAs.

USB can connect additional equipment such as computer mouse, keyboard, image scanners, digital cameras, printers, hard disk, and networking components. USB has now become the standard for multimedia equipment such as image scanners and digital cameras.

USB is a host-centric bus: the host initiates all transactions. The first packet, a token, is generated by the host to describe whether the packets following it will be read or written, and which device and endpoint they are for. The next packet is a data packet, followed by a handshaking packet reporting whether the data or token was received successfully or the endpoint failed to receive the data.

Each transaction on the USB consists of:
A token packet (a header describing the data that follows)
An optional data packet (containing the payload)
A status packet (to acknowledge the transaction and provide error correction)
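The three-phase sequence above can be sketched as a toy model. This is a schematic illustration only: the packet IDs are simplified, and the 64-byte size rule is invented for the example, not part of the USB specification.

```python
# Toy model of a USB transaction: token -> data -> handshake.
# Packet IDs are simplified; the 64-byte "rule" is an invented stand-in.

TOKENS = {"IN", "OUT", "SETUP"}

def transaction(token: str, payload: bytes) -> list:
    assert token in TOKENS, "the host must start with a token packet"
    packets = [{"pid": token}]                          # 1. token phase (host)
    packets.append({"pid": "DATA0", "data": payload})   # 2. data phase
    handshake = "ACK" if len(payload) <= 64 else "NAK"  # 3. status phase
    packets.append({"pid": handshake})
    return packets

print([p["pid"] for p in transaction("OUT", b"hello")])
# ['OUT', 'DATA0', 'ACK']
```

The point is the fixed choreography: the host always speaks first with a token, and every data phase is answered by a status packet.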

Designing the equipment using the USB

To build a device that communicates via the USB protocol, you do not necessarily need to know the USB protocol in detail; sometimes no knowledge of it is needed at all, beyond the specifications required for your device. In fact, implementing the USB protocol yourself on an FPGA or other device is very inefficient and wastes a lot of design time. Using a USB controller is the recommended way to build devices that communicate over this protocol. USB controllers come in many forms, from 8051-based microcontrollers with direct USB input/output to converters from serial protocols such as the I2C bus to USB.

USB controllers are usually sold together with various facilities that simplify development, including a complete manual, drivers for Windows, sample application code for accessing USB, example code for the controller itself, and electronic circuit schematics.

When developing software applications on a personal computer, the communication between the USB hardware components needs little attention, because Windows or another operating system takes care of it. Software developers only provide the data to be sent to the USB device's storage buffer and read data from the device's read buffer. Windows often already supplies the drivers, except for equipment with special specifications, for which we have to write our own.

dunkom

Sunday, July 10, 2011

Microprocessor

A microprocessor (abbreviated µP or uP) is an electronic computer central processing unit (CPU) made from miniaturized transistors and other circuit elements on a single semiconductor integrated circuit.

Before the development of microprocessors, electronic CPUs were made from separate TTL integrated circuits; before that, from individual transistors; and before that, from vacuum tubes. There have even been designs for simple computing machines based on mechanical parts such as gears, shafts, levers, Tinkertoy, etc.

The evolution of microprocessors has been known to follow Moore's Law, describing their increase in performance year over year. The law suggests that computing power doubles every 18 months, a process that has actually held since the early 1970s, to everyone's surprise. From their beginnings as calculator engines, microprocessors have grown in power until they dominate every type of computer; every system from the largest mainframe to the smallest handheld computer now uses a microprocessor at its core. The first microprocessors appeared in the early 1970s and were used in electronic calculators, performing binary-coded decimal (BCD) arithmetic in 4 bits. Other embedded 4-bit and 8-bit uses, such as terminals, printers, and various kinds of automation, followed rather quickly. Affordable 8-bit processors with 16-bit addressing led to the first general-purpose microcomputers in the mid-1970s.
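The 18-month doubling can be put into numbers (a sketch; the doubling period is a parameter, and the figures are illustrative rather than historical data):

```python
# Moore's-law growth: capacity doubles every fixed period.

def doublings(years: float, period_years: float = 1.5) -> float:
    """Growth factor after `years` of doubling every `period_years`."""
    return 2 ** (years / period_years)

# After 15 years of 18-month doublings, capacity grows by:
print(doublings(15))  # 2**10 = 1024x
```

Compounding at this rate is what turned a calculator chip into the heart of mainframes within two decades.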

Characteristics Microprocessor
Here are the important characteristics of the microprocessor:
Internal data bus size: the number of channels inside the microprocessor, which determines how many bits can be transferred between components within it.
External data bus size: the number of channels used to transfer data between the microprocessor and components outside it.
Memory address size: the amount of memory that the microprocessor can address directly.
Clock speed: the rate of the clock that paces the microprocessor's work.
Special features: features that support specific applications, such as floating-point processing, multimedia, and so on.
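The memory address size characteristic maps directly to the amount of directly addressable memory, 2 raised to the number of address bits:

```python
# Directly addressable memory grows as 2**(address bits).

def addressable_bytes(address_bits: int) -> int:
    return 2 ** address_bits

print(addressable_bytes(16))  # 65536 bytes = 64 KB
print(addressable_bytes(20))  # 1 MB (e.g. the 8086's 20-bit addressing)
print(addressable_bytes(32))  # 4 GB
```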

For a period of time, computer processors were built from small- and medium-scale ICs, each containing the equivalent of up to a few hundred transistors. Integrating the whole CPU onto a single chip greatly reduced the cost of processing capacity. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computer almost completely obsolete, with one or more microprocessors serving as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

Since the early 1970s, the increase in microprocessor capacity has been known to generally follow Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every two years. Since the late 1990s, in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static leakage current, and other factors, has emerged as a leading design constraint.

Intel 4004

The Intel 4004 is generally regarded as the first microprocessor, and it cost thousands of dollars. The first known advertisement for the 4004 dates to November 1971; it appeared in Electronic News.

The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a high-performance chipset for desktop calculators. Busicom's original design called for a programmable chip set of 7 different chips, three of them forming a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM for data storage rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architecture proposal: a ROM chip for storing the program, a dynamic RAM chip for storing data, a simple I/O device, and a 4-bit central processing unit (CPU) which, although he was not a chip designer, he felt could be integrated onto a single chip. This chip was later called the 4004 microprocessor.

The architecture and specification of the 4004 were the result of interaction between Intel's Hoff, Stanley Mazor, a software engineer reporting to Hoff, and Busicom engineer Masatoshi Shima. In April 1970 Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who had originally developed silicon gate technology (SGT) in 1968 at Fairchild Semiconductor (and had also designed the world's first commercial integrated circuit using SGT, the Fairchild 3708), had the right background to lead the project, because it was SGT that made it possible to design a CPU on one chip with the right speed, power dissipation, and cost. Faggin also developed a new methodology for random logic design, based on silicon gate, which made the 4004 possible. The first production units of the 4004 were delivered to Busicom in March 1971, and shipped to other customers at the end of 1971.

TMS 1000

The Smithsonian Institution credits TI engineers Gary Boone and Michael Cochran with creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.

TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on 17 September 1971 that implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on 15 November 1971 and developed by Federico Faggin, who led the design of the 4004 in 1970-1971, and Ted Hoff, who led the architecture in 1969. The head of MOS design was Leslie L. Vadász.

TI filed a patent on the microprocessor, and Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on a lab bench. In both 1971 and 1976, Intel and TI entered into patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patents. A good history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patents.

A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some memory, and I/O (input/output) lines, all on one chip; it is also referred to as a microcontroller. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4,074,351, was awarded to TI's Gary Boone and Michael J. Cochran. Apart from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is closer to a microcontroller.

Pico / General Instrument

In early 1971 Pico Electronics and General Instrument introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe Royal Digital III calculator. This IC can also be claimed to be one of the first microprocessors, or microcontrollers, having ROM, RAM, and a RISC instruction set on-chip. Pico was a spinout of five GI design engineers whose vision was to create a single-chip calculator IC. They had significant prior design experience on multiple calculator chipsets with both GI and Marconi-Elliott. Pico and GI went on to have significant success in the growing handheld calculator market.

8-bit designs

The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. According to A History of Modern Computing (MIT Press), pp. 220-21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, TX, to design a chip for their terminals. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. It was the basis of the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974.

The 8008 was the precursor of the very successful Intel 8080 (1974), the Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974, and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivaled the Z80 in popularity during the 1980s.

Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several companies. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable pacemakers and defibrillators, and in automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor designs, followed by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s.

Motorola introduced the MC6809 in 1978, an ambitious and well-thought-through 8-bit design that was source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as design requirements were getting too complicated for purely hard-wired logic.)

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture. A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC), introduced in 1976, which was used in NASA's Voyager and Viking space probes of the 1970s and onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could run on very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharge than other processors of the time. Thus, the 1802 is said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use minimal electric power during long uneventful stretches of travel. Timers and/or sensors would awaken or speed up the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.

12-bit designs

The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognized the DEC PDP-8 minicomputer instruction set, and as such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and the associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.

16-bit designs

The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. In the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.

Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both introduced in the period 1975-1976. The first single-chip 16-bit microprocessor was the TI TMS 9900, which was also compatible with the TI 990 line of minicomputers.

The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080; it had the full TI 990 16-bit instruction set, used a plastic 40-pin package, and moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110. Western Design Center, Inc. (WDC) introduced the CMOS 65816, a 16-bit upgrade of the WDC CMOS 65C02, in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later of the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel followed a different path. Having no minicomputers to emulate, it instead "upsized" its 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 line, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up on the 8086 and 8088, Intel released the 80186, the 80286 and, in 1985, the 32-bit 80386, cementing its dominance of the PC market with the processor family's backwards compatibility.

The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel, and awarded U.S. patent number 4,442,484.

32-bit designs

The 16-bit designs had only been on the market briefly when 32-bit designs began to emerge. The most significant of the 32-bit designs was the Motorola MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly had 32-bit architecture. The combination of high performance, a large (16 megabytes, or 2^24 bytes) memory space, and fairly low cost made it the most popular CPU design of its class.
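The 16 MB figure above follows directly from the 24-bit address width; a small sketch (illustrative, not part of the original article) makes the arithmetic explicit:

```python
# Address-space sizes implied by address width:
# a processor with n address bits can select 2**n distinct byte locations.
def address_space_bytes(address_bits: int) -> int:
    return 2 ** address_bits

print(address_space_bytes(16))  # 65536 bytes = 64 KB (typical 8-bit-era limit)
print(address_space_bytes(24))  # 16777216 bytes = 16 MB (MC68000)
print(address_space_bytes(32))  # 4294967296 bytes = 4 GB (full 32-bit designs)
```

The same formula explains why the full 32-bit designs that followed could address 4 GB.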

The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga. The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All of these systems ran the UNIX System V operating system.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced in 1982), which was almost four times as fast on typical benchmark tests. However, the results for the iAPX 432 were partly due to a rushed and therefore suboptimal Ada compiler.

ARM first appeared in 1985. It is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space, due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system-on-a-chip products; only a few such vendors are licensed to modify the ARM cores. Most mobile phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.

Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 followed, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.

Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs. The ColdFire processor cores are derivatives of the venerable 68020. During this period (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), a full 32-bit version named the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s.

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in the desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions per second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

64-bit designs in personal computers

While 64-bit microprocessor designs had been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market. With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (now called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty, as well as new 64-bit software. With operating systems such as Windows XP x64, Windows Vista x64, Linux, BSD, and Mac OS X that run 64-bit native, the software is also geared to take full advantage of such processors' capabilities. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers.
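As an illustrative aside (not from the original text), a program can check whether it is running as a 32-bit or 64-bit process by inspecting the size of a native pointer:

```python
import struct

# struct.calcsize("P") returns the size in bytes of a native pointer,
# so 4 indicates a 32-bit process and 8 indicates a 64-bit process.
pointer_bits = struct.calcsize("P") * 8
print(f"This process is running in {pointer_bits}-bit mode")
```

Note that this reports the mode of the running process, not the capability of the CPU: a 64-bit processor can still run a 32-bit build.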

The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 90s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both the floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.

Multicore designs

A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies approach their physical limits. In response, microprocessor manufacturers look for other ways to improve performance in order to maintain the momentum of constant upgrades in the market.

A multi-core processor is a single chip that contains more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor core). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can interface at much faster clock rates than discrete multiprocessor systems, improving overall system performance.
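To illustrate the point that software must be written to exploit multiple cores, here is a minimal sketch (hypothetical, not from the original article) that divides a CPU-bound summation across one worker process per core using Python's standard library:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- a stand-in CPU-bound task."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=None):
    """Split 0..n-1 into one chunk per worker and sum the chunks in parallel."""
    workers = workers or os.cpu_count() or 1
    step = n // workers
    # The last chunk absorbs any remainder so every integer is counted once.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(200_000) == sum(range(200_000))
    print("parallel and serial sums agree")
```

The same program run on a single-core machine still produces the correct result; only the speedup depends on the core count, which is the point the paragraph above makes.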

In 2005, the first personal computer dual-core processors were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations, and PCs, while six- and eight-core processors are available for high-end applications in both home and professional environments.

Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.

High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, also used in the Apple Mac Pro and the Intel Skulltrail motherboard. With the transition to the LGA1366 socket, Intel's quad-core i7 chips are now considered mainstream, and the upcoming i9 chips will introduce six-core, and possibly dual-die hex-core (12-core), processors.

RISC

In the mid-1980s to early 1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles. In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, the 32-bit ARM2, in 1987. The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha. As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.
dunovteck




Saturday, July 9, 2011

CDs or DVDs

CDs and DVDs are media for storing data or programs; the data on a CD/DVD can only be read by the optical pickup in a CD/DVD-ROM drive.

A CD consists of several layers, including:

1. Plastic

The plastic is made from a material that must be pliable and strong, because a CD/DVD spins very fast while it is being read, and that rotational speed generates heat. If the plastic of the CD/DVD is not pliable and strong, the disc can break apart inside the CD/DVD-ROM drive.

The plastic of the CD/DVD serves as the base on which the thin plastic data layer, exposed by the laser beam, is placed.

2. Data layer (plastic film)

This is a thin plastic layer that has been exposed by a UV laser; it is in this layer that our data or documents are stored. To protect this layer, another coating of liquid plastic is applied over it, which protects the stored data and also acts as a reflector (cover).

So the structure is: the plastic base, then the thin plastic data layer, then the protective liquid coating or cover.

3. Protective layer
This protective layer protects the CD/DVD when the laser beam is fired from the optical pickup while reading the disc. It is not transparent, because the laser beam must bounce back to the optical pickup; it is this reflection of the laser beam that carries the data read from the disc. So if the protective layer of the CD/DVD is damaged, the laser beam is not reflected and the CD can no longer be read.

The protective layer of a CD/DVD is also called the cover, liquid silver, or protective coating.

There are several types of CD, including:

1. CD / DVD

This is usually a finished (pressed) CD/DVD: the documents already loaded on it cannot be erased, and the disc cannot be written with our own documents (it is read only). This type of CD/DVD can be read in CD, CD-RW, DVD, and DVD-RW drives.

A CD-ROM disc is silver. It is manufactured by laying down a thin sheet of plastic film that is then exposed by a laser beam. The laser beam forms microscopic pits (holes), and the rows of holes form the coded contents of the disc. Once a hole is created, it cannot be closed again. The film is then wrapped in another coating of liquid plastic that serves as a protector and reflector. This whole process takes place in stages in a molding machine.

2. CD / DVD-R

A CD/DVD-R, usually known as a blank CD, can be written only once. After we burn the disc to back up our documents, it cannot be burned again.

CD-R discs are generally green, but some are blue, red, or black. The manufacturing process is similar to that of a CD-ROM: a sheet of plastic film is laid down, except that this film has not yet been exposed by the laser. It is then wrapped in a coating of liquid plastic that serves as a protector and reflector. When we burn the disc, the laser exposes the plastic film, forming the rows of holes that encode our data or documents; once the holes are formed, the CD/DVD cannot be written again.

This type of CD/DVD-R can be read in a CD/DVD drive, but burning (writing) it requires a CD/DVD-RW drive.

3. CD / DVD RW

A CD-RW disc is generally purple. The manufacturing process is similar to that of a CD-ROM or CD-R: a sheet of plastic film is laid down, but this film has the ability to open and close. As explained above, when the data layer is exposed by the laser it forms holes that serve as the code. On a CD-RW, the holes in the data layer can be closed again if needed. That is why we can record to and erase CD-RW media as we please.

A CD-RW cannot simply be read in any CD player or VCD player. Reading a CD-RW requires a stronger laser beam than usual, so make sure that the CD player or VCD player supports CD-RW.

Friday, July 8, 2011

Intel Pentium Dual-Core

The Pentium Dual-Core brand refers to mainstream x86-architecture microprocessors from Intel. They are based on either the 32-bit Yonah core or (with quite different microarchitectures) the 64-bit Merom, Allendale, and, more recently with the launch of the E5200 model, Wolfdale cores, targeted at mobile or desktop computers.

In terms of features, price, and performance at a given clock frequency, Pentium Dual-Core processors are positioned above the Celeron but below Intel's Core and Core 2 microprocessors. The Pentium Dual-Core is also a very popular choice for overclocking, as it can deliver optimal performance (when overclocked) at a low price.

Processor cores

In 2006, Intel announced a plan to return the Pentium brand from retirement, as a moniker for low-cost Core-architecture processors based on the single-core Conroe-L, but with 1 MB of cache. The identification numbers for those planned Pentiums were similar to those of the later Pentium Dual-Core CPUs, but with a first digit of "1" instead of "2", suggesting their single-core functionality. A single-core Conroe-L with 1 MB of cache was deemed not strong enough to distinguish the planned Pentiums from Celerons, so it was replaced by dual-core CPUs, adding "Dual-Core" to the line's name. During 2009, Intel changed the name back from Pentium Dual-Core to Pentium in its publications. Some processors were sold under both names. For example, the ultra-low-voltage SU2xxx series are single-core Pentium processors.


Types of Processor Core

1. Yonah

The first processors using the brand appeared in notebook computers in early 2007. Those processors, named Pentium T2060, T2080, and T2130, had the 32-bit Pentium M-derived Yonah core, and closely resembled the Core Duo T2050 processor, with the exception of having 1 MB of L2 cache instead of 2 MB. All three had a 533 MHz FSB connecting the CPU to memory. Intel developed the Pentium Dual-Core at the request of laptop manufacturers.

2. Allendale
Subsequently, on June 3, 2007, Intel released the desktop Pentium Dual-Core branded processors known as the Pentium E2140 and E2160. An E2180 model was released later, in September 2007. These processors support the Intel 64 extensions, being based on the newer, 64-bit Allendale core with the Core microarchitecture. They closely resemble the Core 2 Duo E4300 processor, with the exception of having 1 MB of L2 cache instead of 2 MB. Both have an 800 MHz FSB. They target the budget market, above the Intel Celeron (Conroe-L single-core series) processors, which feature only 512 KB of L2 cache. Such a step marked a change in the Pentium brand, relegating it to the budget segment rather than its original position as a mainstream/premium brand. These CPUs are highly overclockable.

3. Merom-2M
A mobile version of the Allendale processor, the Merom-2M, was also introduced in 2007, featuring 1 MB of L2 cache but only a 533 MT/s FSB with the T23xx processors. The bus clock was later increased to 667 MT/s with the Pentium T3xxx processors, which are made from the same die.

4. Wolfdale-3M
The 45 nm E5200 model was released by Intel on August 31, 2008, with a larger 2 MB L2 cache over the 65 nm E21xx series and a 2.5 GHz clock speed. The E5200 is also a highly overclockable processor, with some enthusiasts reaching clock speeds of over 6 GHz using liquid nitrogen cooling. Intel released the E6500K model using this core. That model features an unlocked multiplier, but is currently only sold in China.

5. Penryn-3M
The Penryn core is the successor to the Merom core and Intel's 45 nm version of its mobile series of Pentium Dual-Core microprocessors. The FSB was increased from 667 MHz to 800 MHz and the voltage was lowered. Intel released the first Penryn-core Pentium, the T4200, in December 2008. In June 2009, Intel released the first single-core Pentium processor using the name, a Consumer Ultra-Low Voltage (CULV) Penryn core called the Pentium SU2700. Intel has also rebranded all Pentium Dual-Core processors as simply Pentium. In September 2009, Intel introduced the Pentium SU4000 series together with the Celeron SU2000 and Core 2 Duo SU7000 series, which are dual-core CULV processors based on Penryn-3M and use an 800 MHz FSB. The Pentium SU4000 series has 2 MB of L2 cache, but is otherwise essentially identical to the other two lines.

Termination
The Pentium Dual-Core brand was discontinued in early 2009, disappearing from all online material on Intel's website along with all Mobile Pentium Dual-Core product information. The remaining desktop Pentium Dual-Core E2000 and E5000 series processors have been rebranded as Pentium. The desktop E6000 series and the OEM-only mobile Pentium SU2000 and T4000 series were always called Pentium. With the launch of 32 nm processors in the coming months, Intel will discontinue some Atom, Celeron, Pentium, Core 2, and even Core i7 models. The Pentium E2200 and E2220 are scheduled to be discontinued in Q3 2009 and will be replaced by the E6000 series.

Comparison with the Pentium D
Although it uses the Pentium name, the desktop Pentium Dual-Core is based on the Core microarchitecture, which can be seen clearly when comparing its specifications to those of the Pentium D, which is based on the NetBurst microarchitecture first introduced in the Pentium 4. For example, the desktop Pentium Dual-Core has a 1 MB or 2 MB shared L2 cache, while the Pentium D has either 2 MB or 4 MB of L2 cache, depending on the model. Additionally, the fastest-clocked Pentium D runs at 3.73 GHz, while the fastest-clocked desktop Pentium Dual-Core runs at 2.93 GHz. The main difference, however, is that desktop Pentium Dual-Core processors have a TDP of only 65 W, while the Pentium D has a TDP of either 95 W or 130 W. Despite having a smaller L2 cache and slower clock speeds, the Pentium Dual-Core has proven much faster than most Pentium D processors in a variety of CPU-intensive applications, while producing up to 50% less heat.