The #1 Biggest Mistake That People Make With Adsense
By Joel Comm
It's very easy to make a lot of money with AdSense. I know it's easy because in a short space of time, I've managed to turn the sort of AdSense revenues that wouldn't keep me in candy into the kind of income that pays the mortgage on a large suburban house, makes the payments on a family car and does a whole lot more besides.

But that doesn't mean there aren't plenty of mistakes you can make when trying to increase your AdSense income - and any one of those mistakes can keep you earning candy money instead of the sort of cash that can pay for your home.

There is one mistake though that will totally destroy your chances of earning a decent AdSense income before you've even started.

That mistake is making your ad look like an ad.

No one wants to click on an ad. Your users don't come to your site looking for advertisements. They come looking for content and their first instinct is to ignore everything else. And they've grown better and better at doing just that. Today's Internet users know exactly what a banner ad looks like. They know what it means, where to expect it - and they know exactly how to ignore it. In fact most Internet users don't even see the banners at the top of the Web pages they're reading or the skyscrapers running up the side.

But when you first open an AdSense account, the format and layout of the ads you receive will have been designed to look just like ads. That's the default setting for AdSense - and that's the setting that you have to work hard to change.

That's where AdSense gets interesting. There are dozens of different strategies that smart AdSense account holders can use to stop their ads from looking like ads - and make them look attractive to users. They include choosing the right formats for your ads, placing them in the most effective spots on the page, putting together the best combination of ad units, enhancing your site with the best keywords, selecting the best colors for the font and the background, and a whole lot more besides.

The biggest AdSense mistake you can make is leaving your AdSense units looking like ads.

The second biggest mistake you can make is to not know the best strategies to change them.

For more Google AdSense tips, visit http://adsense-secrets.com
Copyright © 2005 Joel Comm. All rights reserved

Friday, October 31, 2008

Domain Name System

The Domain Name System (DNS) is a hierarchical naming system for computers, services, or any resource participating in the Internet. It associates various information with domain names assigned to such participants. Most importantly, it translates humanly meaningful domain names to the numerical (binary) identifiers associated with networking equipment for the purpose of locating and addressing these devices world-wide. An often used analogy to explain the Domain Name System is that it serves as the "phone book" for the Internet by translating human-friendly computer hostnames into IP addresses. For example, www.example.com translates to 208.77.188.166.
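The "phone book" lookup described above can be reproduced in a few lines of Python using only the standard library. A minimal sketch follows; the hostname is the documented example domain, and the address printed on a given machine may differ from the one quoted in the text.

    import socket

    # Ask the system's resolver (which in turn uses DNS) for the addresses
    # behind a human-friendly hostname.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
        print(family.name, sockaddr[0])   # e.g. AF_INET followed by an IPv4 address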

The Domain Name System makes it possible to assign domain names to groups of Internet users in a meaningful way, independent of each user's physical location. Because of this, World Wide Web (WWW) hyperlinks and Internet contact information can remain consistent and constant even if the current Internet routing arrangements change or the participant uses a mobile device. Internet domain names are easier to remember than IP addresses such as 208.77.188.166 (IPv4) or 2001:db8:1f70::999:de8:7648:6e8 (IPv6). People take advantage of this when they recite meaningful URLs and e-mail addresses without having to know how the machine will actually locate them.

The Domain Name System distributes the responsibility for assigning domain names and mapping them to Internet Protocol (IP) networks by designating authoritative name servers for each domain to keep track of their own changes, avoiding the need for a central register to be continually consulted and updated.

In general, the Domain Name System also stores other types of information, such as the list of mail servers that accept email for a given Internet domain. By providing a world-wide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet.

Other identifiers such as RFID tags, UPC codes, international characters in email addresses and host names, and a variety of other identifiers could all potentially utilize DNS.

The Domain Name System also defines the technical underpinnings of the functionality of this database service. For this purpose it defines the DNS protocol, a detailed specification of the data structures and communication exchanges used in DNS, as part of the Internet Protocol Suite (TCP/IP). The DNS protocol was developed and defined in the early 1980s and published by the Internet Engineering Task Force (cf. History).

History

The practice of using a name as a more human-legible abstraction of a machine's numerical address on the network predates even TCP/IP, dating back to the ARPANET era. Before the DNS was invented in 1983, shortly after TCP/IP was deployed, a different system was used: each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI (now SRI International). The HOSTS.TXT file mapped numerical addresses to names. A hosts file still exists on most modern operating systems, either by default or through configuration, and allows users to specify an IP address (e.g., 208.77.188.166) to use for a hostname (e.g., www.example.net) without checking DNS. Systems based on a hosts file have inherent limitations, because every time a given computer's address changes, every computer that wants to communicate with it needs an update to its hosts file.
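A hosts-file lookup amounts to scanning a small text file for a matching name. The Python sketch below shows the idea; /etc/hosts is the usual Unix location (Windows keeps the file under C:\Windows\System32\drivers\etc\hosts).

    def lookup_in_hosts(hostname, path="/etc/hosts"):
        """Return the IP address listed for hostname in the hosts file, or None."""
        with open(path) as hosts_file:
            for line in hosts_file:
                line = line.split("#", 1)[0].strip()   # drop comments and blank lines
                if not line:
                    continue
                ip_address, *names = line.split()
                if hostname in names:
                    return ip_address
        return None

    print(lookup_in_hosts("localhost"))   # typically 127.0.0.1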

The growth of networking required a more scalable system that recorded a change in a host's address in one place only. Other hosts would learn about the change dynamically through a notification system, thus creating a globally accessible network of all hosts' names and their associated IP addresses.

At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications appear in RFC 882 and RFC 883. In November 1987, the publication of RFC 1034 and RFC 1035 updated the DNS specification and made RFC 882 and RFC 883 obsolete. Several more recent RFCs have proposed various extensions to the core DNS protocols.

In 1984, four Berkeley students—Douglas Terry, Mark Painter, David Riggle and Songnian Zhou—wrote the first UNIX implementation, which was maintained by Ralph Campbell thereafter. In 1985, Kevin Dunlap of DEC significantly re-wrote the DNS implementation and renamed it BIND—Berkeley Internet Name Domain. Mike Karels, Phil Almquist and Paul Vixie have maintained BIND since then. BIND was ported to the Windows NT platform in the early 1990s.

BIND was widely distributed, especially on Unix systems, and is the dominant DNS software in use on the Internet. With the heavy use and resulting scrutiny of its open-source code, as well as increasingly sophisticated attack methods, many security flaws were discovered in BIND. This contributed to the development of a number of alternative nameserver and resolver programs. BIND itself was re-written from scratch in version 9, which has a security record comparable to other modern Internet software.

Structure

The domain name space
Domain names, arranged in a tree, cut into zones, each served by a nameserver.

The domain name space consists of a tree of domain names. Each node or leaf in the tree has zero or more resource records, which hold information associated with the domain name. The tree sub-divides into zones beginning at the root zone. A DNS zone consists of a collection of connected nodes authoritatively served by an authoritative nameserver. (Note that a single nameserver can host several zones.)

Administrative responsibility over any zone may be divided, thereby creating additional zones. Authority is said to be delegated for a portion of the old space, usually in form of sub-domains, to another nameserver and administrative entity. The old zone ceases to be authoritative for the new zone.

Parts of a domain name

A domain name usually consists of two or more parts (technically labels), which are conventionally written separated by dots, such as example.com.

* The rightmost label conveys the top-level domain (for example, the address www.example.com has the top-level domain com).
* Each label to the left specifies a subdivision, or subdomain of the domain above it. Note: “subdomain” expresses relative dependence, not absolute dependence. For example: example.com is a subdomain of the com domain, and www.example.com is a subdomain of the domain example.com. In theory, this subdivision can go down 127 levels. Each label can contain up to 63 octets. The whole domain name may not exceed a total length of 253 octets. In practice, some domain registries may have shorter limits.
* A hostname refers to a domain name that has one or more associated IP addresses; i.e., the domains 'www.example.com' and 'example.com' are both hostnames, whereas the 'com' domain is not.

DNS servers

Name server

The Domain Name System is maintained by a distributed database system, which uses the client-server model. The nodes of this database are the name servers. Each domain or subdomain has one or more authoritative DNS servers that publish information about that domain and the name servers of any domains subordinate to it. The top of the hierarchy is served by the root nameservers: the servers to query when looking up (resolving) a top-level domain name (TLD).

DNS resolvers


The client-side of the DNS is called a DNS resolver. It is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address.

A DNS query may be either a recursive query or a non-recursive query:

* A non-recursive query is one in which the DNS server may provide a partial answer to the query (or give an error).
* A recursive query is one where the DNS server will fully answer the query (or give an error). DNS servers are not required to support recursive queries.

The resolver (or another DNS server acting recursively on behalf of the resolver) negotiates use of recursive service using bits in the query headers.

Resolving usually entails iterating through several name servers to find the needed information. However, some resolvers function simplistically and can communicate only with a single name server. These simple resolvers rely on a recursive query to a recursive name server to perform the work of finding information for them.
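In code, a simple stub resolver is little more than a one-liner: it sends a recursive query to the locally configured name server and lets that server do all the work. The sketch below assumes the third-party dnspython package (pip install dnspython, version 2.x; older releases use dns.resolver.query instead of resolve).

    import dns.resolver

    # Recursive query: the configured name server chases the answer for us.
    answer = dns.resolver.resolve("www.example.com", "A")
    for rdata in answer:
        print(rdata.address)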


Address resolution mechanism

In theory a full host name may have several name segments (e.g., ahost.ofasubnet.ofabiggernet.inadomain.example). In practice, full host names will frequently consist of just three segments (ahost.inadomain.example, and most often www.inadomain.example). For querying purposes, software interprets the name segment by segment, from right to left, using an iterative search procedure. At each step along the way, the program queries a corresponding DNS server to provide a pointer to the next server which it should consult.
A DNS recursor consults three nameservers to resolve the address www.wikipedia.org.

As originally envisaged, the process was as simple as:

1. The local system is pre-configured with the known addresses of the root servers in a file of root hints, which needs to be updated periodically by the local administrator from a reliable source so that it keeps up with the changes that occur over time.
2. A query is sent to one of the root servers to find the server authoritative for the next level down (so in the case of our simple hostname, a root server would be asked for the address of a server with detailed knowledge of the example top-level domain).
3. That second server is queried for the address of a DNS server with detailed knowledge of the second-level domain (inadomain.example in our example).
4. The previous step is repeated to progress down the name, until the final step, which, rather than returning the address of the next DNS server, returns the final address sought.

The diagram illustrates this process for the real host www.wikipedia.org.

The mechanism in this simple form has a difficulty: it places a huge operating burden on the root servers, since every search for an address would start by querying one of them. Because the root servers are critical to the overall function of the system, such heavy use would create an insurmountable bottleneck given the trillions of queries placed every day. In practice caching is used to overcome this problem, and the root nameservers actually deal with very little of the total traffic.
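A much-simplified sketch of that iterative walk, again assuming dnspython, is shown below. It starts from one hard-coded root server (a.root-servers.net, 198.41.0.4), follows referrals downward, and ignores caching, truncation, CNAME chasing, and the error handling a real resolver would need.

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    ROOT_SERVER = "198.41.0.4"   # a.root-servers.net, taken from the root hints

    def iterate(qname):
        server = ROOT_SERVER
        while True:
            query = dns.message.make_query(qname, dns.rdatatype.A)
            response = dns.query.udp(query, server, timeout=5)
            if response.answer:                       # final step: the record itself
                return response.answer
            # Referral: prefer a glue A record from the additional section...
            glue = [rr for rrset in response.additional
                    if rrset.rdtype == dns.rdatatype.A for rr in rrset]
            if glue:
                server = glue[0].address
            else:                                     # ...otherwise look up the NS name separately
                ns_name = str(response.authority[0][0].target)
                server = list(dns.resolver.resolve(ns_name, "A"))[0].address

    print(iterate("www.wikipedia.org"))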

Circular dependencies and glue records

Name servers in delegations are listed by name, rather than by IP address. This means that a resolving name server must issue another DNS request to find out the IP address of the server to which it has been referred. Since this can introduce a circular dependency if the nameserver referred to is under the domain for which it is authoritative, it is occasionally necessary for the nameserver providing the delegation to also provide the IP address of the next nameserver. This record is called a glue record.

For example, assume that the sub-domain en.wikipedia.org contains further sub-domains (such as something.en.wikipedia.org) and that the authoritative name server for these lives at ns1.something.en.wikipedia.org. A computer trying to resolve something.en.wikipedia.org will thus first have to resolve ns1.something.en.wikipedia.org. Since ns1 is also under the something.en.wikipedia.org subdomain, resolving ns1.something.en.wikipedia.org requires resolving something.en.wikipedia.org which is exactly the circular dependency mentioned above. The dependency is broken by the glue record in the nameserver of en.wikipedia.org that provides the IP address of ns1.something.en.wikipedia.org directly to the requestor, enabling it to bootstrap the process by figuring out where ns1.something.en.wikipedia.org is located.

In practice

When an application (such as a web browser) tries to find the IP address of a domain name, it doesn't necessarily follow all of the steps outlined in the Theory section above. We will first look at the concept of caching, and then outline the operation of DNS in "the real world."

Caching and time to live

Because of the huge volume of requests generated by a system like DNS, the designers wished to provide a mechanism to reduce the load on individual DNS servers. To this end, the DNS resolution process allows for caching (i.e. the local recording and subsequent consultation of the results of a DNS query) for a given period of time after a successful answer. How long a resolver caches a DNS response (i.e. how long a DNS response remains valid) is determined by a value called the time to live (TTL). The TTL is set by the administrator of the DNS server handing out the response. The period of validity may vary from just seconds to days or even weeks.
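The countdown can be observed directly: asking a caching resolver the same question twice a few seconds apart usually returns the same record with a smaller remaining TTL the second time. The sketch below assumes dnspython and a locally configured resolver that caches answers (most ISP and home-router resolvers do).

    import time
    import dns.resolver

    first = dns.resolver.resolve("www.example.com", "A")
    time.sleep(5)
    second = dns.resolver.resolve("www.example.com", "A")
    print("first TTL :", first.rrset.ttl)
    print("second TTL:", second.rrset.ttl)   # usually about 5 seconds lower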

Caching time

As a noteworthy consequence of this distributed and caching architecture, changes to DNS do not always take effect immediately and globally. This is best explained with an example: If an administrator has set a TTL of 6 hours for the host www.wikipedia.org, and then changes the IP address to which www.wikipedia.org resolves at 12:01pm, the administrator must consider that a person who cached a response with the old IP address at 12:00noon will not consult the DNS server again until 6:00pm. The period between 12:01pm and 6:00pm in this example is called caching time, which is best defined as a period of time that begins when you make a change to a DNS record and ends after the maximum amount of time specified by the TTL expires. This essentially leads to an important logistical consideration when making changes to DNS: not everyone is necessarily seeing the same thing you're seeing. RFC 1537 helps to convey basic rules for how to set the TTL.

Note that the term "propagation", although very widely used in this context, does not describe the effects of caching well. Specifically, it implies that when you make a DNS change, it somehow spreads to all other DNS servers (instead, other DNS servers check in with yours as needed), and that you do not have control over the amount of time the record is cached (you control the TTL values for all DNS records in your domain, except your NS records and any authoritative DNS servers that use your domain name).

Some resolvers may override TTL values, as the protocol supports caching for up to 68 years or no caching at all. Negative caching (caching the non-existence of records) is handled by the name servers authoritative for a zone, which must include the Start of Authority (SOA) record when reporting that no data of the requested type exists. The MINIMUM field of the SOA record and the TTL of the SOA itself are used to establish the TTL for the negative answer (see RFC 2308).

Many people incorrectly refer to a mysterious 48 hour or 72 hour propagation time when you make a DNS change. When one changes the NS records for one's domain or the IP addresses for hostnames of authoritative DNS servers using one's domain (if any), there can be a lengthy period of time before all DNS servers use the new information. This is because those records are handled by the zone parent DNS servers (for example, the .com DNS servers if your domain is example.com), which typically cache those records for 48 hours. However, those DNS changes will be immediately available for any DNS servers that do not have them cached. And any DNS changes on your domain other than the NS records and authoritative DNS server names can be nearly instantaneous, if you choose for them to be (by lowering the TTL once or twice ahead of time, and waiting until the old TTL expires before making the change).

In the real world
DNS resolving from program to OS-resolver to ISP-resolver to greater system.

Users generally do not communicate directly with a DNS resolver. Instead DNS-resolution takes place transparently in client-applications such as web-browsers, mail-clients, and other Internet applications. When an application makes a request which requires a DNS lookup, such programs send a resolution request to the local DNS resolver in the local operating system, which in turn handles the communications required.

The DNS resolver will almost invariably have a cache (see above) containing recent lookups. If the cache can provide the answer to the request, the resolver will return the value in the cache to the program that made the request. If the cache does not contain the answer, the resolver will send the request to one or more designated DNS servers. In the case of most home users, the Internet service provider to which the machine connects will usually supply this DNS server: such a user will either have configured that server's address manually or allowed DHCP to set it; however, where systems administrators have configured systems to use their own DNS servers, their DNS resolvers point to separately maintained nameservers of the organization. In any event, the name server thus queried will follow the process outlined above, until it either successfully finds a result or does not. It then returns its results to the DNS resolver; assuming it has found a result, the resolver duly caches that result for future use, and hands the result back to the software which initiated the request.

Broken resolvers

An additional level of complexity emerges when resolvers violate the rules of the DNS protocol. A number of large ISPs have configured their DNS servers to violate rules (presumably to allow them to run on less-expensive hardware than a fully-compliant resolver), such as by disobeying TTLs, or by indicating that a domain name does not exist just because one of its name servers does not respond.

As a final level of complexity, some applications (such as web-browsers) also have their own DNS cache, in order to reduce the use of the DNS resolver library itself. This practice can add extra difficulty when debugging DNS issues, as it obscures the freshness of data, and/or what data comes from which cache. These caches typically use very short caching times — on the order of one minute. Internet Explorer offers a notable exception: recent versions cache DNS records for half an hour.

Other applications

The system outlined above provides a somewhat simplified scenario. The Domain Name System includes several other functions:

* Hostnames and IP addresses do not necessarily match on a one-to-one basis. Many hostnames may correspond to a single IP address: combined with virtual hosting, this allows a single machine to serve many web sites. Alternatively a single hostname may correspond to many IP addresses: this can facilitate fault tolerance and load distribution, and also allows a site to move physical location seamlessly.
* There are many uses of DNS besides translating names to IP addresses. For instance, Mail transfer agents use DNS to find out where to deliver e-mail for a particular address. The domain to mail exchanger mapping provided by MX records accommodates another layer of fault tolerance and load distribution on top of the name to IP address mapping.
* Sender Policy Framework and DomainKeys, instead of creating their own record types, were designed to take advantage of another DNS record type, the TXT record.
* To provide resilience in the event of computer failure, multiple DNS servers are usually provided for coverage of each domain, and at the top level, thirteen very powerful root servers exist, with additional "copies" of several of them distributed worldwide via Anycast.

Protocol details

DNS primarily uses UDP on port 53 to serve requests. Almost all DNS queries consist of a single UDP request from the client followed by a single UDP reply from the server. TCP comes into play only when the response data size exceeds 512 bytes, or for such tasks as zone transfer. Some operating systems such as HP-UX are known to have resolver implementations that use TCP for all queries, even when UDP would suffice.
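The usual transport pattern - try UDP first, fall back to TCP only when the server sets the truncation (TC) flag because the reply would not fit - looks roughly like this with dnspython; 8.8.8.8 is used here simply as an example public resolver.

    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "A")
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    if response.flags & dns.flags.TC:                 # truncated: retry over TCP
        response = dns.query.tcp(query, "8.8.8.8", timeout=5)
    print(response.answer)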

Extensions to DNS

EDNS is an extension of the DNS protocol which allows the transport over UDP of DNS replies exceeding 512 bytes, and adds support for expanding the space of request and response codes. It is described in RFC 2671.
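With dnspython, requesting EDNS is a matter of advertising a larger UDP payload size when building the query, as in the sketch below; the 4096-byte figure and the DNSKEY record type are illustrative choices only.

    import dns.message
    import dns.query

    # EDNS version 0 with a 4096-byte advertised payload, so replies larger
    # than the classic 512-byte limit can still arrive over UDP.
    query = dns.message.make_query("example.com", "DNSKEY", use_edns=0, payload=4096)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    print(len(response.to_wire()), "bytes in the reply")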

Types of DNS records

List of DNS record types

When sent over the Internet, all records use the common format specified in RFC 1035 shown below.
RR (Resource record) fields:

    Field      Description                                            Length (octets)
    NAME       Name of the node to which this record pertains.        (variable)
    TYPE       Type of RR. For example, MX is type 15.                2
    CLASS      Class code.                                            2
    TTL        Signed time in seconds that the RR stays valid.        4
    RDLENGTH   Length of the RDATA field.                             2
    RDATA      Additional RR-specific data.                           (variable)

The type of the record indicates what the format of the data is, and gives a hint of its intended use; for instance, the A record is used to translate from a domain name to an IPv4 address, the NS record lists which name servers can answer lookups on a DNS zone, and the MX record is used to translate from a name in the right-hand side of an e-mail address to the name of a machine able to handle mail for that address.
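Querying those three record types for a single domain makes the distinction concrete. The sketch below assumes dnspython and uses example.com, which may legitimately have no records of a given type.

    import dns.resolver

    for record_type in ("A", "NS", "MX"):
        try:
            answer = dns.resolver.resolve("example.com", record_type)
        except dns.resolver.NoAnswer:
            print(record_type, "- no records of this type")
            continue
        for rdata in answer:
            print(record_type, rdata)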

Many more record types exist and can be found in the complete List of DNS record types.

Internationalized domain names

Internationalized domain name

While domain names technically have no restrictions on the characters they use and can include non-ASCII characters, the same is not true for host names.[8] Host names are the names most people see and use for things like e-mail and web browsing. Host names are restricted to a small subset of the ASCII character set known as LDH, the Letters A–Z in upper and lower case, Digits 0–9, Hyphen, and the dot to separate LDH-labels; see RFC 3696 section 2 for details. This prevented the representation of names and words of many languages natively. ICANN has approved the Punycode-based IDNA system, which maps Unicode strings into the valid DNS character set, as a workaround to this issue. Some registries have adopted IDNA.
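Python ships an "idna" codec that performs this Punycode-based mapping, so the conversion can be sketched in a couple of lines; the German example name below is purely illustrative.

    unicode_name = "bücher.example"
    ascii_name = unicode_name.encode("idna").decode("ascii")   # LDH-only form for the DNS
    print(ascii_name)                                          # xn--bcher-kva.example
    print(ascii_name.encode("ascii").decode("idna"))           # back to bücher.example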

Security issues

DNS was not originally designed with security in mind, and thus has a number of security issues.

One class of vulnerabilities is DNS cache poisoning, which tricks a DNS server into believing it has received authentic information when, in reality, it has not.

DNS responses are traditionally not cryptographically signed, leading to many attack possibilities; DNSSEC modifies DNS to add support for cryptographically signed responses. There are various extensions to support securing zone transfer information as well.

Even with encryption, a DNS server could be compromised by a virus (or for that matter a disgruntled employee) that causes the IP addresses served by that server to be redirected to a malicious address with a long TTL. This could have a far-reaching impact on potentially millions of Internet users if busy DNS servers cache the bad IP data. It would then require manual purging of all affected DNS caches, since the long TTL (up to 68 years) would otherwise keep the bad data cached.

Some domain names can spoof other, similar-looking domain names. For example, "paypal.com" and "paypa1.com" are different names, yet users may be unable to tell the difference when the user's typeface (font) does not clearly differentiate the letter l and the number 1. This problem is much more serious in systems that support internationalized domain names, since many characters that are different, from the point of view of ISO 10646, appear identical on typical computer screens. This vulnerability is often exploited in phishing.

Techniques such as Forward Confirmed reverse DNS can also be used to help validate DNS results.

Domain Registration

The right to use a domain name is delegated by domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by a sponsoring organization, the TLD registry. The registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the whois protocol.

Registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often this transaction is termed a sale or lease of the domain name, and the registrant is called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders".

ICANN publishes a complete list of TLD registries and domain name registrars in the world. One can obtain information about the registrant of a domain name by looking in the WHOIS database held by many domain registries.

For most of the more than 240 country code top-level domains (ccTLDs), the domain registries hold the authoritative WHOIS (Registrant, name servers, expiration dates, etc.). For instance, DENIC, Germany NIC, holds the authoritative WHOIS to a .DE domain name. Since about 2001, most gTLD registries (.ORG, .BIZ, .INFO) have adopted this so-called "thick" registry approach, i.e. keeping the authoritative WHOIS in the central registries instead of the registrars.

For .COM and .NET domain names, a "thin" registry is used: the domain registry (e.g. VeriSign) holds a basic WHOIS (registrar and name servers, etc.). One can find the detailed WHOIS (registrant, name servers, expiry dates, etc.) at the registrars.
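The whois protocol itself is very simple - open a TCP connection to port 43 on the registry's whois server, send the query line, and read until the connection closes. A sketch for the thin .COM registry (whois.verisign-grs.com) follows; the registrar's own whois server, named in the reply, would then hold the detailed record.

    import socket

    def whois(domain, server="whois.verisign-grs.com"):
        """Send a whois query over TCP port 43 and return the raw text reply."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois("example.com"))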

Some domain name registries, also called Network Information Centres (NIC), also function as registrars, and deal directly with end users. But most of the main ones, such as for .COM, .NET, .ORG, .INFO, etc., use a registry-registrar model. There are hundreds of Domain Name Registrars that actually perform the domain name registration with the end user (see lists at ICANN or VeriSign). By using this method of distribution, the registry only has to manage the relationship with the registrar, and the registrar maintains the relationship with the end users, or 'registrants' -- in some cases through additional layers of resellers.

In the process of registering a domain name and maintaining authority over the new name space created, registrars store and use several key pieces of information connected with a domain:

* Administrative contact. A registrant usually designates an administrative contact to manage the domain name. The administrative contact usually has the highest level of control over a domain. Management functions delegated to the administrative contact may include management of all business information, such as the name of record, postal address, and contact information of the official registrant of the domain, and the obligation to conform to the requirements of the domain registry in order to retain the right to use the domain name. Furthermore, the administrative contact installs additional contact information for technical and billing functions.
* Technical contact. The technical contact manages the name servers of a domain name. The functions of a technical contact include assuring conformance of the configurations of the domain name with the requirements of the domain registry, maintaining the domain zone records, and providing continuous functionality of the name servers (that leads to the accessibility of the domain name).
* Billing contact. The party responsible for receiving billing invoices from the domain name registrar and paying applicable fees.
* Name servers. Domains usually need at least two authoritative name servers that perform name resolution for the domain. If they are not automatically provided by the registrar, the domain holder must specify domain names and IP addresses for these servers.

Abuse and Regulation

Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about SiteFinder [9], numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure, not having obtained the customary consensus. SiteFinder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site. Unfortunately, other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of SiteFinder. While VeriSign later changed SiteFinder's behaviour with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward.

Despite widespread criticism, VeriSign only reluctantly removed SiteFinder after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.

There is also significant disquiet regarding the United States' political influence over ICANN. This was a significant issue in the attempt to create a .xxx top-level domain and sparked greater interest in alternative DNS roots that would be beyond the control of any single country.

Additionally, there are numerous accusations of domain name "front running", whereby registrars, when given whois queries, automatically register the domain name for themselves. Recently, Network Solutions has been accused of this.

Truth in Domain Names Act

Anticybersquatting Consumer Protection Act

In the United States, the "Truth in Domain Names Act", in combination with the PROTECT Act, forbids the use of a misleading domain name with the intention of attracting people into viewing a visual depiction of sexually explicit conduct on the Internet.


Wednesday, October 29, 2008

Link Exchange

A link exchange (also known as a banner exchange) is a confederation of websites that operates similarly to a web ring. Webmasters register their web sites with a central organization that runs the exchange, and in turn receive from the exchange HTML code which they insert into their web pages. In contrast to a web ring, where the HTML code simply comprises circular ring-navigation hyperlinks, in a link exchange the HTML code causes the display of banner advertisements for the sites of other members of the exchange on the member web sites, and webmasters have to create such banner advertisements for their own web sites.

The banners are downloaded from the exchange. A monitor on the exchange determines, from referral information supplied by web browsers, how many times a member web site has displayed the banner advertisements of other members, and credits that member with a number of displays of its banner on some other member's web site. Link exchanges usually operate on a 2:1 ratio, such that for every two times a member shows a second member's banner advertisement, that second member displays the first member's banner advertisement. This page impressions:credits ratio is the exchange rate.

One of the earliest link exchanges was LinkExchange, a company that is now owned by Microsoft.

Link exchanges have advantages and disadvantages from the point of view of those using the World Wide Web for marketing. On the one hand, they have the advantages of bringing in a highly targeted readership (for link exchanges where all members of the exchange have similar web sites), of increasing the "link popularity" of a site with Web search engines, and of being relatively stable methods of hyperlinking. On the other hand, they have the disadvantages of potentially distracting visitors away to other sites before they have fully explored the site that the original link was on.

Feig notes several aspects of link exchange companies that prospective members take into account:

* Banners that are animated images result in member web sites taking a long time to load. Some companies impose restrictions on animation lengths.
* The size, in bytes, of a banner is important, affecting both how long it takes to load and how long it takes to render the web site displaying the banner.
* Control over the subjects of advertisements is important. Some companies offer guarantees that advertisements will be restricted to certain subjects, will not include advertisements for pornography, and so forth.
* Companies that provide mechanisms to design banners for webmasters often use automated facilities, where the generated banner design is not reviewed by a human being.



Blog

A blog (a contraction of the term "Web log") is a Web site, usually maintained by an individual with regular entries of commentary, descriptions of events, or other material such as graphics or video. Entries are commonly displayed in reverse-chronological order. "Blog" can also be used as a verb, meaning to maintain or add content to a blog.

Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. The ability for readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (artlog), photographs (photoblog), sketches (sketchblog), videos (vlog), music (MP3 blog), or audio (podcasting); all of these are part of a wider network of social media. Micro-blogging is another type of blogging, one which consists of blogs with very short posts. As of December 2007, the blog search engine Technorati was tracking more than 112 million blogs. With the advent of video blogging, the word blog has taken on an even looser meaning: that of any bit of media wherein the subject expresses his opinion or simply talks about something.

Types

There are many different types of blogs, differing not only in the type of content, but also in the way that content is delivered or written.

Personal Blogs
The personal blog, an ongoing diary or commentary by an individual, is the traditional and most common kind of blog. Personal bloggers usually take pride in their blog posts, even if their blog is never read by anyone but them. Blogs often become more than a way to communicate; they become a way to reflect on life or works of art. Blogging can have a sentimental quality. Few personal blogs rise to fame and the mainstream, but some quickly garner an extensive following. One type of personal blog, referred to as "microblogging," is extremely detailed and seeks to capture a moment in time. Sites such as Twitter allow bloggers to share thoughts and feelings instantaneously with friends and family, and are much faster than e-mailing or writing. This form of social media lends itself to an online generation already too busy to keep in touch.

Corporate Blogs
A blog can be private, as in most cases, or it can be for business purposes. Blogs used either internally to enhance the communication and culture in a corporation or externally for marketing, branding, or public relations purposes are called corporate blogs.

Question Blogging
Question blogging is a type of blog that answers questions. Questions can be submitted via a form on the site, or through email or other means such as telephone or VOIP. Qlogs can be used to display show notes from podcasts or as a means of conveying information through the Internet. Many question logs use syndication such as RSS as a means of conveying answers to questions.

By Media Type
A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog. Blogs with shorter posts and mixed media types are called tumblelogs.



By Device
Blogs can also be defined by the type of device used to compose them. A blog written on a mobile device like a mobile phone or PDA could be called a moblog. One early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site. This practice of semi-automated blogging with live video together with text was referred to as sousveillance. Such journals have been used as evidence in legal matters.

By Genre
Some blogs focus on a particular subject, such as political blogs, travel blogs, house blogs, fashion blogs, project blogs, education blogs, niche blogs, classical music blogs, quizzing blogs, legal blogs (often referred to as blawgs) and dreamlogs. While not a legitimate type of blog, one used for the sole purpose of spamming is known as a splog.


Tuesday, October 28, 2008

International markets

The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches. In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007. As of 2006, Google held about 40% of the market in the United States, but Google had an 85-90% market share in Germany. While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.

In Russia the situation is reversed. Local search engine Yandex controls 50% of the paid advertising revenue, while Google has less than 9%. In China, Baidu continues to lead in market share, although Google has been gaining share as of 2007.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.


As a marketing strategy

Eye tracking studies have shown that searchers scan a search results page from top to bottom and left to right (for left to right languages), looking for a relevant result. Placement at or near the top of the rankings therefore increases the number of searchers who will visit a site. However, more search engine referrals does not guarantee more sales. SEO is not necessarily an appropriate strategy for every website, and other Internet marketing strategies can be much more effective, depending on the site operator's goals. A successful Internet marketing campaign may drive organic traffic to web pages, but it also may involve the use of paid advertising on search engines and other pages, building high quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing those sites, setting up analytics programs to enable site owners to measure their successes, and improving a site's conversion rate.


SEO may generate a return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors. It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic. A top-ranked SEO blog, Seomoz.org, has reported, "Search marketers, in a twist of irony, receive a very small share of their traffic from search engines." Instead, their main sources of traffic are links from other websites.


White Hat versus Black Hat

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques that search engines do not approve of. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or that involve deception. One black hat technique uses text that is hidden, either as text colored similarly to the background, in an invisible div, or positioned off screen. Another method serves a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One infamous example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.


Webmasters and Search Engines

By 1997 search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.

Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web, was created to discuss and minimize the damaging effects of aggressive web content providers.

SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients. Wired magazine reported that the same company sued blogger Aaron Wall for writing about the ban. Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization. Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Google guidelines are a list of suggested practices Google has provided as guidance to webmasters. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.

Getting indexed

The leading search engines, Google, Yahoo! and Microsoft, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click. Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results. Yahoo!'s paid inclusion program has drawn criticism from advertisers and competitors. Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review. Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.

Preventing indexing

Robots Exclusion Standard


To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
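Python's standard library includes a parser for this exclusion standard, so a crawler can check a URL against robots.txt in a few lines; the user-agent string and the paths below are made-up examples.

    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser("https://www.example.com/robots.txt")
    robots.read()                                    # fetch and parse the file
    print(robots.can_fetch("MyCrawler", "https://www.example.com/"))
    print(robots.can_fetch("MyCrawler", "https://www.example.com/search?q=test"))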


Search Engine Optimization

Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results. Usually, the earlier a site is presented in the search results, or the higher it "ranks," the more searchers will visit that site. SEO can also target different kinds of search, including image search, local search, and industry-specific vertical search engines.


As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website primarily involves editing its content and HTML coding to both increase its relevance to specific keywords and remove barriers to the indexing activities of search engines. Sometimes a site's structure (the relationships between its content) must be altered too. Because of this, it is, from a client's perspective, always better to incorporate search engine optimization when a website is being developed than to try to apply it retroactively.

The acronym "SEO" can also refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems and shopping carts that are easy to optimize.

Another class of techniques, known as black hat SEO or spamdexing, uses methods such as link farms and keyword stuffing that degrade both the relevance of search results and the user experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a page, or URL, to the various engines, which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains, where these are located, any weight for specific words, and any and all links the page contains, which are then placed into a scheduler for crawling at a later date.

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the earliest known use of the phrase search engine optimization was a spam message posted on Usenet on July 26, 1997.

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content. But using meta data to index pages was found to be less than reliable, because the webmaster's choice of keywords in the meta tag was not necessarily relevant to the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags caused pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.

By relying so much on factors exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine are determined by its ability to produce the most relevant results for any given search, allowing those results to be false would drive users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

While graduate students at Stanford University, Larry Page and Sergey Brin developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links. PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
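The random-surfer idea can be sketched as a short power iteration over a toy link graph; the four pages and the 0.85 damping factor below are illustrative choices, not Google's actual parameters.

    DAMPING = 0.85
    links = {                        # page -> pages it links to (invented graph)
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):              # iterate until the scores settle
        new_rank = {page: (1 - DAMPING) / len(links) for page in links}
        for page, outgoing in links.items():
            share = DAMPING * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank

    print({page: round(score, 3) for page, score in rank.items()})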


Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaining PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming. In recent years major search engines have begun to rely more heavily on off-web factors such as the age, sex, location, and search history of people conducting searches in order to further refine results.

By 2007, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals. The three leading search engines, Google, Yahoo and Microsoft's Live Search, do not disclose the algorithms they use to rank pages. Notable SEOs, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs. SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.


Monday, October 27, 2008

PayPal

PayPal is an e-commerce business allowing payments and money transfers to be made through the Internet. PayPal serves as an electronic alternative to traditional paper methods such as cheques and money orders.


PayPal is a type of person-to-person (P2P) payment service. A P2P payment service allows anyone with an e-mail address to transfer funds electronically to someone else with an e-mail address. The initiator of an electronic funds transfer via PayPal must first register with and fund their PayPal account. A PayPal account can be funded with a check or money order, an electronic debit from a bank account or by a credit card. The recipient of a PayPal transfer can either request a check from PayPal, establish their own PayPal deposit account or request a transfer to their bank account. PayPal is an example of a payment intermediary service that facilitates worldwide e-commerce.

PayPal performs payment processing for online vendors, auction sites, and other commercial users, for which it charges a fee. It sometimes also charges a transaction fee for receiving money (a percentage of the amount sent plus an additional fixed amount). The fees charged depend on the currency used, the payment option used, the country of the sender, the country of the recipient, the amount sent and the recipient's account type. On October 3, 2002, PayPal became a wholly owned subsidiary of eBay. Its corporate headquarters are in San Jose, California, United States, at eBay's North First Street satellite office campus. The company also has significant operations in Omaha, Nebraska; Scottsdale, Arizona; and Austin, Texas in the U.S., as well as in India; Dublin, Ireland; Berlin, Germany; and, following PayPal's $169 million acquisition of the Israeli startup FraudSciences, Tel Aviv, Israel. Since July 2007, PayPal has also operated across Europe as a Luxembourg-based bank.

History

Beginnings


The current incarnation of PayPal is the result of a March 2000 merger between Confinity and X.com. Confinity was founded in 1998, initially as a Palm Pilot payments and cryptography company, while X.com was founded by Elon Musk in March 1999, initially as an Internet financial services company. Both Confinity and X.com launched their websites in late 1999, and both companies were located on University Avenue in Palo Alto. Confinity's website was initially focused on reconciling beamed payments from Palm Pilots, with email payments as a feature, while X.com's website initially offered financial services, with email payments as a feature.

At Confinity, many of the initial recruits were alumni of The Stanford Review, also founded by Peter Thiel, and most early engineers hailed from the University of Illinois at Urbana-Champaign, recruited by Max Levchin. On the X.com side, Elon Musk recruited a wide range of technical and business personnel, including many that were critical to the combined company's success, such as Amy Klement, Sal Giambanco, Roelof Botha of Sequoia Capital, Sanjay Bhargava and Jeremy Stoppelman.

To block potentially fraudulent access by automated systems, PayPal devised a system (see CAPTCHA) that makes the user enter numbers from a blurry picture, which the company dubbed the Gausebeck-Levchin test. According to Eric M. Jackson, author of the book The PayPal Wars, PayPal invented this system, now in common use; however, there is evidence that AltaVista used a CAPTCHA as early as 1997, before PayPal existed. The neutrality of The PayPal Wars, which Eric Jackson self-published through his company World Ahead Publishing, funded in part by Peter Thiel, is disputed. In any case, the PayPal CAPTCHA has since been shown to be insecure.
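
For illustration, a toy version of that kind of blurred-digit challenge can be put together with the Pillow imaging library, as in the sketch below. The digit count, jitter, noise and blur radius here are arbitrary assumptions; this is not PayPal's actual Gausebeck-Levchin implementation.

    # A toy numeric CAPTCHA generator -- an illustration of the blurred-digit idea,
    # not PayPal's Gausebeck-Levchin test. Requires the Pillow library.
    import random
    from PIL import Image, ImageDraw, ImageFilter, ImageFont

    def make_captcha(length=5):
        code = "".join(random.choice("0123456789") for _ in range(length))
        img = Image.new("RGB", (40 * length, 60), "white")
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()

        # Draw each digit at a slightly jittered position.
        for i, ch in enumerate(code):
            x = 10 + i * 35 + random.randint(-3, 3)
            y = 20 + random.randint(-5, 5)
            draw.text((x, y), ch, fill="black", font=font)

        # Sprinkle random dots, then blur, so automated readers have a harder time.
        for _ in range(200):
            draw.point((random.randint(0, img.width - 1),
                        random.randint(0, img.height - 1)), fill="gray")
        img = img.filter(ImageFilter.GaussianBlur(radius=1))
        return code, img

    code, img = make_captcha()
    img.save("captcha.png")   # the user must type back `code` to pass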

eBay watched the rise in volume of its online payments and realized the fit of an online payment system with online auctions. eBay purchased Billpoint in May 1999, prior to the existence of PayPal. eBay made Billpoint its official payment system, dubbing it "eBay Payments," but cut the functionality of Billpoint by narrowing it to only payments made for eBay auctions.

For this reason, PayPal was listed in several times as many auctions as Billpoint. In February 2000, the PayPal service had an average of approximately 200,000 daily auctions while Billpoint (in beta) had only 4,000 auctions. By April 2000, more than 1,000,000 auctions promoted the PayPal service. PayPal was able to turn the corner and become the first dot-com to IPO after the September 11 attacks.

Acquisition by eBay

In October 2002, PayPal was acquired by eBay for $1.5 billion.[10] PayPal had previously been the payment method of choice for more than fifty percent of eBay users, and the service competed with eBay's subsidiary Billpoint, Citibank's c2it (closed in late 2003), and Yahoo!'s PayDirect (closed in late 2004). Western Union announced the December 2005 shutdown of its BidPay service but subsequently sold it in 2006 to CyberSource Corporation; BidPay ceased all operations on 31 December 2007 as announced. Some competitors that offer some of PayPal's services, such as Wirecard, Moneybookers, 2Checkout, CCNow and Kagi, remain in business, despite the fact that eBay now requires everyone on its Australian and United Kingdom sites to offer PayPal.

PayPal’s total payment volume, the total value of transactions, was US$11 billion in the fourth quarter of 2006, an increase of 36% over the previous year. The company continues to focus on international growth and growth of its Merchant Services division, providing online payments for retailers off eBay.

Business today

Currently, PayPal operates in 190 markets, and it manages over 164 million accounts. PayPal allows customers to send, receive, and hold funds in 18 currencies worldwide. These currencies are the Australian dollar, Canadian dollar, Chinese renminbi (only available for some Chinese accounts, see below), euro, pound sterling, Japanese yen, Czech koruna, Danish krone, Hong Kong dollar, Hungarian forint, Israeli new sheqel, Mexican peso, New Zealand dollar, Norwegian krone, Polish zloty, Singapore dollar, Swedish krona, Swiss franc and U.S. dollar. PayPal operates locally in 13 countries.

Residents in 190 markets can use PayPal in their local markets to send money online. Recently added markets include Peru, Indonesia, the Philippines, Croatia, Fiji, Vietnam and Jordan. A complete list can be viewed at PayPal's website.

In China PayPal offers two kinds of accounts:

* PayPal.com accounts, for sending and receiving money to/from other PayPal.com accounts. All non-Chinese accounts are PayPal.com accounts, so these accounts may be used to send money internationally.
* PayPal.cn accounts, for sending and receiving money to and from other PayPal.cn accounts.

It is impossible to send money between PayPal.cn accounts and PayPal.com accounts, so PayPal.cn accounts are effectively unable to make international payments. For PayPal.cn, the only supported currency is the renminbi.

Although PayPal's corporate headquarters are located in San Jose, PayPal’s operations center is located near Omaha, Nebraska, where the company employs more than 2,000 people as of 2007. PayPal’s international headquarters is located in Dublin, Ireland. The company also recently opened a technology center in Scottsdale, Arizona.

Online

The domain paypal.com attracted at least 260 million visitors annually by 2008 according to a Compete.com study.

Bank status

In the United States, PayPal is licensed as a money transmitter on a state-by-state basis. PayPal is not classified as a bank in the United States, though the company is subject to some of the rules and regulations governing the financial industry, including Regulation E consumer protections and the USA PATRIOT Act.[16] On May 15, 2007, PayPal announced that it would move its European operations from the UK to Luxembourg, commencing July 2, 2007, as PayPal (Europe) S.à r.l. & Cie, S.C.A., a Luxembourg entity regulated as a bank by the Commission de Surveillance du Secteur Financier (CSSF), the Luxembourg equivalent of the UK's FSA. The Luxembourg entity now provides the PayPal service throughout the European Union (EU).


Internet marketing

Internet marketing, also referred to as web marketing, online marketing, or eMarketing, is the marketing of products or services over the Internet.


The Internet has brought many unique benefits to marketing, one of which is the lower cost of distributing information and media to a global audience. The interactive nature of Internet marketing, both in providing instant responses and in eliciting them, is a unique quality of the medium. Internet marketing is sometimes considered to have a broader scope because it refers to digital media such as the Internet, e-mail, and wireless media; it also includes management of digital customer data and electronic customer relationship management (ECRM) systems.

Internet marketing ties together creative and technical aspects of the Internet, including design, development, advertising, and sales. Internet marketing does not simply entail building or promoting a website, nor does it mean placing a banner ad on another website. Effective Internet marketing requires a comprehensive strategy that synergizes a given company's business model and sales goals with its website function and appearance, focusing on its target market through proper choice of advertising type, media, and design.

Internet marketing also refers to the placement of media along the different stages of the customer engagement cycle through search engine marketing (SEM), search engine optimization (SEO), banner ads on specific websites, e-mail marketing, and Web 2.0 strategies. In 2008, The New York Times, working with comScore, published an initial estimate to quantify the user data collected by large Internet-based companies. Counting four types of interactions with company websites in addition to the hits from advertisements served from advertising networks, the authors found the potential for collecting upward of 2,500 pieces of data on average per user per month.

Business models

Internet marketing is associated with several business models:

* e-commerce — goods are sold directly to consumers or businesses,
* publishing — the sale of advertising,
* lead-based websites — an organization generates value by acquiring sales leads from its website, and
* affiliate marketing — a business rewards one or more affiliates for each visitor or customer brought about by the affiliate's marketing efforts.

There are many other business models based on the specific needs of each person or business that launches an Internet marketing campaign.

Differences from traditional marketing

One-to-one approach

The targeted user is typically browsing the Internet alone, so the marketing messages can reach them personally. This approach is used in search marketing, where the advertisements are based on search engine keywords entered by the user.

Appeal to specific interests

Internet marketing and geo marketing place an emphasis on marketing that appeals to a specific behavior or interest, rather than reaching out to a broadly defined demographic. On- and offline marketers typically segment their markets according to age group, gender, geography, and other general factors; Internet marketers also have the luxury of targeting by activity and geolocation. For example, a kayak company can post advertisements on kayaking and canoeing websites in the full knowledge that the audience has a related interest.

Internet marketing differs from magazine advertising, where the goal is to appeal to the projected demographic of the periodical. Because the advertiser has knowledge of the target audience (people who engage in certain activities, such as uploading pictures or contributing to blogs), the company does not need to rely on the expectation that a certain group of people will be interested in its new product or service.

Geo targeting

Geo targeting (in internet marketing) and geo marketing are the methods of determining the geolocation (the physical location) of a website visitor with geolocation software, and delivering different content to that visitor based on his or her location, such as country, region/state, city, metro code/zip code, organization, Internet Protocol (IP) address, ISP or other criteria.

Different content by choice

A typical example of different content by choice in geo targeting is the FedEx website at FedEx.com, where users first select their country and are then presented with different site or article content depending on their selection.

Automated different content

With automated different content in Internet marketing and geomarketing, the delivery of different content is automated based on the visitor's geolocation and other personal information, with no action required from the visitor.
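
A rough sketch of the automated case is shown below: a visitor's IP address is mapped to a country and a localized message is chosen. The IP ranges and content strings are made-up placeholders; a real deployment would query a commercial geolocation database or service.

    # A minimal sketch of automated geo targeting. The IP-to-country table is a
    # made-up placeholder; real systems query a geolocation database or service.
    import ipaddress

    # Hypothetical mapping of network ranges to country codes (illustrative only).
    GEO_TABLE = {
        ipaddress.ip_network("203.0.113.0/24"): "AU",
        ipaddress.ip_network("198.51.100.0/24"): "DE",
        ipaddress.ip_network("192.0.2.0/24"): "US",
    }

    # Country-specific content to deliver (again, placeholder values).
    LOCALIZED_CONTENT = {
        "AU": "Free shipping within Australia!",
        "DE": "Kostenloser Versand innerhalb Deutschlands!",
        "US": "Free shipping within the United States!",
    }
    DEFAULT_CONTENT = "Welcome to our international store."

    def content_for_visitor(ip_string):
        ip = ipaddress.ip_address(ip_string)
        for network, country in GEO_TABLE.items():
            if ip in network:
                return LOCALIZED_CONTENT.get(country, DEFAULT_CONTENT)
        return DEFAULT_CONTENT

    print(content_for_visitor("203.0.113.7"))   # falls in the "AU" range above
    print(content_for_visitor("8.8.8.8"))       # unknown range, default banner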


Saturday, October 25, 2008

World of Google Adsense

AdSense is an advertisement serving program run by Google. Website owners can enroll in this program to enable text, image, and more recently, video advertisements on their websites. These advertisements are administered by Google and generate revenue on either a per-click or per-impression basis. Google is also currently beta-testing a cost-per-action based service.

Overview

Google uses its Internet search technology to serve advertisements based on website content, the user's geographical location, and other factors. Those wanting to advertise with Google's targeted advertisement system may enroll through AdWords. AdSense has become a popular method of placing advertising on a website because the advertisements are less intrusive than most banners, and the content of the advertisements is often relevant to the website.



Currently, AdSense uses JavaScript code to incorporate the advertisements into a participating website. If the advertisements are included on a website that has not yet been crawled by the Mediabot, AdSense will temporarily display advertisements for charitable causes, also known as public service announcements (PSAs). (The Mediabot is different from the Googlebot, which maintains Google's search index.)
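
The behaviour described above amounts to a simple rule: serve contextually matched advertisements when the page has already been analysed, and fall back to PSAs when it has not. The sketch below is a toy model of that rule with invented data structures; it is not Google's serving code or the Mediabot.

    # A toy model of contextual serving with a PSA fallback -- an illustration of
    # the behaviour described above, not the actual AdSense/Mediabot implementation.

    # Hypothetical index of pages the ad crawler has already analysed: URL -> keywords.
    CRAWLED_INDEX = {
        "http://example.com/kayaking": {"kayak", "paddle", "river"},
    }

    # Hypothetical ad inventory: keyword -> list of ad texts.
    AD_INVENTORY = {
        "kayak": ["Discount kayaks at KayakShop.example"],
        "paddle": ["Carbon paddles, 20% off"],
    }

    PSA = "Public service announcement: support your local charity."

    def ads_for_page(url, max_ads=2):
        keywords = CRAWLED_INDEX.get(url)
        if keywords is None:
            # Page not crawled yet: show PSAs until the crawler has classified it.
            return [PSA]
        matched = [ad for kw in keywords for ad in AD_INVENTORY.get(kw, [])]
        return matched[:max_ads] or [PSA]

    print(ads_for_page("http://example.com/kayaking"))   # contextually matched ads
    print(ads_for_page("http://example.com/new-page"))   # PSA fallback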

Many websites use AdSense to monetize their content. AdSense has been particularly important for delivering advertising revenue to small websites that do not have the resources for developing advertising sales programs and salespeople. To fill a website with advertisements that are relevant to the topics discussed, webmasters implement a brief script on the websites' pages. Websites that are content-rich have been very successful with this advertising program, as noted in a number of publisher case studies on the AdSense website.

Some webmasters invest significant effort into maximizing their own AdSense income. They do this in three ways:

1. They use a wide range of traffic-generating techniques, including but not limited to online advertising.
2. They build valuable content on their websites that attracts AdSense advertisements, which pay out the most when they are clicked.
3. They use copy on their websites that encourages visitors to click on advertisements. Note that Google prohibits webmasters from using phrases like "Click on my AdSense ads" to increase click rates. The phrases accepted are "Sponsored Links" and "Advertisements".

The source of all AdSense income is the AdWords program, which in turn has a complex pricing model based on a Vickrey second-price auction. AdSense requires each advertiser to submit a sealed bid (i.e., a bid not observable by competitors). For any given click received, the advertiser pays only one bid increment above the second-highest bid.
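
A rough sketch of that pricing rule is shown below, under the simplifying assumptions of a single ad slot and a fixed $0.01 bid increment (both assumptions for illustration; the real AdWords auction also weighs ad quality and ranks multiple slots).

    # A simplified second-price ("Vickrey") pricing sketch for a single ad slot.
    # The single-slot setup and the $0.01 increment are assumptions for illustration.

    def winning_price(bids, increment=0.01):
        """bids maps advertiser name -> sealed bid in dollars."""
        if len(bids) < 2:
            raise ValueError("need at least two sealed bids")
        ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
        winner, _top_bid = ranked[0]
        second_bid = ranked[1][1]
        # The winner pays one increment above the second-highest bid,
        # never their own full bid.
        return winner, round(second_bid + increment, 2)

    bids = {"alice": 1.50, "bob": 1.20, "carol": 0.90}
    print(winning_price(bids))   # ('alice', 1.21)
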
History

The underlying technology behind AdSense was derived originally from WordNet, Simpli (a company started by the founder of WordNet, George A. Miller), and a number of professors and graduate students from Brown University, including James A. Anderson, Jeff Stibel, and Steve Reiss.[1] A variation of this technology utilizing WordNet was developed by Oingo, a small search engine company based in Santa Monica, founded in 1998 by Gilad Elbaz and Adam Weissman. Oingo changed its name to Applied Semantics in 2001 and was acquired by Google in April 2003 for US$102 million.

AdSense for Feeds

In May 2005, Google announced a limited-participation beta version of AdSense for Feeds, a version of AdSense that runs on RSS and Atom feeds that have more than 100 active subscribers. According to the Official Google Blog, "advertisers have their ads placed in the most appropriate feed articles; publishers are paid for their original content; readers see relevant advertising—and in the long run, more quality feeds to choose from."

AdSense for Feeds works by inserting images into a feed. When the image is displayed by an RSS reader or Web browser, Google writes the advertising content into the image that it returns. The advertising content is chosen based on the content of the feed surrounding the image. When the user clicks the image, he or she is redirected to the advertiser's website, just as with regular AdSense advertisements.
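
To make the mechanism concrete, the sketch below builds a feed item with an embedded advertisement image. The ad-serving URL, its parameters and the markup are invented for illustration; they are not the real AdSense for Feeds endpoints or format.

    # A sketch of the "ad image embedded in a feed item" idea. The ad-serving URL,
    # its parameters and the markup are invented; this is not the real AdSense
    # for Feeds format.
    from xml.sax.saxutils import escape
    from urllib.parse import urlencode

    AD_ENDPOINT = "http://ads.example.com/feed-ad"   # hypothetical ad image server

    def feed_item(title, permalink, body_html):
        # The image URL carries the article's permalink so the ad server can pick
        # an advertisement that matches the surrounding content when it renders
        # the image at display time.
        ad_img = '<img src="%s?%s" alt="Advertisement"/>' % (
            AD_ENDPOINT, urlencode({"url": permalink}))
        description = escape(body_html + ad_img)   # HTML is escaped inside the XML description
        return (
            "<item>\n"
            "  <title>%s</title>\n"
            "  <link>%s</link>\n"
            "  <description>%s</description>\n"
            "</item>" % (escape(title), escape(permalink), description)
        )

    print(feed_item("Kayaking the river", "http://example.com/kayaking",
                    "<p>Trip report...</p>"))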

AdSense for Feeds remained in beta until August 15, 2008, when it became available to all AdSense users.

AdSense for search

A companion to the regular AdSense program, AdSense for search allows website owners to place Google search boxes on their websites. When a user searches the Internet or the website with the search box, Google shares any advertising revenue it makes from those searches with the website owner. However, the publisher is paid only if the advertisements on the page are clicked; AdSense does not pay publishers for searches alone.

AdSense for mobile content

AdSense for mobile content allows publishers to generate earnings from their mobile websites using targeted Google advertisements. Just like AdSense for content, Google matches advertisements to the content of a website — in this case, a mobile website.


Wednesday, October 8, 2008

Share and Get Money

Are you looking for a social networking site that actually benefits AdSense owners? We would like to recommend one that is ideal for sharing: you can share videos, photos, and your blog.

Not everyone on the site is an AdSense publisher, but if you are, it is a good place to promote your site or blog. At the very least you will make new friends, and that is good for you.

So don't hesitate; join now at www.flixya.com



Wednesday, October 1, 2008

SEO vs. Link Exchange!

Which is more effective: submitting your URL to as many search engines as possible, or spreading your URL through the guest books of other websites and blogs?


You probably already know that the more widely your links are spread, the greater the chance that people will visit your website or blog. Why is that?

The search engines most commonly used worldwide are, of course, Google and Yahoo.

Search engines like Google and Yahoo each maintain their own databases and build their own indexes for finding website and blog addresses.

So, which do you think works better: SEO or link exchange?

