
How URLs Work

Uniform Resource Locators (URLs) are the addresses used on the World Wide Web. Every web page or resource is specified by a unique URL, which means the internet consists of billions of URLs. They are the building blocks of navigation and the currency of the web. A hyperlink or link (a clickable URL) is any text, image or button that you can click on to jump to a new website or to a new page on the same site. There are only a few basic ways to access web pages:
1) Click on hyperlinks from another website.
2) Indirectly - find and click on links in Google (or other search engine) results.
3) Directly - enter a domain or URL into a browser's address bar.

• Entering a domain or URL into the browser's address field is known as direct navigation or "type-in" traffic and amounts to about 10% of internet traffic (a browser's address bar shows the current URL and also accepts a pasted-in or typed-in URL that the user wishes to visit).
• The social interface to the Web relies on email and networking sites. As a way for users to recommend web pages to each other, email and bookmarking sites are second only to search engines.
• URLs longer than about 78 characters will usually wrap across a line break, increasing the likelihood of breaking. Some email clients impose a length limit at which lines are automatically broken, requiring the user to paste a long URL back together rather than simply clicking on it. A short URL eliminates this problem.
• People sometimes guess the domain name of sites they have not visited before, so pick a name that describes your blog, company or brand. Even when people have been to a site before, they will often try to guess or remember the site name instead of using a bookmark or history list, so an ideal domain name is short, simple, memorable and easy to spell.

Two types of URLs

• Static URLs get content directly from server files and always stay the same unless a webmaster purposefully changes an html file.
• Dynamic URLs get content from templates + databases. When such a URL is requested, a web script begins with a template and fills in the details by fetching information from a database. Since database content can be updated frequently, webpage content can change frequently as well.
• The address itself will indicate static or dynamic. If it contains any query-string characters such as ? & = then it is a dynamic URL, eg: http://tiny.cc/newforum/thread.php?threadid=357&sort=date (note also that blank spaces are never allowed in a web address).

Static URLs contain only dots, slashes, dashes or underscores, eg: http://tiny.cc/newforum/url_discussion.html, and are considered clean or user-friendly URLs because they are more human-readable and descriptive than dynamic URLs. Static URLs are also typically ranked better in search engine results, while dynamic URLs tend not to get indexed.
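The query-string rule of thumb above is easy to check programmatically. Here is a minimal sketch in Python (the function name is our own, for illustration only, and not part of any tiny.cc tooling):

```python
from urllib.parse import urlsplit

def is_dynamic(url):
    # By the rule of thumb above, a URL is dynamic if it carries a
    # query string, i.e. anything after a '?'.
    return urlsplit(url).query != ""

print(is_dynamic("http://tiny.cc/newforum/thread.php?threadid=357&sort=date"))  # True
print(is_dynamic("http://tiny.cc/newforum/url_discussion.html"))                # False
```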

Parts of a URL

Note that www. is only a subdomain (standing for World Wide Web, long used by convention but not always necessary). The link example www.tiny.cc/ contains an error due to the missing protocol. And http://www.tiny.cc is not a required format, because tiny.cc happens to be configured without a mandatory subdomain (making www. optional). So http://tiny.cc is shorter, using only the required, basic parts that make a URL functional: protocol + domain + top-level domain (http:// + tiny + .cc).
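Python's standard library can split a URL into these parts; a quick sketch:

```python
from urllib.parse import urlsplit

parts = urlsplit("http://tiny.cc/newforum/url_discussion.html")
print(parts.scheme)  # http - the protocol
print(parts.netloc)  # tiny.cc - domain + top-level domain
print(parts.path)    # /newforum/url_discussion.html
```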


A short URL is not a web page on its own. Instead, it is simply a pointer that forwards traffic to a different address, effectively making the same web page available under more than one address: both the original (long) address and the Tiny address. Browsers and servers talk back and forth using "headers", which carry various pieces of information; a status code is one piece of information exchanged through a header. Say you click on a shortened URL (eg. http://tiny.cc/x). A conversation through headers takes place between your computer and the tiny.cc server. Tiny.cc URLs use 301 redirects, which means your browser's request is answered with a 301 (moved permanently) status code that forwards your browser to the "moved to" address (which we are calling the long URL). A search engine spider can follow links just as your browser did, and is satisfied if it sees the 301 redirect method in use. Search engines care about redirects because when they see one, they need to decide how to pass link popularity.
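The lookup-and-redirect step can be sketched as follows. The table, short code and URLs here are hypothetical placeholders, not tiny.cc's actual implementation:

```python
# Hypothetical table mapping short codes to long URLs.
SHORT_LINKS = {
    "x": "http://example.com/some/very/long/page.html",
}

def resolve(code):
    # Return the status code and headers a shortener might send back:
    # 301 (moved permanently) with a Location header naming the long
    # URL, or 404 if the short code is unknown.
    long_url = SHORT_LINKS.get(code)
    if long_url is None:
        return 404, {}
    return 301, {"Location": long_url}

status, headers = resolve("x")
print(status, headers["Location"])  # 301 http://example.com/some/very/long/page.html
```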

Link Masking

Potential buyers sometimes avoid clicking if they can see that the link involves marketing efforts or affiliate commissions. Masking means that your affiliate URL never becomes visible in the browser's address bar.
• A very simple method, if you have your own website, is to create a blank html page containing redirect code that points at your affiliate URL.
Then name the html page something that makes sense for the application, such as http://yourwebsite.com/orderpage. In this example you would share http://yourwebsite.com/orderpage in place of your affiliate URL; when clicked, it instantly redirects to the affiliate URL, which never appears in the browser's address bar before clicking. See http://tiny.cc/securityscan.html for a working example of this method.
• A second html method for website owners is to use an iframe. Instead of redirecting, an iframe loads one page within another webpage. And since the original web address never changes, the URL seen in the address bar never changes either.
• A third simple method, if you don't have a website, requires only a domain name. Your domain registrar account usually has URL-forwarding options for the domain. So set up URL forwarding (using a 301 redirect) to http://affiliatewebsite.com/affiliateID. This creates an instant redirect from your domain name to the affiliate URL (http://mysite.com --> http://affiliatewebsite.com/affiliateID). The customer will never see the destination URL in their browser's address bar ahead of time.
There are other masking or cloaking techniques, but most are frowned upon by search engines due to their potential for abuse, and they usually rely on javascript, server configuration or scripting languages such as php or perl.
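For the first (blank-page) method above, the exact redirect code is not reproduced in this article; one common approach, assumed here purely for illustration, is an instant meta refresh. A Python sketch that builds such a page (the affiliate URL is a placeholder):

```python
def masking_page(affiliate_url):
    # Build a minimal html page that instantly redirects (0-second
    # meta refresh) to the affiliate URL.
    return (
        "<html><head>"
        f'<meta http-equiv="refresh" content="0;url={affiliate_url}">'
        "</head><body></body></html>"
    )

# Save this output as e.g. orderpage.html on your own site.
print(masking_page("http://affiliatewebsite.com/affiliateID"))
```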
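The iframe method from the second bullet can be sketched the same way; again the affiliate URL is a placeholder:

```python
def iframe_page(affiliate_url):
    # Load the affiliate page inside a full-window iframe, so the
    # address bar keeps showing this page's own URL.
    return (
        '<html><body style="margin:0">'
        f'<iframe src="{affiliate_url}" '
        'style="border:0;width:100%;height:100vh"></iframe>'
        "</body></html>"
    )

print(iframe_page("http://affiliatewebsite.com/affiliateID"))
```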

Search Engines and Link Indexing

Two methods that search engines use to discover links and pages:
1) Webmasters can inform search engines about new links with a submission form.
2) Search engines automatically find links when they exist on public pages.
If a link is never used on the internet or shared publicly, then it doesn't exist in a cyber sense. For example, if you have a link pointing to personal information and only you or a family member use it, there is no way for a web crawler to find it; it lives outside their known universe of pages and links. Search engines find and index a massive amount of information, but they are not all-seeing and all-knowing (see, for instance, the Deep Web and Dark Web). There is a lot that search engines never find, because it is not posted on the web in a fashion they can interpret.

For instance, Google cannot crawl and index the links inside your TINY account. Crawlers are not allowed behind our login and have no way to get into accounts. Short links are not pages on our site; they exist in a database that robots have no general access to. So unless shared, short links never appear in search results.