So let's just say you're a budding entrepreneur. You've gone into business for yourself and set up that all-important website. It's your digital storefront. No need for a brick-and-mortar store anymore, and no need for a random passerby to wander in off the street. Today, all you need are virtual visitors -- people who are keenly interested in buying what you're selling.
I’ve just taken on the SEO role at my agency full-time and, whilst it can be difficult at times, I am enjoying the challenge. I wonder if you have any suggestions for finding “opportunity keywords” for terms/subjects that don’t necessarily have massive search volumes associated with them? I use a few tools and utilise Google’s related terms already, but wondered if there were any tricks for finding new markets?

Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
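To illustrate the last point (the paths below are hypothetical, not from any real site), consider a robots.txt like this. Compliant crawlers will skip the listed directories, but the file itself is publicly readable at /robots.txt, so it doubles as a map of exactly where the "hidden" content lives:

```
User-agent: *
Disallow: /admin/
Disallow: /internal-reports/
Disallow: /customer-exports/
```

Anyone can fetch this file in a browser and then try requesting /internal-reports/ directly. For genuinely sensitive material, use server-side authentication; for pages you merely want out of search results, a noindex directive is the appropriate tool.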
The idea of “link bait” refers to creating content that is so extremely useful or entertaining it compels people to link to it. Put yourself in the shoes of your target demographic and figure out what they would enjoy or what would help them the most. Is there a tool you can make to automate some tedious process? Can you find enough data to make an interesting infographic? Is there a checklist or cheat sheet that would prove handy to your audience? The possibilities are nearly endless – survey your visitors and see what is missing or lacking in your industry and fill in the gaps.
To give you an example, our domain authority is currently a mediocre 41 because we didn't put much emphasis on it in the past. For that reason, we want to (almost) automatically scratch off any keyword with a difficulty higher than 70%—we just can't rank for those terms today. Even starting in the 60% range is gutsy, but it's achievable if the content is good enough.
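The triage described above is easy to automate once you've exported keyword data from your tool of choice. This is a minimal sketch with made-up keywords and difficulty scores; the 70% cutoff mirrors the threshold in the text:

```python
# Hypothetical keyword rows: (keyword, difficulty %, monthly search volume).
# The data and the 70% ceiling are illustrative, matching the text above.
keywords = [
    ("seo audit checklist", 45, 1300),
    ("enterprise seo platform", 72, 880),
    ("local seo tips", 63, 2400),
    ("link building tools", 58, 1900),
]

MAX_DIFFICULTY = 70  # scratch off anything harder than this


def viable_keywords(rows, max_difficulty=MAX_DIFFICULTY):
    """Return keywords at or below the difficulty ceiling, easiest first."""
    kept = [row for row in rows if row[1] <= max_difficulty]
    return sorted(kept, key=lambda row: row[1])


for kw, difficulty, volume in viable_keywords(keywords):
    print(f"{kw}: difficulty {difficulty}%, ~{volume} searches/mo")
```

Sorting easiest-first makes the shortlist double as a priority order: with a modest domain authority, the lowest-difficulty terms are the ones worth targeting now.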
If your social media profiles contain a link to your website, then you’ve turned your engagement into another channel for website traffic. Just be sure to engage moderately and in a sincere way, and avoid including links to your website in your comments—lest you appear spammy and hurt your online and business reputation. Increased traffic should not be the goal of your engagement, but rather a secondary result.
Expert roundups have been abused in the Internet Marketing industry, but they are effective for several reasons. First, you don’t have to create any content—the “experts” create all of it. Second, it is ego bait: anyone who participated in the roundup will likely share it with their audience. Last, it is a great way to build relationships with influencers.

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and it instructs the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[47]
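The two mechanisms above work at different levels: robots.txt stops compliant crawlers from fetching a page at all, while the robots meta tag lets a crawler fetch the page but asks engines not to index it. A minimal page-level example (the nofollow value is optional and shown only for illustration):

```html
<!-- Placed inside the <head> of a page that should stay out of search indexes -->
<meta name="robots" content="noindex, nofollow">
```

Note that a page blocked by robots.txt can never be crawled, so a noindex tag on it would never be seen—to deindex a page with the meta tag, the crawler must be allowed to fetch it.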