Joe Meloni

Bloggers and content writers who responsibly quote third-party sources are not at risk of duplicate-content penalties, according to a recent Webmaster Help video from Google's Matt Cutts.

In the video, Cutts cites the example of a writer who includes a linked excerpt from another article to add insight and value for readers. Sites that stick to this and similar ethical techniques, he said, will never run into trouble.

“You’re just a regular blogger, and you want to include an excerpt (from) some author you like or some other blogger who has a good insight. Just put that in a blockquote, include a link to the original source and you’re in pretty good shape,” Cutts said. “If that’s the sort of thing you’re doing, I would never worry about getting dinged for duplicate content.”
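
In practice, that advice amounts to a couple of lines of markup. The sketch below is purely illustrative, not an official template: a hypothetical Python helper that wraps an excerpt in a blockquote with a link back to the original source, the way Cutts describes. The function name, URL and author are placeholders.

```python
import html

def attributed_excerpt(excerpt: str, source_url: str, source_name: str) -> str:
    """Return blockquote markup that quotes an excerpt and links to its source."""
    safe = html.escape(excerpt)  # escape the quoted text so it renders safely
    return (
        f'<blockquote cite="{source_url}">\n'
        f'  <p>{safe}</p>\n'
        f'</blockquote>\n'
        f'<p>Source: <a href="{source_url}">{html.escape(source_name)}</a></p>'
    )

# Hypothetical usage; the URL and author are placeholders.
print(attributed_excerpt(
    "Some insight worth sharing.",
    "https://example.com/original-post",
    "Example Author",
))
```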

Google has designed its algorithms to recognize quotes that carry attribution, so sites that quote this way are unlikely to experience problems.
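
Google has not published how its duplicate detection works, but a standard technique in the field, w-shingling with Jaccard similarity, illustrates why a short attributed excerpt looks nothing like a wholesale copy. The sketch below is an assumption-laden illustration of that general technique, not Google's actual algorithm.

```python
def shingles(text: str, w: int = 5) -> set:
    """Return the set of overlapping w-word windows ("shingles") in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(max(len(words) - w + 1, 0))}

def jaccard(a: str, b: str, w: int = 5) -> float:
    """Shingle-set overlap: 0.0 means no shared phrasing, 1.0 means identical."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

# A post that quotes one paragraph of a long article scores low against the
# original; a wholesale copy scores near 1.0.
```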

The standards journalists and other writers follow when quoting sources for mainstream news publications apply equally to any site acting as a publisher on the web. Citing sources and including links generally keeps Google from taking action.

“If your idea of quoting is including an entire article from some other site or maybe even multiple articles and you’re not doing any original articles yourself, then that can affect (Google’s view) of your site,” Cutts said.

In general, developing original articles that draw valuable insight from third parties is good practice. As part of their SEO strategies, companies often turn to news content marketing to provide industry-focused perspectives on popular topics, and those that quote other sources with links back to the original pages aren't at risk of duplicate-content penalties.

In fact, detecting duplicate content and penalizing offending sites has been a focus for Google since the early days of Panda. Beyond Panda, Brafton reported that Google adjusted its algorithms in November 2011 to better distinguish sites that scrape content from those publishing original or properly attributed information.