Want to raise your Facebook ads relevance score? Wondering how to create Facebook ads that drive positive engagement and social proof? In this article, you’ll discover seven ways to quickly and significantly lift your Facebook ads relevance score. What Is the Facebook Ads Relevance Score? Facebook’s relevance score is a rating on a scale of [...]
The post How to Improve Your Facebook Ads Relevance Score: 7 Methods That Work appeared first on Social Media Examiner.
HAL 9000 won’t let Dave back in the spacecraft. What’s gone wrong? Was it something Dave said? Your brand’s online reputation can also determine whether customers will let you into their wallets or not.
In this episode of our popular Here’s Why digital marketing video series, Mark Traphagen reveals the six factors that work together to determine the online reputation of a brand, and gives tips on how marketers can make use of them.
Don’t miss a single episode of Here’s Why with Mark & Eric. Click the subscribe button below to be notified via email each time a new video is published.
Subscribe to Here’s Why
6 Dimensions of Online Reputation that Should Guide Your Social Media Marketing
See all of our Here’s Why Videos | Subscribe to our YouTube Channel
Eric: Mark, do people really take note of a brand and its reputation online when they’re browsing or using social media?
Mark: Eric, they sure do. Think of the last time that you were on Facebook and a post came up from a business. We know at that point a real battle begins as people have developed a resistance to brand content, but an interesting visual, video, or post title will get your attention. Now, where do you look next to decide if reading or watching this is worth your time?
Eric: I get it. I look at the brand and what business it’s from.
Mark: Yes. And what information is that giving you?
Eric: I’m probably asking myself if I trust the brand name, whether I think it’s likely that what they’re going to share is going to be helpful to me in some way.
Mark: Right. So understanding what influences your online reputation is critical for your digital marketing success.
Eric: Sure. But can you share the details of that with us?
Mark: Sure. The people who developed the Harris Poll created what they call the Reputation Quotient, or RQ. Their RQ project conducts regular surveys to determine the brands that most stick in the minds of consumers. They then evaluate each of those brands according to six dimensions of reputation: products and services, emotional appeal, vision and leadership, workplace environment, financial performance, and social responsibility.
Eric: I know in your Marketing Land column you’ve written about each of those in detail, but since this is a brief video, why don’t you pick the three you think are the most important for social media marketers and SEOs?
Mark: Sounds good. I’d put products and services at the top of the list. Which is kind of funny since it’s one of the factors marketers have little control over. But it’s undeniably true that if your company has a reputation for crappy products or bad service, all the marketing in the world isn’t going to overcome that. Now, where marketers can influence that process is by relaying timely customer feedback, especially from social media, to the people who can act on it. That can help improve products or services more quickly.
Second, I think marketers need to think about emotional appeal. Study after study shows that emotions play a bigger role than rational thought in human choices including purchasing decisions.
A great example of that was REI’s Mirnavator video.
It’s an inspiring and heartwarming story of a woman overcoming multiple challenges, including online bullying, to pursue her passion for running. Mirna’s story made a strong emotional connection with REI’s customers, fitness enthusiasts who also tend to be socially conscious.
REI’s strong connection with the values of environmental conservation, along with the strength of the human spirit, has made loyal customers out of me and millions like me.
Eric: What’s the third dimension then of online reputation that marketers should pay attention to?
Mark: That would be, I think, vision and leadership. Now, as with products and services, vision and leadership at first glance would seem like an area marketing has little control over. However, marketing can do a lot to amplify the vision and leadership of a company, provided that company clearly has both of those.
If the company has a visionary, charismatic CEO, or industry respected thought leaders, marketing should be creating opportunities for them to be seen and heard. I only have to mention names like Elon Musk or Steve Jobs to evoke the powerful effect such leaders can have on a brand when they’re made public.
Eric: Thanks, Mark.
Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we discuss how to implement Instagram’s new two-factor authentication update and weigh the pros and cons of Facebook’s new [...]
The post Portal From Facebook: Marketing Experts Weigh In on Facebook Smart Speakers appeared first on Social Media Examiner.
Want more engagement in your Facebook group? Looking for tips on shaping your group’s culture? To explore how to build a loyal and engaged community inside of Facebook groups, I interview Dana Malstaff. More About This Show The Social Media Marketing podcast is designed to help busy marketers, business owners, and creators discover what works [...]
The post How to Cultivate Community With Facebook Groups appeared first on Social Media Examiner.
Do you need to improve your LinkedIn marketing? Wondering which content types will perform best for you? In this article, you’ll find a step-by-step process to help you create a LinkedIn content marketing plan. Why Content Marketing on LinkedIn Deserves a Second Look LinkedIn is a thriving community of more than 500 million members around [...]
The post How to Create a Content Marketing Plan Using LinkedIn appeared first on Social Media Examiner.
Have you ever banked on something you believed was a sure thing, only to be proven wrong? Then watch the Journey, Social Media Examiner’s episodic video documentary that shows you what really happens inside a growing business. Watch the Journey In episode 6, Michael Stelzner (founder of Social Media Examiner) and his team conclude testing [...]
The post Leaning Into Launch Day: The Journey, Season 2, Episode 6 appeared first on Social Media Examiner.
If you’re not familiar with Amazon marketing, you probably should be. Sponsored Products are ads you can run through Amazon, designed to feature a product in a way that makes it look like it’s recommended by the platform. Think of them as native advertising for Amazon. They’re immensely popular, and will only become more so as adoption grows. Better to get in now while the getting is good, optimize your way to the top of the pack, and get a head start on the future of Amazon marketing.
What are Sponsored Products?
If you’ve ever browsed Amazon, at least without an adblocker, you’ve seen products in the search results with a Sponsored label next to them. These look like any other product listing in the search results, appearing at the top and bottom of the results page, only they have a “sponsored products” category heading. They can appear in many categories, though not all product categories allow them. Appliances, Automotive, Beauty, Collectibles, Computers, Electronics, Art, Grocery, Luggage, Music, Outdoors, Shoes, Software, Sports, and Video Games are just some of the available categories.
Sponsored products are ads designed to send traffic from the search results page to the product detail page for the product you choose to advertise. You don’t have a ton of flexibility with the copy for the ads; you can choose which product and which details of the product are most relevant, but you can’t completely customize the ads. What you can do, however, is adjust a lot of the behind the scenes elements to improve the visibility and click-through rates for these ads. Let’s talk about how.
20 Methods for Optimizing Sponsored Product Ads
There are a lot of options available to you for optimizing your product ads. I’ve compiled 20 pieces of advice you can use. Play around with them, experiment, and figure out what works best for your products in your categories.
1. Understand Campaign Types
First up, you should understand that there are two types of campaigns, Automatic and Manual, and what the benefits and drawbacks are to each of them. The names refer to two different types of targeting.
The difference between the two is pretty simple: Automatic targeting allows Amazon to choose which keyword searches are relevant to the product, and thus where to show your ads, based on your product page copy. Manual allows you to choose specific keywords to run for your ads. Automatic targeting is, obviously, easier and less time consuming to run. Manual gives you more control, and thus typically higher click rates and better results – assuming you optimize properly – but has a higher time investment and a greater chance of failure.
2. Use Automatic Campaigns First
Start by running automatic campaigns. This allows Amazon to use its vast array of customer data and intent analytics to figure out how to advertise your products for you. They won’t be the best-converting ads, or the cheapest, but they’ll be serviceably middle-of-the-road. You won’t be wasting a ton of money or leaving a ton of conversions on the table.
The key to success with Amazon sponsored products is to start with automatic campaigns for at least a month, to gather data. The data Amazon will provide you is invaluable to future optimizations.
3. Don’t Fall Prey to Product Bias
Everything in your storefront is something you believe will sell. All too often, I see Amazon marketers get tunnel vision, hoping to increase sales of the products they feel are most successful, leaving their other products in the dust. If you have a huge catalog, you might not have the budget to advertise all of your products, but you should definitely experiment.
The problem here is that all too often your successful products are successful because you’re already reaching a significant part of your audience. Sponsored product ads will make them a bit more successful, but it might not be as big a benefit as advertising a less successful product. If you consider that you’re only as successful as your least successful product, it makes more sense to raise up the underperformers than to boost the overperformers even more.
4. Run One SKU Per Ad Group
When you run an ad campaign with Amazon sponsored products, you can add as few or as many individual product SKUs to the campaign as you like. I highly recommend you only add one SKU per ad group.
Why? Amazon’s analytics used to provide detailed data about your ads, including which keywords drew in clicks to which SKUs. However, they wiped some of this data, and now you can see which keywords are bringing in how many clicks and sales, but you can’t see which SKU in the ad group brought in that data. However, if you only have one SKU in the ad group, you know that it’s that one SKU that brought in that performance.
This is a lot easier with small catalogs than larger catalogs, of course. If you have 30 products, managing 30 ad groups is relatively easy. If you have 1,000 products, managing 1,000 ad groups becomes a lot harder.
5. Use Bulk Operations if Necessary
This is less of an optimization tip and more of a time-saving tip. If you have over, say, 100 products, you should probably make use of Amazon’s Bulk Operations for Sponsored Products. This is a tool that allows you to essentially automate the creation of sponsored product ads for your product catalog. It will require a little bit of Excel wizardry, but it allows you to manage a large number of ads simply by uploading a spreadsheet, rather than having to manually create and specify details of ads for every single SKU or keyword you want to use.
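To get a feel for what that spreadsheet work looks like, here’s a minimal Python sketch that generates one single-SKU ad group per catalog item. The column headers below are illustrative placeholders, not Amazon’s actual bulk-file schema; download the official Bulk Operations template and map your rows onto its real columns.

```python
# Sketch: build one ad group (one SKU each) per catalog item for a bulk upload.
# Column names here are hypothetical placeholders, NOT Amazon's real schema.
import csv
import io

def build_bulk_rows(skus, default_bid=0.50, daily_budget=10.00):
    rows = []
    for sku in skus:
        rows.append({
            "campaign": f"auto-{sku}",
            "ad_group": sku,  # one SKU per ad group, per tip #4
            "sku": sku,
            "default_bid": f"{default_bid:.2f}",
            "daily_budget": f"{daily_budget:.2f}",
        })
    return rows

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

sheet = to_csv(build_bulk_rows(["MASCARA-01", "LINER-02"]))
print(sheet)
```

Even at a thousand SKUs, regenerating the sheet from your catalog export beats hand-editing a thousand ad groups in the console.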
6. Harvest Data
As already mentioned, Amazon’s analytics will provide you information about the ad groups you run, including how well that ad group performed and for what keywords.
When you have one SKU per ad group, you can draw a direct set of data: this SKU performs X well for Y keywords. After you run your ads for at least a couple of weeks, if not a month, check your search term reports and see how well your products have performed, and for which keywords.
7. Understand Data
The data you harvest needs to be understood before you can use it for further optimizations. Amazon will report a variety of different metrics, which you can attribute to individual SKUs if you divided up your ads as I mentioned in step 4.
Number of orders. This is the number of conversions per keyword per SKU. Be sure to normalize this by time; otherwise, it will look like older ads are performing better than newer ads based on this metric alone.
Sales. Similar to the above, this is the number of sales of a SKU per keyword for the ad. This includes individual product sales, however, and thus will be higher in cases where one order included multiples of the same product.
Clicks. The number of times a sponsored product ad was clicked, per keyword per SKU. This allows you to calculate the conversion rate for individual ads.
8. Correlate Data
Using the data Amazon gives you, you can correlate trends and figure out where you want to focus your energies. Export your data and start making some charts. Compare each SKU and the keyword data for that SKU, compiling lists of the best keywords for each. For each keyword, calculate the clicks-per-sale rate, and identify the best handful of keywords for each ad group, which will correspond to each SKU.
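That correlation step can be scripted. Here’s a minimal sketch; the column names ("sku", "keyword", "clicks", "orders") are assumptions, so map them to whatever headers your actual Amazon export uses.

```python
# Sketch: rank keywords per SKU by fewest clicks per sale from an exported
# search term report. Column names are placeholders for your real export.
from collections import defaultdict

def best_keywords(rows, top_n=5):
    """Return the top_n keywords for each SKU, ranked by clicks-per-sale."""
    by_sku = defaultdict(list)
    for row in rows:
        orders = row["orders"]
        if orders == 0:
            continue  # no sales yet; not enough data to rank this keyword
        clicks_per_sale = row["clicks"] / orders
        by_sku[row["sku"]].append((clicks_per_sale, row["keyword"]))
    return {
        sku: [kw for _, kw in sorted(pairs)[:top_n]]
        for sku, pairs in by_sku.items()
    }

report = [
    {"sku": "MASCARA-01", "keyword": "green mascara", "clicks": 120, "orders": 12},
    {"sku": "MASCARA-01", "keyword": "mascara", "clicks": 500, "orders": 10},
    {"sku": "MASCARA-01", "keyword": "emerald mascara", "clicks": 40, "orders": 8},
]

print(best_keywords(report, top_n=2))
# {'MASCARA-01': ['emerald mascara', 'green mascara']}
```

The output for each SKU is exactly the shortlist you’ll feed into the manual campaigns in the next step.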
9. Build Manual Campaigns
Once you have your data correlated and ready to go, you can cancel the automatic campaigns for certain products and replace them with manual campaigns.
Specify which keywords go with which SKUs and run those ads without the additional exposure in underperforming keywords. Keep an eye on these and make sure the data continues to perform.
10. Use Broad Match Keywords
Broad match means your ad can show for any search term that includes part of the keyword or a synonym of it. If you’re advertising “green mascara”, your ad will show up for product searches including “emerald mascara” or “green makeup” or even just “mascara.” Use these for exposure and to gather data about which modifiers your customers are most likely to use.
11. Use Phrase Match Keywords
Phrase matching for keywords means the full phrase must be part of the search term, but the search term can include additional words. Amazon will also take close variations into account, including typos and close synonyms. This is useful for more refined ads once you know which keywords work best.
12. Use Exact Match Keywords
Exact match, as you might expect, is specifically your keyword and no other words, synonyms, or phrases.
Use this only once you’ve drilled down to the most effective keywords and know they’re going to last for a while. If the keywords have high seasonality, you’ll need to keep a close eye on when they fall off and cut off the ads before you waste too much of your budget.
13. Use Negative Keywords
Negative keywords are keywords that you include if you specifically don’t want your ad to appear for those searches. For example, if you know users are searching for organic or natural makeup and your mascara is not all-natural, you can include natural/organic as negative keywords. This will prevent you from advertising a product to people who aren’t going to buy it because it doesn’t match what they want.
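To make the four match behaviors concrete, here’s a toy simulation. This is only an illustration of the concepts, not Amazon’s actual matching, which also handles synonyms, plurals, and typos that this word-containment sketch ignores.

```python
# Simplified model of exact, phrase, broad, and negative keyword matching.
def matches(keyword, search_term, match_type, negatives=()):
    kw_words = keyword.lower().split()
    st_words = search_term.lower().split()
    # Negative keywords veto the ad regardless of match type.
    if any(neg.lower() in st_words for neg in negatives):
        return False
    if match_type == "exact":
        return kw_words == st_words
    if match_type == "phrase":
        # The full keyword phrase must appear, in order, within the search term.
        n = len(kw_words)
        return any(st_words[i:i + n] == kw_words
                   for i in range(len(st_words) - n + 1))
    if match_type == "broad":
        # Every keyword word appears somewhere in the search term.
        return all(w in st_words for w in kw_words)
    raise ValueError(f"unknown match type: {match_type}")

print(matches("green mascara", "cheap green mascara", "phrase"))       # True
print(matches("green mascara", "green waterproof mascara", "phrase"))  # False
print(matches("green mascara", "green waterproof mascara", "broad"))   # True
print(matches("green mascara", "natural green mascara", "broad",
              negatives=("natural", "organic")))                       # False
```

Notice how the same search term can pass broad match but fail phrase match, which is why broad campaigns cast a wider, noisier net.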
14. Calculate Advertising Cost of Sale
Your advertising cost of sale (ACoS) is your ad spend expressed as a percentage of the sales it generates, which makes it a rough proxy for the margin you’re giving up to advertising. You can calculate it by taking your total ad spend, dividing it by your total sales value, and multiplying the result by 100 to get your percentage. Products with a low ACoS should have higher bids to get more traffic. Products with a high ACoS probably have a lot of traffic but few conversions, and may be an opportunity to prune out an underperforming keyword.
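The calculation is one line; here’s a minimal Python version with a guard for the zero-sales case:

```python
def acos(ad_spend, sales_value):
    """Advertising Cost of Sale: ad spend as a percentage of attributed sales."""
    if sales_value == 0:
        return float("inf")  # spend with no sales: infinitely expensive
    return ad_spend / sales_value * 100

# $25 of ad spend driving $100 in attributed sales is a 25% ACoS.
print(acos(25.00, 100.00))  # 25.0
```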
15. Rotate Products
The larger your catalog, the harder it is to advertise everything in it with a limited budget. Periodically rotate out the majority of your ads until you’ve advertised everything in your catalog for at least a base amount of time, typically two to four weeks.
Rotate out 90% of your ads, while keeping your top 10% best performing ads, until you have base data for everything. From there, keep your top performing 50% and rotate through your remaining 50% with experiments to see how they can be optimized.
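The keep-the-top-slice rule above can be sketched like this. Ranking on sales here is an assumption; rank on whatever metric matches your goals.

```python
# Sketch: keep the top fraction of ads by a performance metric and queue
# the rest for rotation out of the active budget.
def rotation_plan(ads, keep_fraction=0.10):
    """ads: list of (ad_id, metric) pairs. Returns (keepers, rotation_pool)."""
    ranked = sorted(ads, key=lambda a: a[1], reverse=True)
    keep_n = max(1, int(len(ranked) * keep_fraction))
    keepers = [ad_id for ad_id, _ in ranked[:keep_n]]
    pool = [ad_id for ad_id, _ in ranked[keep_n:]]
    return keepers, pool

ads = [("A", 120), ("B", 95), ("C", 40), ("D", 310), ("E", 15),
       ("F", 88), ("G", 60), ("H", 22), ("I", 74), ("J", 5)]
keepers, pool = rotation_plan(ads, keep_fraction=0.10)
print(keepers)  # ['D'] -- top 10% of ten ads is the single best performer
```

Once everything in the pool has its base data, rerun the same function with keep_fraction=0.50 to settle into the 50/50 rhythm described above.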
16. Know Your Goals
Make sure, when you’re optimizing your sponsored product ads, that you know your goals. Are you trying to achieve the most revenue, or the highest number of sales in general? This can affect how you adjust your keywords and bidding strategies. In some cases you might be focusing on the highest return on ad spend, while other times you simply want sales numbers and even breaking even works fine.
17. Don’t Be Afraid to Return to Automatic
The general progression for a single SKU is to run automatic ads to harvest initial data, then run manual ads to optimize on that data. However, sometimes you’ll end up pursuing a dead end with underperforming ads all around. In these cases, consider returning to automatic ads to harvest fresh data and look for a new place to start.
18. Use the Right Number of Keywords
When you’re creating your sponsored product ads, you can use as few or as many keywords as you want.
How many should you use? I recommend somewhere in the range of 25-50. Too many keywords will spread your ads too thin, while too few will fail to capture large portions of your audience.
19. Be Consistent
Consistency is key when you’re comparing your data. Try to avoid comparing apples to oranges. If you’re determining which of two products should be advertised, make sure you’re not comparing an ad that ran for two days to an ad that ran for a month, or one with 10 keywords and one with 500.
20. Optimize Product Listings
Amazon’s product ads pull data from your organic product page, with all of the images, details, and information you include available for them to choose. Since you can’t really optimize your ad copy, optimize your product listings instead.
The post 20 Ways to Optimize Your Sponsored Products on Amazon Ads appeared first on Growtraffic Blog.
Wondering how to split test Facebook link preview images without using Facebook ads? Looking for affordable tools to help? In this article, you’ll learn how to use three tools to create, test, and optimize blog post images for better performance in the Facebook news feed. #1: Design 3 to 5 Test Images Using Crello The [...]
The post How to Split Test Facebook Link Preview Images for More Organic Traffic to Your Website appeared first on Social Media Examiner.
The Latest Data on Featured Snippets and the Knowledge Graph
We’ve tracked over 1.4M queries since July 2015, charting how many of them return featured snippets, Knowledge Graph-based results, and/or enhanced regular snippets in the search results. There are links to our prior studies at the bottom of this post (you can use the quick links above).
We used the same set of queries for all four of our studies. All of the queries were selected because they were deemed likely to be something that could be addressed either by a Knowledge Graph result or a featured snippet result. In other words, all the queries look for a relatively simple, factual response.
There are a few important terms to understand when looking at this data:
The Knowledge Graph is a Google database containing many billions of public domain facts.
A Knowledge Panel is factual information provided by Google that is sourced from their Knowledge Graph, Wikipedia, or both. On desktop devices, this appears to the right of the search results, but on mobile devices it appears in line with the regular search results. Other than in the case of Wikipedia, no attribution is provided to the source of the information (since it’s public domain, that is not required).
A Knowledge Box is factual information provided by Google that is primarily sourced from the Knowledge Graph. On both desktop and mobile devices, this appears in-line with the regular search results. As with the Knowledge Panel, no attribution is provided.
A Featured Snippet is information that Google sources from third-party websites, that is then provided above the organic search results, along with attribution to the page where Google sourced the info.
An Enhanced Snippet is when a regular search result is enhanced with more information, beyond just a title and a description. One example of this is the Sitelinks feature in the search results.
A Rich Answer is any search result that has one or more of the above features present in the result.
One more important point, and then on to the data! In this year’s study, we searched the 1.4M+ search queries using a mobile user agent. This does cause some differences in the way our data is calculated, but we’ll explain those as we go along.
Total Rich Answers
We continue to see growth in the total incidence of Rich Answers, as shown here:
Total Featured Snippets
In contrast, we saw a slight drop in total featured snippets:
This represents a drop of about 11% in total featured snippets.
Total Knowledge Graph-Based Results
We also saw a somewhat larger drop in total Knowledge Box + Knowledge Panel results, as shown here:
The total drop was about 32%.
Total Featured Snippet Results with Videos
The incidence of featured snippets that contain videos, however, went up significantly:
Comparing Mobile and Desktop
In addition to examining the 1.4M+ results in mobile, we also took a random sampling of 185,075 additional queries, which we also ran with a desktop user agent to compare the results.
Total Rich Answers: Mobile vs. Desktop
The incidence of rich answers on the desktop is noticeably higher than it is for mobile:
Total Featured Snippets: Mobile vs. Desktop
Desktop has the lead here, though by a smaller margin:
Total Knowledge Graph-Based Results: Mobile vs. Desktop
Total Featured Snippets with Videos: Mobile vs. Desktop
This one is pretty close to a dead heat:
A strong presence of videos in featured snippets in mobile makes sense, as videos work very well in mobile environments.
Rich Answers continue to rise. Across all of search, Google continues to increase the number of results that go beyond the traditional “blue link” with a two-line description.
Featured Snippets and Knowledge Graph results dip. For the first time, we saw a slight decline in the number of Featured Snippet and Knowledge Graph results across all of search.
Featured Snippets with videos grow. After a sharp dip last year, Google seems to have re-committed to video-based Featured Snippets.
All types of Rich Answers are more common on desktop than mobile, including Featured Snippets. However, video Featured Snippets on mobile occur with about the same frequency as on desktop.
Since more and more search slots are being occupied by Rich Answers, it is imperative that you try to earn a place in them for your brand. The easiest to pursue (though still a challenge!) are Featured Snippets.
Do you post photos from your mobile phone? Looking for expert-level mobile photo editing solutions? In this article, you’ll discover three mobile apps that help you edit and deliver professional-looking photographs. #1: Make Complex Edits Using Mendr Even if you’re a pro at using DIY photo editing apps, getting the perfect image can be a [...]
The post 3 Mobile Apps to Improve the Quality of Your Photos appeared first on Social Media Examiner.
Are you using a Facebook Messenger bot to talk with customers? Wondering how to build an email list using your bot? In this article, you’ll learn how to automate the capture of email addresses (and other user information) into the email marketing service of your choice. #1: Set Up a List and Welcome Message in [...]
The post How to Grow Your Email List With a Facebook Messenger Bot appeared first on Social Media Examiner.
Mark can’t decide which yogurt is best, so he’s testing them all. In similar fashion, Google has to do a lot of testing to decide which source makes the best featured snippet for a given query.
In this episode of the popular Here’s Why digital marketing video series, Eric Enge reveals the results of his fascinating study showing how much Google churns the sources for its featured snippets in search.
Don’t miss a single episode of Here’s Why with Mark & Eric. Click the subscribe button below to be notified via email each time a new video is published.
Subscribe to Here’s Why
Featured Snippet Churn Documented: What It Means for Your SEO Strategy
Can a Google Featured Snippet Drive Significant Site Traffic?
See all of our Here’s Why Videos | Subscribe to our YouTube Channel
Mark: Eric, tell the viewers what we did in our latest featured snippet study.
Eric: Happy to. We were interested in measuring how frequently Google changes featured snippets. To do that, we got together with the team at STAT Search Analytics to see if we could use their data to help measure it, because they have an awesome tool for this. We took 5,000 keywords that had shown featured snippets in previous studies we did on the topic, and we checked them with the STAT tool for 124 straight days on both mobile and desktop.
Mark: And what did we see?
Eric: What we saw was really interesting. One of the first findings is, even though we started with 5,000 queries that we knew had historically shown featured snippets, 522 of those queries didn’t show any at all during the 124 days of the new study. But we also saw lots of queries that were very stable, showing featured snippets the entire time. That happened for about 2,200 of the 5,000 queries on desktop, and on the mobile side, about 1,275.
Study shows 10% fewer featured snippets year-over-year in a test set of 5,000 queries.
However, and here is where it gets really interesting, we saw many that changed very, very frequently. For one keyword, “What is day student tuition?” we saw 12 different sites that Google used to show a featured snippet during the test period, and eight days that had no featured snippet at all. So, there is a high amount of what I call churn, or thrashing if you will. It’s just amazing how quickly they’re turning all this stuff over.
Google continues to churn the sites used as sources for featured snippets for a lot of queries.
Also, Wikipedia was the number one provider, showing up for 40% of all the featured snippets on desktop and about 35% on mobile. The top 10 sites accounted for only 47% of all featured snippets, however. Even the next nine sites were only another 7%, which tells you that there’s a long spread of a lot of other domains for the rest of these.
Mark: Were there any significant differences between mobile and desktop?
Eric: There sure were some interesting findings there too. First of all, mobile tended to show more videos than desktop, which is interesting by itself.
The other thing is that we saw that mobile showed far more Knowledge Graph results than desktop. Or, to be a little more precise, we had results on desktop that were showing both the featured snippet and the knowledge graph. And on mobile, because they want a trimmer-looking SERP (Search Engine Results Page), they would drop the featured snippet and show just the knowledge graph. That was one of the big differences between the two platforms.
Mark: What does this mean for webmasters and site publishers?
Eric: First of all, featured snippets remain incredibly important, and you just have to learn how to obtain them. You have to make that part of your business. We have tons of data that shows that you get traffic increases from doing this, and you need to know that Google is doing a lot of dynamic testing, so you have to expect this to be volatile. You might succeed in getting a featured snippet and then you’ll lose it. But Google isn’t investing all this energy unless it’s incredibly important.
I’ve said it before, and I know other people out there are saying it too, one of the reasons why featured snippets are so important is they play an important role in voice environments. So, if someone does a voice query, and Google wants to be able to give that right first answer, then the whole featured snippet thing is about a whole new algorithm that Google is using to get to that perfect first right answer. In the voice environment, if you’re not that perfect right answer, you’re just out of the picture. So that’s another reason why it’s really important to get it.
But even if you do all of these, the interesting thing that I want people to think about, Mark, is it’s actually Google giving you clues on how to make your content better. Put aside Google for a moment; you’d probably want access to a tool that could help you learn how to make your content better for potential visitors to your site. So that’s a whole other reason to do featured snippets.
Altogether, to me, all of these things are such huge opportunities. You’ve just got to go do it.
Mark: Thank you, Eric. That’s really fascinating. And, you know, as Eric alluded to toward the end there, the value of studies like this one is that they not only give you details about, in this case, featured snippets, Knowledge Graph results, and mobile versus desktop, but also insight into how Google thinks and how Google approaches search. And as search marketers, that’s very, very valuable.
So, you’ll want to look at the whole featured snippets study that Eric produced. There’s a lot more detail in there; a lot more for you to learn.
Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore the Instagram co-founders’ departure from Facebook and updates from the fifth-annual Oculus Connect Conference. Our special guests [...]
The post Instagram Founders Leave Facebook appeared first on Social Media Examiner.
Want to increase your business’s exposure in social media feeds? Curious how word of mouth can help you overcome algorithm changes? To explore how talk triggers encourage customers to evangelize your business, I interview Jay Baer. More About This Show The Social Media Marketing podcast is designed to help busy marketers, business owners, and creators [...]
The post How to Get Customers to Evangelize Your Business appeared first on Social Media Examiner.
Google Ads offers a number of different bidding strategies. One of them, most commonly known as Target CPA bidding, is an automatic bidding strategy. Can changing your bidding strategy to Target CPA bidding hurt your conversions?
About Target CPA Bidding
Target CPA bidding with Google Ads is a “smart” bidding strategy, which means it’s automatically optimized by Google’s algorithms rather than your own micromanagement. Google uses an array of data sources, including your ads’ past performance, your goals, and general ad performance across similar keywords, to determine what its bidding strategy should be for your ads.
Each automatic bidding strategy focuses on optimizing your ads for a different metric. For example, you can optimize for conversions instead of costs, or for clicks over conversions. If you’re running an awareness campaign, you’d prefer a higher volume of clicks, versus a higher percentage but lower volume of conversions.
With Target CPA bidding, Google optimizes your bids to get as many conversions as possible, so long as those conversions come in at or below a given cost threshold. You could set a higher cost threshold and get more conversions, but if you aren’t willing to spend that much per conversion, you’ll get fewer conversions than you otherwise might.
Note that according to Google’s help center, Target CPA bidding optimizes for the average cost per action/conversion, rather than individual prices. If you’re optimizing for $1 per conversion and Google wins multiple 50-cent conversions, it then has the flexibility to take conversions costing $2 or more, so long as everything averages out to at most $1 each.
In practice, this distinction rarely matters. So long as the average cost stays where you want it, it doesn’t really matter whether you’re getting 100 conversions at exactly that price, or 50 at a lower price and 50 at a higher one. You’re still paying the same amount overall for the same number of conversions, within certain bounds.
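The averaging behavior described above can be sketched with a quick calculation; the conversion costs here are hypothetical, purely for illustration:

```python
# Hypothetical conversion costs (in dollars) won under a $1.00 target CPA.
# Individual conversions may cost more or less than the target; Google's
# optimization only promises that the *average* lands at or below it.
conversion_costs = [0.50, 0.50, 2.00, 1.00, 0.75, 1.25]

total_spend = sum(conversion_costs)
average_cpa = total_spend / len(conversion_costs)

print(f"Conversions: {len(conversion_costs)}")
print(f"Total spend: ${total_spend:.2f}")
print(f"Average CPA: ${average_cpa:.2f}")  # lands exactly at the $1.00 target
```

Even though two of those conversions cost $2.00 and $1.25 individually, the campaign still hit its $1.00 average.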
Target CPA Settings
When you’re setting up ads using Target CPA bidding, you have a handful of different settings you can specify to attempt to guide your ad performance.
First up, you have the target CPA itself. If you want to average $1 conversions, your target CPA should be $1. Again, this is an average, so you might get some 10-cent conversions and some $5 conversions, so long as the total cost of all conversions divided by their number averages out to $1.
Using a low target CPA can hurt conversions.
If you set $1 as your threshold, but the average cost per conversion in your niche is closer to $2, you’ll have far fewer conversions than you otherwise could. A target bid that’s too low will mean you’re being out-bid in the ad auction for your best converting audience.
Google will attempt to recommend an ideal target CPA when you set up your ads, based on historical data for similar ads you have run in the past. This target suggestion will be calculated based on the past few weeks of performance; data too much older than a month isn’t useful to current ad auctions.
Secondly, you can specify bid limits. You can set both a minimum and a maximum bid limit. For example, if you know that any conversions obtained with a bid under 10 cents are going to be worthless to you as a whole, you can set a higher minimum to eliminate those low bids. Conversely, if you know that extremely expensive conversions rarely end up worthwhile, you can set a maximum bid cap to cut those out.
Google does not recommend setting bid limits for automatic bidding strategies, because doing so restricts flexibility. If you set a $3 bid cap against your $1 target CPA, automatic bidding can no longer take those $5 conversions, even when doing so would keep your average under $1. This results in fewer conversions. Note that you must use portfolio bid strategies, rather than standard single-campaign strategies, to set bid limits at all.
You can also adjust your target CPA by device. This is essentially a prioritization system. If you know that your mobile users are your most valuable customers, you can focus on mobile users, with less priority given to desktop ad auctions. These adjustments are percentage-based, meaning you can adjust the value of a given platform up by any percentage you want, and down by a maximum of 100%. If you adjust a platform to -100%, those ads are eliminated. A -100% adjustment to mobile, for example, forces your ads to display only on desktop and tablet devices. Those are the only three categories: mobile, desktop, and tablet.
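The percentage-based device adjustments can be sketched like this; the base target and the adjustment values are hypothetical examples, not recommendations:

```python
# Hypothetical device bid adjustments applied to a base target CPA.
# Positive percentages raise the effective target for that device,
# negative percentages lower it, and -100% opts the device out entirely.
base_target_cpa = 1.00  # dollars

device_adjustments = {
    "mobile": 0.30,    # +30%: prioritize mobile users
    "desktop": -0.20,  # -20%: deprioritize desktop
    "tablet": -1.00,   # -100%: don't show ads on tablets at all
}

for device, adjustment in device_adjustments.items():
    effective_target = base_target_cpa * (1 + adjustment)
    if effective_target <= 0:
        print(f"{device}: excluded (-100% adjustment)")
    else:
        print(f"{device}: effective target CPA ${effective_target:.2f}")
```

With these numbers, mobile auctions run with a $1.30 effective target, desktop with $0.80, and tablets are excluded entirely.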
You are also able to choose to only pay for conversions, rather than pay for positioning in the ad ranks. Paying for conversions has its own slate of benefits and drawbacks, which you can read about here.
Does Switching to Target CPA Hurt Conversions?
To go back to the initial question, as posed in the title of this post, does switching to target CPA hurt your conversions?
The answer is “it depends”, and it depends entirely on the settings you use and the settings you’re changing away from.
For example, if you used manual CPC bidding prior to the change, you may lose many of the optimizations you had made and be reverted to a more average line of bidding. This can result in fewer conversions, or a lower average conversion value, because your optimizations are wiped.
On the other hand, if you’re switching from a manual bidding strategy that has been working quite poorly for you, the switch to automatic bidding can increase your conversions, as well as increasing the average conversion value.
In part, this depends on the target CPA settings you’ve chosen. If you set lower bid caps, a lower target CPA, or a higher required value for your target conversions, you are likely to get fewer conversions overall. Conversely, if automatic optimization is allowed to open up your bids and spend your money as effectively as possible, Google will likely get you more conversions than you were getting with your manual optimizations.
The Benefits of Target CPA Bidding
One of the biggest benefits of using an automatic bidding strategy is saving yourself both time and money. As long as you set bid caps to prevent over-spending, and you set a target CPA that sits firmly in the middle of your plausible conversion range, you should be able to get more conversions for the same budget as with manual bidding.
This is because Google’s algorithms can take in data on an ongoing basis and make adjustments to your bidding automatically throughout the day. They can even dynamically adjust bidding based on user performance from hour to hour. If you were to try to make these optimizations manually, you would be adjusting on the fly constantly throughout the day. It would be a full time job.
Chances are good that switching to an automatic bidding strategy will get you more conversions than using a manual strategy, while also saving you time. However, it may not save you money, and depending on your settings, it might not get you as many conversions.
In general with the automatic ad auction, if you increase your target CPA, you will get more conversions. It’s not even a complicated equation. If you have more money to spend, you’ll get more people coming in. It will, of course, increase your average cost per conversion.
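As a deliberately simplified sketch of that equation (Target CPA is an average, not a per-conversion cap, but the directional effect is the same), imagine a hypothetical pool of conversion opportunities at various costs:

```python
# Hypothetical pool of conversion opportunities in the ad auction, each
# tagged with the cost it would take to win it. Raising your target CPA
# makes more of the pool affordable, so you win more conversions -- at a
# higher average cost per conversion.
opportunity_costs = [0.40, 0.60, 0.90, 1.10, 1.50, 2.20, 3.00]

for target_cpa in (1.00, 2.00, 3.00):
    won = [c for c in opportunity_costs if c <= target_cpa]
    avg_cost = sum(won) / len(won)
    print(f"Target ${target_cpa:.2f}: {len(won)} conversions, "
          f"avg cost ${avg_cost:.2f}")
```

Raising the target from $1 to $3 in this toy model triples the conversions won while more than doubling the average cost of each.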
Achieving Success with Target CPA Bidding
If you’re interested in using target CPA bidding or another automatic strategy, I can give you some tips to help your initial forays be a success.
First up, how long should you experiment with a bidding style before judging its effectiveness? I generally recommend about a month. Thirty days will give you a good set of data, so long as you aren’t operating in a niche with very strong seasonality. Obviously, it’s hard to run a valid experiment when the two data sets aren’t comparable; a heavily Christmas-themed niche will have vastly different performance numbers in 30 days of November versus 30 days of June.
I also recommend that you be careful with split testing and incremental changes. Usually, incremental changes and optimizations help increase performance for ads. However, with automatic CPA bidding, Google considers both past and current performance to determine bids. If you make a change, Google will be considering data from both before and after the change when deciding on bidding strategies. You have to wait until the older data falls off to see how the change really impacted performance. This applies to ad targeting, ad copy, and even ad placement.
You should also be careful with setting your target CPA much lower than Google’s recommended target CPA. You will generally end up leaving a lot of conversions on the table if you do so. The other aspects of your ads will need to make up for the lack of budget, meaning they have to be incredibly compelling, which may not be plausible. You will have a lower cost per conversion, but also a lower volume of conversions.
Another decision you have to make is whether you want your ads to appear in search or in the display network. PPC Hero studied this and found that, usually, display ads performed better with target CPA bidding. A target CPA in a reasonable range usually ended up close to or slightly over the target average with display ads, while it ended up much higher – up to 106% more than the target average – with search ads. This may vary based on niche or performance, of course, but it seems consistent that display ads are cheaper.
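The overshoot figures cited above are straightforward to compute for your own campaigns; the target and actual CPA numbers below are hypothetical:

```python
# How far the actual average CPA landed from the target, as a percentage.
# A positive result means you paid more per conversion than you targeted.
def cpa_overshoot_percent(target_cpa: float, actual_cpa: float) -> float:
    return (actual_cpa - target_cpa) / target_cpa * 100

# Hypothetical campaigns: display lands near target, search runs well over,
# mirroring the pattern the PPC Hero study described.
print(f"Display: {cpa_overshoot_percent(10.00, 10.40):+.0f}%")
print(f"Search:  {cpa_overshoot_percent(10.00, 20.60):+.0f}%")
```

In this sketch, the display campaign ran 4% over target while the search campaign ran 106% over, the worst case the study reported.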
Paying Attention to Valid Metrics
One thing you need to keep in mind when you’re choosing a bidding strategy and various bid caps is what your targets should be.
Do you want to focus on a specific cost per action? You are free to do so with target CPA bidding, but be aware that you may end up with fewer conversions on average. If you care more about the cost per conversion than you do about the number of conversions, this can be a good way to balance out your advertising costs.
Do you want to focus on a specific value of conversions? Using other automatic bidding strategies, you can optimize for the value of a conversion. If you know that fewer conversions with higher average value is a better result for your company than a larger number of smaller value conversions, this can be a good option. This is particularly useful if user support or account maintenance is a huge money sink, and your high value customers are where your profits come from.
Do you want to focus on a specific volume of conversions? Setting a target CPA is likely to give you fewer conversions than keeping your CPA open and aiming for as many conversions as possible. You simply need to be aware that if your CPA rises too high, you may end up spending more on customer acquisition than you profit from those customers.
Always know which metrics you want to monitor before you start creating your ads. While you can always adjust your bids and bidding strategies later, it’s always best to have a foundation in mind before you begin.
The post Can Switching to CPA Bids in Google Ads Hurt Conversions? appeared first on Growtraffic Blog.
The concept of author authority (or “author rank”) in Google Search has a long and somewhat muddy history. To many of us, it makes sense that Google would value the “EAT” (Expertise, Authoritativeness, and Trustworthiness) of content creators in its quest to judge the quality of content. After all, wouldn’t you rather get medical advice from a properly credentialed M.D. than from some blogger who keeps WebMD open in a browser tab?
But does Google actually care about who created a content piece? And does it currently use that as an active factor in its search ranking algorithms?
TLDR Spoiler Alert! I’ll give you my conclusions right up top. For my supporting evidence, read the rest of the post! Bottom line: I don’t think we have sufficient evidence to say whether Google is using any kind of author authority in search. However, we do have evidence of an increasing (and renewed) interest by Google in identifying authors. If your content is meant to project the authority and reliability of your brand, then it makes sense for users to see that it’s written by credible subject matter experts. (Bonus: you’ll be all set if Google ever does crank up “Author Rank”!)
A Brief History of Author Authority and Google
Agent Rank / Author Rank
The origins of the concept of using a content creator’s authority and reputation go back to the agent rank patent granted to Google on July 21, 2009. The patent proposed a means of evaluating the contributors to various elements on a web page by determining the identity of each of the contributing “agents” with a “digital signature” and assigning a score to each one based on other content associated with it. (For an in-depth explanation of agent rank, see this post by Bill Slawski.)
The agent rank patent probably would have faded into the obscurity of Google history had it not been for a new Google project unveiled in 2011: Google Authorship.
On June 7, 2011, Google announced authorship markup for web search. Authorship allowed publishers and authors to create a digital signature for authors using the rel=”author” and rel=”me” link markup attributes.
Simply described, publishers could link an author byline to an author’s identifiable profile on another site, and authors could link back to the publications. That two-way linkage created a digital signature that would give Google more confidence about the identity of authors, and it created a connection with their content across the web.
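For illustration, that two-way linkage looked roughly like the following; the URLs, profile ID, and author name are placeholders, and this markup is no longer consumed by Google:

```html
<!-- On the publication's article page: the byline links to the author's profile -->
<a href="https://plus.google.com/PROFILE_ID" rel="author">Jane Example</a>

<!-- On the author's profile or personal site: a link back to the publication -->
<a href="https://publication.example.com/author/jane" rel="me">My articles at Example Publication</a>
```

With both links in place, Google could follow the loop in either direction and be confident the byline and the profile referred to the same person.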
While there were no initial benefits announced for using Authorship markup, there was a blockbuster tease in the last paragraph of the Google blog post: “We know that great content comes from great authors, and we’re looking closely at ways this markup could help us highlight authors and rank search results.”
It’s very rare that Google even hints at something it might use to influence search rankings, so the SEO community immediately sat up and paid attention (including this author).
Little did we know just how prepared Google was to move forward with putting Authorship into action. Just 22 days after the authorship blog post, Google unveiled Google+.
Google+ and Google Authorship
I’ve always believed that one of the chief reasons Google threw so much weight on Google+ in its early days was its hunger to be able to identify individuals on the web. This was a needed step for authorship to work, but it went far beyond that.
Google highly incentivized (some would say coerced or even forced) individuals to create Google+ profiles. For a span of time, it was almost impossible to use many Google services without one. I think the primary importance of this was for advertising. If Google had a trackable identity for nearly every web user, it would make targeting advertising to those users many times more accurate. That’s why I always laughed when people said Google+ was not “monetized”!
But this also meant that a lot of the web’s authors would have Google profiles. Now Google had the anchor it needed on the digital signature side to put Authorship into action.
Authorship Goes Live
Authorship leaped from something Google might use someday to an active part of search in August 2011, when Google’s Matt Cutts and Othar Hansen (then head of the Authorship project) released a YouTube video encouraging authors to connect their published content across the web to their Google+ profiles. Those profiles started showing a special section where authors could list links to their author bios or archive pages on the publications for which they wrote.
Aside from the useful instructions on how to implement Authorship markup, this video also introduced Google’s plan to begin showing the profile images of authors next to search results for their content. And Hansen also affirmed that “at some point” Google might use this markup to influence search rankings.
Soon after that, Authorship rich snippets started appearing in search.
The appearance and components of Authorship rich snippets changed constantly over Authorship’s three-year presence in the SERPs. The example above shows them at their most robust, with an author photo, byline, and the number of Google+ circles (followers).
At times, the byline was a link, leading to a dedicated search results page of that author’s content. For a few months, clicking on a Google link with Authorship and then clicking back to Google opened up a dropdown of more content by that author.
Authorship results were never guaranteed. They didn’t show for every author (even if the markup had been implemented), and even for those that got it, it didn’t show for every result for their content. There seemed to be some algorithmic thresholds for an Authorship snippet to show.
The Decline and Fall of Authorship
The first indication that Google was pulling back from Authorship came in early December 2013, when the number of Authorship snippets showing in search plunged overnight. Then, in June 2014, the author profile photos disappeared forever, leaving only the bylines.
On August 27, 2014, I got a phone call I never expected to get. Google’s John Mueller messaged me to ask if he could speak to me under a temporary NDA. After a quick electronic transfer of the paperwork, John called me to let me know that in 24 hours Google would be shutting down Authorship snippets in search.
John’s call was a much-appreciated courtesy, in recognition of the leadership and guidance I had shown to the Authorship community. It allowed Eric Enge and me to work through that night on a 2,500-word article about the rise and fall of Authorship, which Danny Sullivan published on Search Engine Land minutes after John Mueller made the official Google announcement on Google+.
Why Was Authorship Abandoned?
I won’t go into depth about the reasons (both known and speculative) why Google gave up on Authorship in search, because Eric and I covered them well in the aforementioned Search Engine Land article. However, here they are in brief:
Uneven adoption by authors and publishers. A study we did looked at Authorship markup adoption by authors on major publications, and it found less than a third of them had the necessary linkage. Google won’t use a signal that’s out of balance — one that potentially gives favoritism to some authors over others simply because they did some extra coding. Adoption also seemed skewed toward certain verticals, such as marketers (surprise!), and real estate and insurance agents.
Lack of value to searchers. During our phone call, John Mueller told me that their testing showed that Authorship snippets did not seem to be valued by searchers.
Emphasis on mobile-first. In interviews and Webmaster Hangouts post-Authorship, Mueller frequently blamed the rise of a mobile-first philosophy at Google for the death of Authorship, apparently meaning that the Authorship rich snippets didn’t fit well with that initiative.
In the end, whatever the reasons for the abandonment of the Authorship project, I think it was significant that it lasted as long as it did (three years). Most search experiments, especially those as prominently displayed on search pages as Authorship was, don’t last that long.
To me, that means despite the problems noted above, the idea of author authority in search was one Google thought had high potential.
Did Authorship Include Author Rank?
First, Author Rank was never a term used officially by Google. Instead, it was a popular concept in the search marketing community, based in part on Bill Slawski’s posts about the agent rank patent, but fueled by the hints from Google in 2011 that it “might someday” use Authorship as a search ranking signal.
Indeed, many in the search community simply assumed it must be in play, but they offered only anecdotal evidence. (“We got an authorship snippet for one post and three days later it went up three places in search!”) Eric Enge detailed the reasons why it was unlikely Google was using any sort of “Author Rank” back then, and I agreed with those reasons.
I believe Google knew it would have been premature to activate any kind of author authority in ranking algorithms during its tenure. However, I also believe it was in part a “training exercise” for any future use of author authority. I’m betting that Google learned a lot during the life of Authorship. More on that later.
Did Author Authority Die with Authorship?
Short answer: we don’t know.
But there are a number of hints and clues that indicate Google remains interested in the concept, if not as a ranking factor, then as an indicator of content quality and reliability. And we do know those things are more and more factoring into what Google likes to show users in search results.
Evidence That Author Authority Still Matters to Google
1. Content creators and the Google Search Quality Raters Guidelines
Renewed speculation about author authority began with the July 20, 2018, update to Google’s Search Quality Raters Guidelines (SQRG). The SQRG is the training document for the Search Quality Raters, who are the Google contractors tasked with evaluating actual web pages that could be served up by Google search in response to a given query.
The raters are told that the purpose of their work is “to evaluate search engine quality.” They do this by scoring the pages they are shown according to the overall quality of the page and how well it meets the needs of a typical user. Google sometimes uses the raters to test proposed changes to its search algorithms.
To get the highest score, a page must rate highly on Google’s three attributes of content quality: Expertise, Authoritativeness, and Trustworthiness (known by the acronym EAT).
SQRG update. A major addition to SQRG in the July 2018 update was the inclusion of “content creators” as part of the measure of content quality. Raters are told to look for a named creator associated with the content on a page, and then research where that creator shows up online. Specifically, raters are told to evaluate the EAT of the creator(s).
So at least as far as Google’s SQRG is concerned, the reputation and expertise of the creator(s) of a piece of content is an important component of the overall EAT rating of the piece.
In fact, the raters are told a low content creator score is enough to give the content piece itself a low-quality score.
It is important to note that Google has been very clear that the work of these raters does not affect particular search results. In other words, even though the raters are given real-world examples of content that shows up in search for a given query, their ratings are never used to affect the results for that query. However, as noted above, their findings may be used to improve the overall algorithm.
Also, Google has stated clearly that something appearing in the SQRG does not necessarily mean it is a direct ranking factor. However, at the same time, Google recommends we read these guidelines to gain a good idea of what it wants to see in its search results (which is why it now releases them publicly).
Google pushback? In a live YouTube video on August 21, Google’s John Mueller was asked about author reputation in search. The question was inspired by speculation by some prominent SEOs (such as Marie Haynes) that author reputation may have been a factor in the August 1 Google update that affected many “Your Money or Your Life” (YMYL) content publishers.
This speculation was fueled by a tweet from Google Search Liaison Danny Sullivan suggesting that SEOs affected by the update review the SQRGs, which of course prominently mentioned creator reputation.
However, John Mueller’s response to the question about author reputation being a factor in the update seems, at first glance, to contradict that idea:
“I wouldn’t look at the Quality Rater Guidelines as something our algorithms are looking at explicitly and checking out the reputation of all authors and then using that to rank your websites.” – John Mueller
To me, a significant word in this quote is “all.” I tweeted the following:
“Glad for this confirmation of what I was sure was the case. IF Google is using any author EAT in search, I’d imagine it is with a limited set of well-known creators (starting with KP [Knowledge Panel] entities) and used as what I call a “confirmatory signal.” By confirmatory signal, I mean it is not a ranking signal in and of itself, but if the algo is comparing two sites with content that demands expertise, and all other things are equal (VERY hypothetical situation), the site using a known high-EAT author might get the nudge.”
For what it’s worth, John Mueller himself liked my tweet.
Related: Dave Davies included a nice summary of Google author-related patents in this post.
Significance: While we still can’t say there is at present some sort of “creator quality score” in Google’s search rankings, the major change to the SQRG is our strongest indication to date that who created a piece of content matters to Google.
2. Machine-Readable Entity IDs
As explained by Mike Arnesen in his excellent post Leveraging Machine-Readable Entity IDs for SEO, Machine-Readable Entity IDs (MREIDs) are unique character strings that identify a particular entity anywhere on the web. An entity is any unique person, place, thing, or concept. So, Franklin Delano Roosevelt, the 32nd President of the United States, is an entity, as are New York City and transcendentalism.
MREIDs are necessary for search, because the nouns we use to describe entities are often ambiguous. For example, even though my name is fairly uncommon, there is another Mark Traphagen whom people search for because he is a prominent intellectual property attorney. A machine can’t tell the difference between that Mark Traphagen and me by our names, but if we each have a unique code associated with us, then a machine can tell us apart.
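A toy sketch of why unique IDs matter for disambiguation; the ID strings below are invented for illustration and are not real MREIDs:

```python
# Two distinct entities share the same name string. A lookup by name is
# ambiguous; a lookup by entity ID is not. (IDs are made up for this sketch.)
entities = {
    "mreid/0001": {"name": "Mark Traphagen", "occupation": "digital marketer"},
    "mreid/0002": {"name": "Mark Traphagen", "occupation": "intellectual property attorney"},
}

def lookup_by_name(name: str) -> list:
    """Return every entity ID whose name matches -- possibly more than one."""
    return [eid for eid, e in entities.items() if e["name"] == name]

# A name maps to multiple candidate entities...
print(lookup_by_name("Mark Traphagen"))  # ['mreid/0001', 'mreid/0002']

# ...but an ID maps to exactly one.
print(entities["mreid/0001"]["occupation"])  # digital marketer
```

Once every author resolves to an unambiguous ID like this, a machine can associate content across the web with the right person, no markup from the author required.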
As Mike explains in his article, Google originally sourced MREIDs from entries in Freebase, a massive database of entities that Google acquired in 2010. But these days, Google uses many sources to find and tag entities with MREIDs, if they’re significant enough to merit one. Based on Bill Slawski’s evaluations of Google patents, it appears that Google is likely already using MREIDs for many search features, including Google Trends, Google Lens, and Google Reverse Image Search.
Significance: If Google wants to identify and evaluate the EAT of authors on the web, something like MREIDs would be absolutely necessary. As I mentioned above, one of the fail points of Google Authorship was its dependence on authors and publishers voluntarily coding in the necessary connections. MREIDs allow Google to find, associate, and disambiguate entities such as authors at the scale of the web.
3. Interesting Finds Author Boxes
Early in 2018, Google introduced Interesting Finds boxes for some mobile search queries. These expandable boxes displayed content relevant to the search query that might not show in the top 10 traditional results but might still be of interest to the searcher.
For a brief period in August 2018, I noticed Google showing Interesting Finds boxes for searches of the names of some web authors, including yours truly. After about two weeks, I no longer saw the boxes for the authors who were getting them, which led me to believe this was a search test by Google.
Here’s what my Interesting Finds box looked like:
Clicking on “10+ more stories” at the bottom of the box displayed a long list of content, all of it authored by me.
Here’s where it gets interesting. I do not currently have an MREID with Google, nor does a Knowledge Panel display for my name. But clearly Google had high confidence that I was the author of all of the articles displayed.
Why were these boxes shown only for some authors? I believe that Google chose to show them in cases where people searched for a name that created some ambiguity (in my case there is another Mark Traphagen who is a well-known attorney), but where the prior behavior of searchers showed they were often looking for content by or about a particular author.
Significance: Even though this was a brief test in mobile search and no longer shows, it is another indication that Google thinks content authors might be significant for search.
4. That Gary Illyes Tweet
This isn’t recent, but it was a significant moment that kept me on the path of watching Google’s behavior toward authors even after the death of Google Authorship. At Google Webmaster Trends Analyst Gary Illyes’s session at SMX Advanced 2016, I asked him whether Google was still paying any attention to the rel=author tags we had coded in during the days of Authorship. Michelle Robbins recorded his response on Twitter (@methode is Gary’s Twitter handle):
from @methode “we are not using authorship at all anymore…we are smarter than that.” but thanks for giving them all that data, SEOs. #smx
— MichelleRobbins (@MichelleRobbins) June 23, 2016
There are two significant concepts in that brief response.
“We are smarter than that” – an indication that Google was developing machine intelligence methods to identify and track authors at the scale and pace of the web.
“Thanks for all that data” – something many of us tracking Authorship had speculated on was that Google had used the Authorship experiment as a training set of data for a supervised machine learning program that would learn how to identify and track authors, as well as for data on how searchers respond to indications of author reputation.
Again, we can’t leap from this to say conclusively that Google is using such an algorithm at this point, or if they are, to what extent. However, it is another strong indicator that they are working on such a project.
Summing It Up
So what can we say about Google Search and author identity and reputation today? Here are my takeaways:
1. Something is in the works. While I can’t definitively state that author authority and reputation has any bearing on search results today, I am convinced that it continues to be something that Google is not only interested in, but is actively working on, and perhaps even testing in limited ways.
2. Not all authors. While Google Authorship was open to any author or publisher who bothered to implement the code, I believe one of the things Google discovered during the experiment was that not all authors matter. That sounds harsh, but it’s true. It’s likely that people are only swayed to any extent by who created a piece of content if they happen to recognize the author and already understand that author’s significance.
3. Not all content. In a similar fashion, I wouldn’t expect that Google would care about who authored every single piece of content on the web. In recent years, Google has given us many indications that YMYL (Your Money or Your Life) content merits significantly more scrutiny than other types. This is content that could affect either people’s finances or their well-being. It matters a lot whether investment advice is written by a trusted financial advisor, and even more if medical information is written by a legitimate doctor or scientist.
4. The company you keep. Even in cases where users might not care or pay attention to who created a piece of content, the reputation and relevance of the author still might matter. This is where my idea of confirmatory signals for search comes in. Again, a confirmatory signal is not a direct ranking factor in and of itself. Rather, it could be used to “tip the scales” so-to-speak where Google needs some extra confirmation that a piece of content is high quality.
The most important takeaway though is that whether or not author authority affects search in any way now, it’s still a good idea to apply two things to the content published on your site:
Seek out the best possible authors for your content. Don’t let just anyone write for you. Before accepting a content submission, check out where and what the author has already published. Are the publications and topics relevant to what this author is writing for you? Is the content high quality, filled with accurate and significant information and original thought? What do others say about this author?
Clearly identify the authors of your content. Give each author their own bio page on your site with important information about their qualifications and experience. Link your article bylines to that bio page, and link out from it to other places where the author has published.
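Beyond a visible byline and bio page, you can also make author identity machine-readable with schema.org structured data. The snippet below is an illustrative sketch only; the names and URLs are hypothetical and would be replaced with your own author details:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article title",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://example.com/authors/jane-example",
    "sameAs": [
      "https://twitter.com/janeexample",
      "https://www.linkedin.com/in/janeexample"
    ]
  }
}
```

The `url` property points to the author’s bio page on your site, while `sameAs` links out to other places the author has an established presence, mirroring the advice above.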
Here at Perficient Digital we go above and beyond in vetting the authors we use to create content for our clients. We do that for many reasons, but one benefit we have found is that it makes our content much more likely to be accepted for publication when we pitch it to third-party publishers.
But why should you care about your authors if it isn’t (yet) certain that Google is using authorship as a search factor? For at least two reasons:
It’s the right thing to do for your visitors and your brand. People form impressions of your brand based on the quality of your content, and one way to ensure higher quality content is to only use the best, most qualified authors.
It future-proofs your SEO. As I hope I’ve demonstrated, Google is showing that they remain interested in author expertise, authority, and trustworthiness. Even if they are not using that as a search factor now, I expect they will be ramping it up in the years to come. Using high quality authors on your site now will make you ready if Google ever flips that switch (or more likely, turns up the knob a bit).
SAN JOSE, CA – Xilinx Developer Forum (XDF) – Xilinx, Inc. (NASDAQ: XLNX), the leader in adaptive and intelligent computing, today launched Alveo, a portfolio of powerful accelerator cards designed to dramatically increase performance in industry-standard servers across cloud and on-premise data centers. With Alveo, customers can expect breakthrough performance improvement at low latency when running key data center applications like real-time machine learning inference as well as video processing, genomics, and data analytics, among others. The Alveo U200 and Alveo U250 are powered by the Xilinx® UltraScale+ FPGA and are available now for production orders. And like all Xilinx technology, customers can reconfigure the hardware, enabling them to optimize for shifting workloads, new standards and updated algorithms without incurring replacement costs.
Alveo accelerator cards deliver significant performance advantages over a broad set of applications. For machine learning, the Alveo U250 increases real-time inference throughput by 20X versus high-end CPUs, and more than 4X for sub-two-millisecond low-latency applications versus fixed-function accelerators like high-end GPUs*. Moreover, Alveo accelerator cards reduce latency by 3X versus GPUs, providing a significant advantage when running real-time inference applications.** And some applications like database search can be radically accelerated to deliver more than 90X, versus CPUs.***
Alveo is supported by an ecosystem of partners and OEMs who have developed and qualified key applications in AI/ML, video transcoding, data analytics, financial risk modeling, security, and genomics. Fourteen ecosystem partners have developed applications for immediate deployment. They are Algo-Logic Systems Inc, Bigstream, BlackLynx Inc., CTAccel, Falcon Computing, Maxeler Technologies, Mipsology, NGCodec, Skreens, SumUp Analytics, Titan IC, Vitesse Data, VYUsync and Xelera Technologies. Additionally, top OEMs are collaborating with Xilinx to qualify multiple server SKUs with Alveo accelerator cards including Dell EMC, Fujitsu Limited and IBM with more to come.
“The launch of Alveo accelerator cards further advances Xilinx’s transformation into a platform company, enabling a growing ecosystem of application partners that can now innovate faster than ever before,” said Manish Muthal, vice president, data center, Xilinx. “We are seeing strong customer interest in Alveo accelerators and are delighted to partner with our application ecosystem to deliver production-deployable solutions based on Alveo to our customers.”
“FPGA-based acceleration solutions in modern data centers are gaining popularity as accelerators that can be programmed and reprogrammed easily as users see fit,” said Ravi Pendekanti, senior vice president, product management and marketing, Dell EMC Servers & Infrastructure Systems. “Our collaboration with Xilinx to create best-in-class acceleration solutions will benefit customers in a range of applications from video content streaming to risk management and financial services.”
“Fujitsu congratulates Xilinx on the announcement of its new board level products and solutions. With 5G use cases for applications such as autonomous driving, telemedicine, and virtual reality, the range of vRAN applications based on the COTS servers is expected to expand considerably in the future,” said Mr. Masaki Taniguchi, vice president, deputy head of Network Products, Fujitsu Limited. “Fujitsu Limited and Fujitsu Laboratories Ltd. have been collaborating with Xilinx to jointly validate 3X performance on critical software functions in the 4G vRAN system. Fujitsu looks forward to creating powerful solutions by combining its x86 servers and Xilinx adaptable acceleration boards.”
“The launch of Xilinx’s standard acceleration board products is an exciting addition to a rapidly emerging technology arena focused on fueling performance-hungry applications,” said Keith McAuliffe, Vice President and Chief Technologist, Servers Global Business Unit, HPE. “We look forward to collaborating with Xilinx to bring their technology to market and enable our customers to create breakthrough business value.”
“With the IBM Power Systems AC922 server, IBM has already demonstrated that we have the best platform for enterprise AI training,” said Steve Sibley, vice president of IBM Cognitive Systems. “IBM sees inference as a key component of a complete, end-to-end AI platform, and POWER9’s leadership I/O bandwidth for data movement makes it an ideal pairing with Xilinx’s new Alveo U200 accelerator card to bring inference to the enterprise.”
Xilinx® Alveo U200 and U250 accelerator cards are available now, starting at $8,995 (USD). Alternatively, you can try them out first in the Nimbix cloud.
Xilinx develops highly flexible and adaptive processing platforms that enable rapid innovation across a variety of technologies – from the endpoint to the edge to the cloud. Xilinx is the inventor of the FPGA, hardware programmable SoCs and the ACAP, designed to deliver the most dynamic processor technology in the industry and enable the adaptable, intelligent and connected world of the future. For more information, visit www.xilinx.com.
There was a time when it was a simple choice: dedicated or shared. That decision is still there to be made, but for many, there are additional considerations about what’s under the hood. What is containerisation and would it suit your needs? What kind of server hosting is required to run your containers? And what is “serverless”? Fasthosts offers a demystifying overview of the different options available, how they could be used, and who is likely to use them.
Traditionally, you would run an application via a web hosting package or dedicated server with an operating system and a complete software stack. But now, there are other options.
Containerisation, or operating-system-level virtualisation, uses a platform such as Docker to run isolated instances known as containers. A container is a package of software that includes everything needed for a specific application, functioning like a separate server environment. Sharing a single OS kernel, multiple containers can run on one server or virtual machine (VM) without affecting each other in any way. To the user, a container feels like its own unique environment, irrespective of the host infrastructure.
Containers can perform tasks that would otherwise require a whole server or VM, while consuming far fewer resources. They’re lightweight and agile, allowing them to be deployed, shut down and restarted at a moment’s notice, and easily transferred across hardware and environments. Because containers are standalone packages, they behave reliably and consistently for everyone, all the time, regardless of the local configuration.
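To make the idea of a “standalone package” concrete, here is a minimal Dockerfile sketch. The base image, file names and application are hypothetical, assumed purely for illustration:

```dockerfile
# Start from a small official Python base image
FROM python:3.11-slim
WORKDIR /app
# Install only the dependencies this application needs
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code into the image
COPY app.py .
# The container runs this single process; everything it needs is inside the image
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, the resulting container behaves the same on any host with a container runtime, which is exactly the portability described above.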
When we talk about container orchestrators, you’ll find that Kubernetes is frequently mentioned. There are several out there, but Kubernetes is the leading container orchestration tool, filling a vital role for anyone who needs to run large numbers of containers in a production environment (on one or more dedicated servers, for example). Kubernetes automates the deployment, scheduling and management of containerised applications. It automatically scales containers across multiple nodes (servers or VMs) to meet current demand and perform rollouts seamlessly, while also enabling containerised applications to self-heal: if a node fails, Kubernetes restarts, replaces or reschedules containers as required.
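The self-healing and scaling behaviour described above is driven by declarative configuration. As an illustrative sketch (all names and the image are hypothetical), a minimal Kubernetes Deployment manifest might look like this:

```yaml
# Kubernetes keeps 3 replicas of this container running at all times,
# restarting or rescheduling them if a container crashes or a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                 # desired number of container instances
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

You declare the desired state (three replicas) and Kubernetes continuously works to keep the cluster matching it; scaling is a matter of changing the `replicas` value.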
As with traditional web hosting solutions, you can choose whether to run your containers in a shared environment, where you will likely get the best value for money if you have relatively small workloads that will not fully utilise the resources of a whole cluster of nodes (VMs or servers). But if you have larger workloads or regulatory obligations to meet, a dedicated environment, or even your own cluster, may be required.
In serverless computing, the orchestrator will automatically stop, start and scale the container on the infrastructure best placed to handle the demand at that time. This means that the developer has even less to be concerned about; code runs automatically, with no need to manually configure the infrastructure. Costs are also minimised, with all instances of a container automatically shut down when demand for it disappears.
“Microservices” is another term often used when discussing containers. Simply put, a traditional application is built as one big block, with a single file system, shared databases and a common language across its various functions. A microservices application looks the same to users but differs behind the scenes: its functions are broken down into individual components, for example a product service, a payment service, and a customer review service. Container and orchestration technologies provide the platforms and management tools for implementation, enabling microservices to be lightweight and run anywhere. Microservices can technically be built on traditional server hosting, but the practical reality of creating and maintaining a full microservices architecture demands a container platform like Docker and an orchestration tool like Kubernetes.
Fasthosts remains focussed on building on these systems, with container technology firmly placed as the platform of the future.
Product manager Gavin Etheridge is confident about the future of containers: “Our CloudNX Apps & Stacks services are already built on these technologies, and we continue to take what we’ve learned and apply it to all our products. We use these technologies internally – we’ve been the guinea pigs ourselves – and our underlying platforms have become more resilient, with the additional benefits of self-healing. In the years to come, the development and adoption of containers will likely continue to accelerate.”