Corporate Blogs

Build Your Next Online Course Using Course Maker Pro

WP Engine -

Today, WordPress is being used in a multitude of ways—from content hubs to e-commerce websites and everything in between. More users are turning to WordPress to power their digital experiences than ever before. One of the reasons WordPress has become so popular across various use cases is the massive catalog of pre-built, industry-specific themes users can… The post Build Your Next Online Course Using Course Maker Pro appeared first on WP Engine.

New Automation Features In AWS Systems Manager

Amazon Web Services Blog -

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With this new release, it just got even more powerful. We have added capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation — making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button. When I click on this, I am asked what document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button. This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts. The console gives me two options for editing a document: a YAML editor, or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step. So, let’s take a look by building and running a simple automation.

When I create a document using the Builder tool, the first thing required is a document name. Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does; this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems. Optionally, I am asked to provide parameters for my document.
These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user’s permissions for most of its actions, along with the option of providing an Assume Role. However, for documents with the Run a Script action, the role is required when the script is calling any AWS API. You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document, which gives anyone executing it the ability to provide a different one. You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it what you like; just make sure that the name you give the parameter is what you put in the double parentheses syntax, e.g. {{yourParamName}}.)

Once my document is set up, I then need to create the first step of my document. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. Still, in this example, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description, and this description can also include Markdown. You need to select an Action Type; for this example, I will choose Run a script. With the Run a script action type, I get to run a script in Python or PowerShell without requiring any infrastructure to run it. It’s important to realise that this script will not be running on one of your EC2 instances. The scripts run in a managed compute environment.
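Expressed in the document's YAML, the parameter block and Assume role wiring described above might look like the following sketch. The field names follow the Systems Manager automation document schema as I understand it, so verify them against the documentation; the default value is a placeholder.

```yaml
schemaVersion: '0.3'
description: Launch an EC2 instance and wait until it is healthy
# Wire the global Assume role property to a user-supplied parameter
assumeRole: '{{ assumeRole }}'
parameters:
  imageId:
    type: String
    description: AMI to launch
  tagValue:
    type: String
    description: Value for the tag applied to the instance
  instanceType:
    type: String
    default: t2.micro
  assumeRole:
    type: String
    description: ARN of the IAM role used when scripts call AWS APIs
```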
You can configure an Amazon CloudWatch log group on the preferences page to send outputs to a log group of your choice. In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script. To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double parentheses syntax to reference the global parameters, e.g. {{imageId}}. In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step. This is an example of how the execution steps can be chained together to use outputs of earlier steps.

```python
def poll_instance(events, context):
    import boto3
    import time
    ec2 = boto3.client('ec2')
    instance_id = events['InstanceId']
    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)
    instance_status = "null"
    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])
        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue
        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']
        print('[INFO] Polling get status of the instance', instance_status)
        if instance_status == 'ok':
            break
        time.sleep(10)
    return {'Status': instance_status, 'InstanceId': instance_id}
```

To pass the parameters into the second step, notice that I use the double parentheses syntax to reference the output of a previous step.
The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of the output parameter, payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python. You will note on the first line that I have to install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services. All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance. It’s important to note that I have to pass the parameters from the LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation. From this screen, I click on the Execute automation button and then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab. If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve.

When I click Next, I am asked to provide any input parameters for my document. I add the imageId and the ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances. Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and whether each step succeeded or failed.
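The step chaining described above could be sketched in the document's YAML as follows. The step and handler names are the ones used in this walkthrough, but the exact field names (Runtime, Handler, InputPayload, output Selector) are my reading of the aws:executeScript action schema, so treat this as a sketch to verify against the Systems Manager documentation.

```yaml
mainSteps:
  - name: launchEc2Instance
    action: aws:executeScript
    inputs:
      Runtime: python3.6
      Handler: launch_instance
      # Wire the document's global parameters into the script
      InputPayload:
        imageId: '{{ imageId }}'
        tagValue: '{{ tagValue }}'
        instanceType: '{{ instanceType }}'
    outputs:
      - Name: payload
        Selector: $.Payload
        Type: StringMap
  - name: pollInstance
    action: aws:executeScript
    inputs:
      Runtime: python3.6
      Handler: poll_instance
      # The previous step's output becomes this step's input
      InputPayload:
        InstanceId: '{{ launchEc2Instance.payload }}'
```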
I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with tag LaunchedBySsmAutomation. These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away. Happy Automating! — Martin;

2019 Fall Hackathon: Propelling WP Engine Forward, Faster

WP Engine -

WP Engine, like any engine, needs fuel to press ahead. Innovation is the spark that ignites and propels us forward faster, and to keep that ingenious spark lit, we actively foster a creative and collaborative environment at WP Engine where cutting-edge ideas can take root and flourish. Our bi-annual Hackathons play an integral role in… The post 2019 Fall Hackathon: Propelling WP Engine Forward, Faster appeared first on WP Engine.

Impressions From WordCamp US 2019

InMotion Hosting Blog -

As a longtime sponsor of open-source projects, InMotion Hosting was thrilled to have the opportunity to sponsor WordCamp US 2019 in St. Louis, Missouri. WordCamp US is the year-end WordPress meetup for North America. There were over 2,000 attendees – including Cody, Harry, and Joseph from InMotion Hosting. Each of them attended an expert speaker session, and we wanted to take the opportunity to share their highlights: Cody Murphy On Marketing and Automation I attended a fascinating session about automation by Beka Rice, Head of Product at SkyVerge. Continue reading Impressions From WordCamp US 2019 at The Official InMotion Hosting Blog.

Accelerate SQL Server Always On Deployments with AWS Launch Wizard

Amazon Web Services Blog -

Customers sometimes tell us that while they are experts in their domain, their unfamiliarity with the cloud can make getting started more challenging and take more time. They want to be able to quickly and easily deploy enterprise applications on AWS without needing prior tribal knowledge of the AWS platform and best practices, so as to accelerate their journey to the cloud.

Announcing AWS Launch Wizard for SQL Server

AWS Launch Wizard for SQL Server is a simple, intuitive, and free-to-use wizard-based experience that enables quick and easy deployment of high availability SQL solutions on AWS. The wizard walks you through an end-to-end deployment of Always On Availability Groups using prescriptive guidance. After you answer a few high-level questions about the application, such as its required performance characteristics, the wizard takes care of identifying, provisioning, and configuring matching AWS resources, such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, and an Amazon Virtual Private Cloud. Based on your selections, the wizard presents you with a dynamically generated estimated cost of deployment – as you modify your resource selections, you can see an updated cost assessment to help you match your budget. Once you approve, AWS Launch Wizard for SQL Server provisions these resources and configures them to create a fully functioning, production-ready SQL Server Always On deployment in just a few hours. The created resources are tagged, making it easy to identify and work with them, and the wizard also creates AWS CloudFormation templates, providing you with a baseline for repeatable and consistent application deployments.
Subsequent SQL Server Always On deployments become faster and easier as AWS Launch Wizard for SQL Server takes care of the required infrastructure on your behalf, determining the resources to match your application’s requirements, such as performance, memory, and bandwidth (you can modify the recommended defaults if you wish). If you want to bring your own SQL Server licenses, or have other custom requirements for the instances, you can also select your own custom AMIs, provided they meet certain requirements (noted in the service documentation).

Using AWS Launch Wizard for SQL Server

To get started with my deployment, in the Launch Wizard Console I click the Create deployment button to start the wizard and select SQL Server Always On. The wizard requires an AWS Identity and Access Management (IAM) role granting it permissions to deploy and access resources in my account. The wizard will check to see if a role named AmazonEC2RoleForLaunchWizard exists in my account. If so, it will be used; otherwise, a new role will be created. The new role will have two AWS managed policies, AmazonSSMManagedInstanceCore and AmazonEC2RolePolicyforLaunchWizard, attached to it.

Note that this one-time setup process will typically be performed by an IAM administrator for your organization. However, the IAM user does not have to be an administrator; the CreateRole, AttachRolePolicy, and GetRole permissions are sufficient to perform these operations. After the role is created, the IAM administrator can delegate the application deployment process to another IAM user, who in turn must have the AWS Launch Wizard for SQL Server IAM managed policy, AmazonLaunchWizardFullaccess, attached.
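For the IAM user who performs that one-time role setup, a minimal policy covering the three permissions mentioned might look like the sketch below. The account ID in the resource ARN is a placeholder; scope it to your own account, and verify the exact actions your organization requires.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:GetRole"
      ],
      "Resource": "arn:aws:iam::111122223333:role/AmazonEC2RoleForLaunchWizard"
    }
  ]
}
```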
With the application type selected, I can proceed by clicking Next to start configuring my application settings, beginning with a deployment name and, optionally, an Amazon Simple Notification Service (SNS) topic that AWS Launch Wizard for SQL Server can use for notifications and alerts. In the connectivity options, I can choose to use an existing Amazon Virtual Private Cloud or have a new one created. I can also specify the name of an existing key pair (or create one). The key pair will be used if I want to RDP into my instances or obtain the administrator password. For a new Virtual Private Cloud, I can also configure the IP address or range to which remote desktop access will be permitted.

Instances launched by AWS Launch Wizard for SQL Server will be domain-joined to an Active Directory. I can select an existing AWS Managed AD or an on-premises AD, or have the wizard create a new AWS Managed Directory for my deployment. The final application settings relate to SQL Server. This is also where I can specify a custom AMI to be used if I want to bring my own SQL Server licenses or have other customization requirements. Here I’m just going to create a new SQL Server service account and use an Amazon-provided image with license included. Note that if I choose to use an existing service account, it should be part of the Managed AD in which you are deploying.

Clicking Next takes me to a page to define the infrastructure requirements of my application in terms of CPU, network performance, and memory. I can also select the type of storage (solid state vs. magnetic) and required SQL Server throughput. The wizard will recommend the resource types to be launched, but I can also override it and select specific instance and volume types, and I can set custom tags to apply to the resources that will be created. The final section of this page shows me the cost estimate based on my selections.
The data in this panel is dynamically generated based on my prior selections, and I can go back and forth in the wizard, tuning my selections to match my budget. When I am happy with my selections, clicking Next takes me to the wizard’s final Review page, where I can view a summary of my selections and acknowledge that AWS resources and AWS Identity and Access Management (IAM) permissions will be created on my behalf, along with the estimated cost as shown in the estimator on the previous page. My final step is to click Deploy to start the deployment process. Status updates during deployment can be viewed on the Deployments page, with a final notification to inform me of completion.

Post-deployment Management

Once my application has been deployed, I can manage its resources easily. First, I can navigate to Deployments on the AWS Launch Wizard for SQL Server dashboard and, using the Actions dropdown, jump to the Amazon Elastic Compute Cloud (EC2) console, where I can manage the EC2 instances, EBS volumes, Active Directory, and so on. Or, using the same Actions dropdown, I can access SQL Server via the remote desktop gateway instance. If I want to manage future updates and patches to my application using AWS Systems Manager, another Actions option takes me to the Systems Manager dashboard. I can also use AWS Launch Wizard for SQL Server to delete deployments performed using the wizard; it will roll back all of the AWS CloudFormation stacks that the service created.

Now Available

AWS Launch Wizard for SQL Server is generally available and you can use it in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), and EU (Stockholm).
Support for the AWS Regions in China and for the GovCloud Region is in the works. There is no additional charge for using AWS Launch Wizard for SQL Server; you pay only for the resources it creates. — Steve

AWS Data Exchange – Find, Subscribe To, and Use Data Products

Amazon Web Services Blog -

We live in a data-intensive, data-driven world! Organizations of all types collect, store, process, and analyze data and use it to inform and improve their decision-making processes. The AWS Cloud is well-suited to all of these activities; it offers vast amounts of storage, access to any conceivable amount of compute power, and many different types of analytical tools. In addition to generating and working with data internally, many organizations generate and then share data sets with the general public or within their industry. We made some initial steps to encourage this back in 2008 with the launch of AWS Public Data Sets (Paging Researchers, Analysts, and Developers). That effort has evolved into the Registry of Open Data on AWS (New – Registry of Open Data on AWS (RODA)), which currently contains 118 interesting datasets, with more added all the time.

New AWS Data Exchange

Today, we are taking the next step forward and launching AWS Data Exchange. This addition to AWS Marketplace contains over one thousand licensable data products from over 80 data providers. There’s a diverse catalog of free and paid offerings, in categories such as financial services, health care / life sciences, geospatial, weather, and mapping. If you are a data subscriber, you can quickly find, procure, and start using these products. If you are a data provider, you can easily package, license, and deliver products of your own. Let’s take a look at Data Exchange from both vantage points, and then review some important details.

First, a few important terms:

Data Provider – An organization that has one or more data products to share.
Data Subscriber – An AWS customer that wants to make use of data products from Data Providers.
Data Product – A collection of data sets.
Data Set – A container for data assets that belong together, grouped by revision.
Revision – A container for one or more data assets as of a point in time.
Data Asset – The actual data, in any desired format.

AWS Data Exchange for Data Subscribers

As a data subscriber, I click View product catalog and start out in the Discover data section of the AWS Data Exchange Console. Products are available from a long list of vendors. I can enter a search term, click Search, and then narrow down my results to show only products that have a Free pricing plan. I can also search for products from a specific vendor that match a search term and have a Free pricing plan.

The second one looks interesting and relevant, so I click on 5 Digit Zip Code Boundaries US (TRIAL) to learn more. I think I can use this in my app and want to give it a try, so I click Continue to subscribe. I review the details, read the Data Subscription Agreement, and click Subscribe. The subscription is activated within a few minutes, and I can see it in my list of Subscriptions.

Then I can download the set to my S3 bucket and take a look. I click into the data set and find the Revisions. I click into the revision, and I can see the assets (containing the actual data) that I am looking for. I select the asset(s) that I want and click Export to Amazon S3. Then I choose a bucket and click Export to proceed. This creates a job that will copy the data to my bucket (extra IAM permissions are required here; read the Access Control documentation for more info). The jobs run asynchronously and copy data from Data Exchange to the bucket. Jobs can be created interactively, as I just showed you, or programmatically.

Once the data is in the bucket, I can access and process it in any desired way. I could, for example, use an AWS Lambda function to parse the ZIP file and use the results to update an Amazon DynamoDB table. Or, I could run an AWS Glue crawler to get the data into my Glue catalog, run an Amazon Athena query, and visualize the results in an Amazon QuickSight dashboard.
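As noted above, jobs can also be created programmatically. The sketch below builds the request payload for an export job and submits it; the field names are my reading of the AWS Data Exchange CreateJob API, so verify them against the current SDK documentation, and all IDs are placeholders. The boto3 "dataexchange" client is passed in as a parameter so the payload builder stays testable on its own.

```python
def build_export_details(data_set_id, revision_id, asset_id, bucket, key):
    # Request details for a job that copies one asset to an S3 bucket.
    # Field names assumed from the AWS Data Exchange CreateJob API.
    return {
        "ExportAssetsToS3": {
            "DataSetId": data_set_id,
            "RevisionId": revision_id,
            "AssetDestinations": [
                {"AssetId": asset_id, "Bucket": bucket, "Key": key}
            ],
        }
    }

def export_asset(client, data_set_id, revision_id, asset_id, bucket, key):
    # client is a boto3 "dataexchange" client; jobs are created, started
    # explicitly, and then run asynchronously.
    details = build_export_details(data_set_id, revision_id, asset_id, bucket, key)
    job = client.create_job(Type="EXPORT_ASSETS_TO_S3", Details=details)
    client.start_job(JobId=job["Id"])
    return job["Id"]
```

Once the job completes, the asset lands in the chosen bucket and key, just as in the console flow.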
Subscriptions can last from 1 to 36 months, with an auto-renew option; subscription fees are billed to my AWS account each month.

AWS Data Exchange for Data Providers

Now I am going to put on my “data provider” hat and show you the basics of the publication process (the User Guide contains a more detailed walk-through). In order to be able to license data, I must agree to the terms and conditions, and my application must be approved by AWS. After I apply and have been approved, I start by creating my first data set. I click Data sets in the navigation, and then Create data set. I describe my data set, with the option to tag it, and then click Create. Next, I click Create revision to create the first revision of the data set. I add a comment, with the option to tag the revision, before clicking Create.

I can copy my data from an existing S3 location, or I can upload it from my desktop. I choose the second option, select my file, and it appears as an Imported asset after the import job completes. I review everything and click Finalize for the revision. My data set is ready right away, and now I can use it to create one or more products. The console outlines the principal steps, and I can set up public pricing information for my product.

AWS Data Exchange lets me create private pricing plans for individual customers, and it also allows my existing customers to bring their existing (pre-AWS Data Exchange) licenses for my products along with them by creating a Bring Your Own Subscription offer. I can use the Data Subscription Agreement (DSA) provided by AWS Data Exchange, use it as the basis for my own, or upload an existing one.

I can use the AWS Data Exchange API to create, update, list, and manage data sets and revisions to them. Functions include CreateDataSet, UpdateDataSet, ListDataSets, CreateRevision, UpdateAsset, and CreateJob.
Things to Know

Here are a few things that you should know about Data Exchange:

Subscription Verification – The data provider can require additional information in order to verify my subscription. If that is the case, the console will ask me to supply the info, and the provider will review and approve or decline within 45 days.

Revisions & Notifications – The Data Provider can revise their data sets at any time. The Data Subscriber receives a CloudWatch Event each time a product that they are subscribed to is updated; this can be used to launch a job to retrieve the latest revision of the assets. If you are implementing a system of this type and need some test events, find and subscribe to the Heartbeat product.

Data Categories & Types – Certain categories of data are not permitted on AWS Data Exchange. For example, your data products may not include information that can be used to identify any person, unless that information is already legally available to the public. See the Publishing Guidelines for detailed guidance on what categories of data are permitted.

Data Provider Location – Data providers must be a valid legal entity domiciled in the United States or in a member state of the EU.

Available Now

AWS Data Exchange is available now and you can start using it today. If you own some interesting data and would like to publish it, start here. If you are a developer, browse the product catalog and look for data that will add value to your product. — Jeff;

New – Import Existing Resources into a CloudFormation Stack

Amazon Web Services Blog -

With AWS CloudFormation, you can model your entire infrastructure with text files. In this way, you can treat your infrastructure as code and apply software development best practices, such as putting it under version control or reviewing architectural changes with your team before deployment.

Sometimes AWS resources initially created using the console or the AWS Command Line Interface (CLI) need to be managed using CloudFormation. For example, you (or a different team) may create an IAM role, a Virtual Private Cloud, or an RDS database in the early stages of a migration, and then you have to spend time to include them in the same stack as the final application. In such cases, you often end up recreating the resources from scratch using CloudFormation and then migrating configuration and data from the original resource. To make these steps easier for our customers, you can now import existing resources into a CloudFormation stack!

It was already possible to remove resources from a stack without deleting them by setting the DeletionPolicy to Retain. This, together with the new import operation, enables a new range of possibilities. For example, you are now able to:

- Create a new stack importing existing resources.
- Import existing resources into an already created stack.
- Migrate resources across stacks.
- Remediate a detected drift.
- Refactor nested stacks by deleting child stacks from one parent and then importing them into another parent stack.

To import existing resources into a CloudFormation stack, you need to provide a template that describes the entire stack, including both the resources to import and (for existing stacks) the resources that are already part of the stack. Each resource to import must have a DeletionPolicy attribute in the template. This enables easy reverting of the operation in a completely safe manner.
You also need a unique identifier for each target resource, for example the name of the Amazon DynamoDB table or of the Amazon Simple Storage Service (S3) bucket you want to import.

During the resource import operation, CloudFormation checks that:

- The imported resources do not already belong to another stack in the same region (be careful with global resources such as IAM roles).
- The target resources exist and you have sufficient permissions to perform the operation.
- The properties and configuration values are valid against the resource type schema, which defines its required properties, acceptable properties, and supported values.

The resource import operation does not check that the template configuration and the actual configuration are the same. Since the import operation supports the same resource types as drift detection, I recommend running drift detection after importing resources into a stack.

Importing Existing Resources into a New Stack

In my AWS account, I have an S3 bucket and a DynamoDB table, both with some data inside, and I’d like to manage them using CloudFormation. In the CloudFormation console, I have two new options: I can create a new stack importing existing resources, or I can import resources into an existing stack. In this case, I want to start from scratch, so I create a new stack. The next step is to provide a template with the resources to import. I upload the following template with two resources to import: a DynamoDB table and an S3 bucket.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Import test
Resources:
  ImportedTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
  ImportedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
```

In this template I am setting DeletionPolicy to Retain for both resources. In this way, if I remove them from the stack, they will not be deleted.
This is a good option for resources that contain data you don’t want to delete by mistake, or that you may want to move to a different stack in the future. It is mandatory for imported resources to have a deletion policy set, so you can safely and easily revert the operation and be protected from mistakenly deleting resources that were imported by someone else.

I now have to provide an identifier to map the logical IDs in the template to the existing resources. In this case, I use the DynamoDB table name and the S3 bucket name. For other resource types, there may be multiple ways to identify them, and you can select which property to use in the drop-down menus. In the final recap, I review changes before applying them. Here I check that I’m targeting the right resources to import with the right identifiers. This is actually a CloudFormation Change Set that will be executed when I import the resources.

When importing resources into an existing stack, no changes are allowed to the existing resources of the stack. The import operation will only allow the Change Set action of Import. Changes to parameters are allowed as long as they don’t cause changes to resolved values of properties in existing resources. You can change the template for existing resources to replace hard-coded values with a Ref to a resource being imported. For example, you may have a stack with an EC2 instance using an existing IAM role that was created using the console. You can now import the IAM role into the stack and replace in the template the hard-coded value used by the EC2 instance with a Ref to the role.

Moving on, each resource has its corresponding import events in the CloudFormation console. When the import is complete, in the Resources tab, I see that the S3 bucket and the DynamoDB table are now part of the stack. To be sure the imported resources are in sync with the stack template, I use drift detection.
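The same import can be driven from the CLI as an Import-type change set. The sketch below assumes the template shown earlier is saved as template.yaml; the bucket, table, and change set names are placeholders.

```shell
# Create an import change set mapping each logical ID to an existing resource
aws cloudformation create-change-set \
  --stack-name imported-stack \
  --change-set-name import-resources \
  --change-set-type IMPORT \
  --template-body file://template.yaml \
  --resources-to-import '[
    {"ResourceType": "AWS::DynamoDB::Table", "LogicalResourceId": "ImportedTable",
     "ResourceIdentifier": {"TableName": "my-existing-table"}},
    {"ResourceType": "AWS::S3::Bucket", "LogicalResourceId": "ImportedBucket",
     "ResourceIdentifier": {"BucketName": "my-existing-bucket"}}
  ]'

# Review the change set, then execute it to perform the import
aws cloudformation execute-change-set \
  --stack-name imported-stack \
  --change-set-name import-resources
```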
All stack-level tags, including automatically created tags, are propagated to resources that CloudFormation supports. For example, I can use the AWS CLI to get the tag set associated with the S3 bucket I just imported into my stack. Those tags give me the CloudFormation stack name and ID, and the logical ID of the resource in the stack template:

```shell
$ aws s3api get-bucket-tagging --bucket danilop-toimport
{
    "TagSet": [
        {
            "Key": "aws:cloudformation:stack-name",
            "Value": "imported-stack"
        },
        {
            "Key": "aws:cloudformation:stack-id",
            "Value": "arn:aws:cloudformation:eu-west-1:123412341234:stack/imported-stack/..."
        },
        {
            "Key": "aws:cloudformation:logical-id",
            "Value": "ImportedBucket"
        }
    ]
}
```

Available Now

You can use the new CloudFormation import operation via the console, AWS Command Line Interface (CLI), or AWS SDKs, in the following regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo). It is now simpler to manage your infrastructure as code; you can learn more about bringing existing resources into CloudFormation management in the documentation. — Danilo

Accessing Bing Webmaster Tools API using cURL

Bing's Webmaster Blog -

Thank you, webmasters, for effectively using the Adaptive URL solution to notify bingbot about your website’s freshest and most relevant content. But did you know you don’t have to use the Bing webmaster tools portal to submit URLs? Bing webmaster tools exposes programmatic access to its APIs so webmasters can integrate URL submission into their workflows. Here is an example using the popular command-line utility cURL that shows how easy it is to integrate the Submit URL single and Submit URL batch API endpoints. You can use the Get url submission quota API to check the remaining daily quota for your account. The Bing API can be called from all modern languages (C#, Python, PHP…); however, cURL helps you prototype and test the API in minutes, and even build complete solutions with minimal effort. cURL is considered one of the most versatile tools for command-line API calls and is supported by all major Linux shells – simply run the commands below in a terminal window. If you're a Windows user, you can run cURL commands in Git Bash, the popular git client for Windows (no need to install cURL separately – Git Bash comes with it). If you are a Mac user, you can install cURL using a package manager such as Homebrew. When you try the examples below, be sure to replace API_KEY with your API key string obtained from Bing webmaster tools > Webmaster API > Generate. Refer to the easy set-up guide for Bing’s Adaptive URL submission API for more details.

Submitting new URLs – Single
curl -X POST "" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"", "url": ""}'
Response: {"d": null}

Submitting new URLs – Batch
curl -X POST "" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"", "urlList":["", ""]}'
Response: {"d": null}

Check remaining API quota
curl ""
Response: { "d": { "__type": "UrlSubmissionQuota:#Microsoft.Bing.Webmaster.Api", "DailyQuota": 973, "MonthlyQuota": 10973 }}

So, integrate the APIs today to get your content indexed by Bing in real time.
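A minimal batch-submission sketch is shown below. The site and page URLs are placeholders, API_KEY must come from your own Bing webmaster tools account, and the endpoint in the commented curl line is the one given in Bing’s Adaptive URL submission documentation (verify it against the current docs before relying on it). Building the payload in a file keeps the JSON quoting simple across shells.

```shell
# Placeholders: replace with your verified site and a real API key.
API_KEY="YOUR_API_KEY"
SITE_URL="https://www.example.com"

# Build the batch payload in a file (unquoted EOF so ${SITE_URL} expands).
cat > urls.json <<EOF
{
  "siteUrl": "${SITE_URL}",
  "urlList": [
    "${SITE_URL}/about",
    "${SITE_URL}/blog/new-post"
  ]
}
EOF

# Validate the JSON before sending; the API rejects malformed payloads.
python3 -m json.tool urls.json > /dev/null && echo "payload OK"

# Uncomment to submit (requires a valid key and a site verified in
# Bing webmaster tools; endpoint path per Bing's API documentation):
# curl -X POST "https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlBatch?apikey=${API_KEY}" \
#   -H "Content-Type: application/json" -H "charset: utf-8" -d @urls.json
```

A successful submission returns {"d":null}, matching the responses shown above; quota errors come back as HTTP errors, so check the Get url submission quota API first if submissions start failing.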
Please reach out to Bing webmaster tools support if you face any issues.  Thanks, Bing Webmaster Tools team  

Contextual Advertising Simplified: A Beginner’s Guide

BigRock Blog -

A couple of weeks ago, I was searching for the perfect hair care products to rejuvenate my hair. Needless to say, I did my research thoroughly – browsing websites, watching YouTube videos, asking friends and even checking posts by influencers on social media. Eventually, I logged into my Amazon account and added the products to the cart, but right before I could click ‘Buy’ I changed my mind. I didn’t want to purchase the products anymore – until today, when I finally purchased one of them. Now, you might be wondering why I am sharing this here. Well, that’s because I realised that what you see frequently is what you end up buying. But more than that, it is where you see it – the location and its relevance – that matters. In our context, I am talking about an ad. Take, for instance, the two images listed below. Note: The two products shown in the images are brands I was actively searching for in the last couple of weeks, and these are actual screenshots. Image 1: Source: Instagram. Image 2: Source: Look Like This (beauty, makeup & fashion blog). There are four main inferences I would like to call out: The two products differ in what they are and in brand; however, hair care is the common thread. These were the brands, if not the actual products, I was actively searching for. One ad is on my Instagram homepage while the other was on a website I follow. And most importantly, which product did I end up purchasing, and why? The answer is that I purchased the Kama Ayurveda product shown in the second image. However, the crux of this article is not what I bought but the ‘why.’ Why did I choose to go back to my abandoned cart and pay for only one product? And does this influence the decision-making of other users as well? Let us begin by understanding advertising. What is Advertising? Advertising is a way to communicate with consumers with the aim of persuading them to take action.
The primary goal of advertising, then, is to promote or sell a product to the consumer in an effective manner. Advertising involves the advertiser, the page that displays the advertisement, and the consumer who views it and decides whether to make the purchase. There are various types of advertising models available; some of the most widely used are: Social media advertising. Native ads and sponsored content. Paid search advertising. Broadcast media. Targeted advertising. You can adopt whichever type of advertising is suitable for your business. Two of the most commonly known advertising types are pay-per-click advertising, which falls under the paid search advertising model, and contextual advertising, which falls under targeted advertising. In this article, we’ll be covering contextual advertising: what it is, its benefits, and how to set it up. What is Contextual Advertising? As a user, you often notice blogs and websites you frequent displaying ads for related products. Sometimes the ads have no relation to the niche of the website; other times they do – and when they do, you’ve just seen a contextual ad! Contextual advertising is a targeted advertising technique in which the ad campaign is directly relevant to the content of the website or page on which it is placed, and therefore to the user. From a user’s point of view (mine, in this case), I purchased the Kama Ayurveda oil because I was targeted with it every time I visited the fashion website to check a new product or review. Now, from the business point of view: if you are running a fashion and beauty blog or website but your page displays a hosting or cooking ad, the context is lost. The idea is that your website displays ads based on the products you sell or the niche you write about. In the same example, if the ad displayed on your website is for a lipstick or a perfume, then it is contextual advertising.
So, when the type of ad means nothing to the user, you have lost out on brand building, ROI and conversions. Context is crucial when you’re trying to ensure that users click on the ads displayed on your web page so that you benefit from them. Benefits of Contextual Advertising: It offers a better user experience to viewers – this increases the chances of the ads displayed on your page being clicked, as they are based on website content, and it also enhances the relevancy of the ads. Contextual ads also improve the engagement rates of ad campaigns running on your website, as they target users based on context and not content alone. How does contextual advertising work? Now that we’ve seen what contextual advertising is, how do you get relevant ads to display on relevant websites? The answer is keyword targeting, topics and placements. Specific targeting helps narrow down your pool and offers a hassle-free experience. For this reason, Google AdSense is a popular platform for contextual advertising. AdSense allows you to place text-based, video and image ads on the web pages of a relevant pool of websites. This way, your ad is visible to users who aren’t necessarily searching for you directly. Take, for instance, the example below. As a user, I searched for ‘Relaxing Stress.’ However, before the video plays, I see an advertisement for ‘Mindvalley’ that talks about embracing our body energy. I didn’t search for Mindvalley, but since the YouTuber has chosen contextual advertising I see this ad, and the chances of my clicking on it are high because it is relevant to the content I searched for. In this way, it is a win-win for both the advertiser and the page displaying the ad. Contextual advertising, then, aims to match the ad to the user’s context so that the goals of ROI, conversions and clicks are fulfilled.
There are multiple advertising platforms available; however, Google AdSense is one of the leading tools, and one of the simplest to get started with for contextual advertising. Follow the steps below to kickstart your advertising journey! Steps to set up contextual advertising: Set up your Google AdSense account. Next, to set up your campaign, select ‘Display Network’ and choose what you want to optimise your ad for. When asked for the ‘Campaign subtype’, select ‘Standard display campaign’ over ‘Gmail campaign’, as this will give you greater audience reach. After this, select your target audience and their demographics. Your audience pool can be built from these criteria: Affinity – this allows you to target people based on long-term interests. Intent (in-market) – this targets people who are actively researching or shopping for products. Remarketing – this allows you to target users who have previously interacted with your website or ads. Once you’ve set up your campaign, it is time to customise it for content targeting. After setting up the demographics, you will see the ‘+ Content Targeting’ option; click on it to choose keywords, topics and placements. This is of the utmost importance when it comes to contextual advertising. Although content is said to be king, this perspective is slowly changing, with context becoming the new king. After all, ad campaigns targeted to users at the right place and the right time yield better results and improve the effectiveness of the ads for both publishers and users. Have you switched to contextual advertising? If yes, do let us know your experience in the comments section below!


