At Waggener Edstrom, a global integrated communications agency, there are strong bonds between PR professionals, developers and analytics teams. Application developers are brought directly into the PR dialogue, where their skills and imagination are called on to create new tools and platforms for measuring campaign results on a daily or even moment-by-moment basis. These tools and platforms unlock data that can be used to direct and adjust communications campaigns for the greatest business impact.
Waggener Edstrom’s story tells how WE Infinity, a data mining and analytics platform, tracks online news coverage, millions of Tweets and other website content in real time, giving account managers the ability to measure campaign performance and help their clients modify strategies and tactics as a campaign matures. It’s an agile approach to unstructured data analysis that has helped Waggener Edstrom spur growth in an impressive roster of global enterprises.
“With the rapid pace of today’s communications environment combined with a deluge of data, our clients need to have accurate intelligence that helps them make decisions they can trust. With WE Infinity, we are helping our clients answer two of the most fundamental communications questions: Did I make an impact, and how do I improve my impact going forward,” said Karla Wachter, Waggener Edstrom Communications senior vice president of Insight & Analytics.
WE Infinity started with an internal hackathon, the desire for real-time analysis, and our free API keys. Application developers at Waggener Edstrom used five services, from keyword extraction and named entity extraction to concept tagging and author extraction, to transform an approach that once was entirely manual – visiting numerous web sites and listening to multiple social channels on a daily basis – into a more than 80% automated process.
“We wanted to crunch near real-time data, and we needed a way to analyze it to determine if it was relevant to our clients,” said David Kohn, Waggener Edstrom vice president of software development. “Our developers quickly built a proof-of-concept that included AlchemyAPI’s services using the free API keys available on their website. That was given to a larger team of developers with the challenge to build their own tools using that platform.”
The data produced about brands, products and issues is only growing in today’s fast-paced environment. But data is just data until it is transformed into useful intelligence that helps communications professionals understand their audience, who influences them and where, which messages resonate and which don’t. With solutions like WE Infinity, Waggener Edstrom provides near real-time insight not only into the performance of communications activities, but also into what can be done to improve that performance going forward.
During a recent webcast, Eric Schmidt, Product Manager for Google's Cloud Dataflow, and Richard Leavitt, CMO of AlchemyAPI, partnered to dig into the analytical pipeline capabilities Google recently unveiled under its Cloud Dataflow service. Eric shared how he analyzed the sentiment of millions of tweets streaming in real time from the World Cup in order to track fan response to events happening on the field. Attendees learned how he calculated average sentiment per team and per match, correlated sentiment with keywords at turning points in the games, and tied it all together with a timeline visualization that lets you track how global fans feel throughout a match.
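The per-team aggregation described in the webcast can be sketched in a few lines. This is a hypothetical Python sketch, not code from the project; it assumes each tweet has already been scored by a sentiment service and tagged with a team:

```python
from collections import defaultdict

def average_sentiment_per_team(scored_tweets):
    """Average per-team sentiment from (team, score) pairs, where
    score is a sentiment value in [-1, 1]."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for team, score in scored_tweets:
        totals[team] += score
        counts[team] += 1
    return {team: totals[team] / counts[team] for team in totals}

# Hypothetical scored stream for one match.
stream = [("BRA", 0.6), ("GER", -0.2), ("BRA", -0.8), ("GER", 0.4)]
match_sentiment = average_sentiment_per_team(stream)
```

Running this same reduction over time windows rather than a whole match is what makes the timeline visualization possible.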
As always, we received excellent questions for our presenters and couldn't quite get to all of them. Read on to get the answers to your top seven questions on everything from the use of third party services to training algorithms to using similar approaches on text messages.
1. There’s an idea of plugging a third party service into something as big and fast as what Google is handling. How could you get a service like AlchemyAPI operating at the volume and speed you were processing?
Eric: Taking a step back, the pipeline or graph that I built is, when deployed on Dataflow, distributed across virtual machines to execute each step. One of my steps was to call out to Alchemy’s APIs and the translation service. We had to do a little bit of tuning to see how much latency there was with those downstream services. Our experience was that the latency was consistent when we were making calls out to the service. Then we could work backwards and say, “If we want to do 1,000 per second and we have X messages that don’t ultimately make it through, then we can plan to only score so many messages and plan the number of workers we need to call out.” We actually have a primitive in our SDK that helps you throttle the degree of parallelization you apply to an external service. You wouldn’t just stampede over a service like AlchemyAPI.
Richard: AlchemyAPI does support concurrent connections into the thousands, so opening many threads or connections to process at whatever speed you need is something Alchemy is robust enough to handle.
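The throttling idea both answers describe – capping how many concurrent calls you make to a downstream service – can be sketched with a bounded thread pool. This is an illustrative sketch, not Dataflow SDK code; the `score_text` stub stands in for the real network call, and the worker count is a made-up tuning value:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_CALLS = 8  # tune against the measured latency of the downstream service

def score_text(text):
    """Stub for a call to an external service such as AlchemyAPI;
    a real implementation would POST `text` and parse the response."""
    return {"text": text, "score": 0.0}

def score_stream(texts, max_workers=MAX_PARALLEL_CALLS):
    """Fan texts out to the service while capping concurrency, so the
    pipeline doesn't stampede over the downstream API."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order even though calls run in parallel
        return list(pool.map(score_text, texts))

results = score_stream(["great goal!", "terrible call, ref"])
```

The pool size plays the same role as the planning Eric describes: measured latency times desired throughput tells you how many in-flight calls you need.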
2. There’s the ability in AlchemyAPI to target your sentiment at high, medium or low levels. When extracting sentiment, were you using general sentiments of the Tweets or were you targeting specific entities or teams?
Eric: We just used the general sentiment of Tweets. If I had to do it again, I don’t know that we would make a different decision. We felt pretty strongly that we could build a better targeting mechanism for soccer because we were already building up a rich set of inputs to help us understand team composition. We were building rosters and other team information for a different part of the project. So, we felt confident that we could understand the target of the Tweet with the data that we already had.
We built our own targeting algorithm for two reasons. One, resolving targets on soccer tweets is hard unless you have a very specific training model (e.g., player names, nicknames, country names, supporter phrases). We weren’t getting the right level of targeting. And two, given the terseness of the Tweets, we felt we could do this inline more efficiently than by calling out. Now, if I were processing an entire paragraph or blog, I’d use Alchemy’s API to do the targeting.
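A roster-based targeting pass like the one Eric describes might look like the following sketch. The rosters and the substring-matching rule are invented for illustration; the real system's inputs and algorithm are not public:

```python
# Hypothetical, heavily abbreviated rosters; the real project built
# these from team composition data it already had.
ROSTERS = {
    "BRA": {"brazil", "neymar", "selecao", "#bra"},
    "GER": {"germany", "muller", "die mannschaft", "#ger"},
}

def resolve_targets(tweet):
    """Return the teams a tweet appears to be about, by matching
    player names, country names, nicknames and supporter phrases."""
    text = tweet.lower()
    return {team for team, terms in ROSTERS.items()
            if any(term in text for term in terms)}

resolve_targets("What a goal by Neymar! #BRA")  # resolves to {"BRA"}
```

Because the lookup is an in-memory set scan, it runs inline in the pipeline with no extra network hop, which is the efficiency argument made above.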
Richard: Tweets are notoriously short so it can be a challenge to gather sentiment with traditional language tools. Alchemy supports targeting sentiment at multiple levels of a document. You can get the overall sentiment of the complete document, or target sentiment to the specific phrases you care about (a team name, player, etc.), the named entities in the document or all the way down to each keyword and phrase. Before you target sentiment with your own phrases, you should see if our entities and keywords are already recognizing your data types. We do a pretty good job at recognizing these, even for brand new, first-time-seen entities and keywords! You can learn more here.
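As a sketch of what a targeted-sentiment call might look like: the endpoint and parameter names below follow AlchemyAPI's general REST conventions but are assumptions, so confirm them against the API documentation before relying on them. The function only assembles the request; no network call is made:

```python
BASE_URL = "http://access.alchemyapi.com/calls/text"

def targeted_sentiment_request(api_key, text, target):
    """Assemble a request that scores sentiment toward `target`
    (a team name, player, etc.) rather than the whole document.
    Endpoint and parameter names are illustrative assumptions."""
    url = BASE_URL + "/TextGetTargetedSentiment"
    params = {
        "apikey": api_key,
        "text": text,
        "target": target,
        "outputMode": "json",
    }
    return url, params

url, params = targeted_sentiment_request(
    "MY_KEY", "Neymar was brilliant tonight", "Neymar")
# A real caller would now POST these, e.g. requests.post(url, data=params).
```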
3. There is an idea that a lot of training goes on when using cloud services. Did you do any training? Or, were you using the “off-the-shelf” AlchemyAPI sentiment analysis feature?
Eric: Time and quality were big factors for us. We tried to do our own training with both our Prediction API and some other open-source APIs. We just weren’t getting the results that we wanted. Now, it was clear that our accuracy improved as we trained with Google’s Prediction API. But as Product Manager, I took a step back and I wondered if somebody else had built a better model. That was how I came across AlchemyAPI. I was doing testing on the same data that we were training on. I was getting better accuracy with AlchemyAPI and I didn’t have to invest time to build the training models.
4. Can the APIs you used be on data streams other than Twitter? Or, are they only for Twitter?
Eric: Yes, you can process any type of stream source: text, binary, relational datasets, etc. Dataflow’s internal data structure for containing data is called a PCollection, where the element type can be any type you create.
Richard: A powerful feature of AlchemyAPI is that we take on the heavy lifting of acquiring, crawling and cleaning text and images directly from web pages or posted HTML. In addition to text and image extraction APIs, you can scrape content from complex pages, extract authors, detect languages, etc.
5. Eric, did you train on Tweets from prior soccer matches? Can AlchemyAPI’s sentiment analysis API be tuned for this particular domain? Can you discuss customization?
Eric: We did not train Alchemy’s API. We did several test passes on sets of games to verify accuracy. Alchemy does provide customization options. I will let Richard speak to that. I did do training on Google’s Prediction API, but as mentioned in the talk I was not able to get enough training done in the time budgeted to get the accuracy I wanted, which led us to use AlchemyAPI.
Richard: AlchemyAPI does have specific customization opportunities. Given that we are trying to democratize this technology, our philosophy is to have a general understanding that can be applied to many different problems of extracting meaning for text. If we are going to return keywords or categories – as in, smartphones or football – we try to be general to apply to a broad audience. But customization is extremely important and there are domain specific ideas that we all need to take advantage of.
Contact us if you are working with a specific lexicon or group of terms and need to customize the solution. We are constantly looking at how we can make our systems as flexible as possible and still offer customization to those in niche industries. In particular, our Taxonomy API is able to categorize text into over 1,000 categories going up to five levels deep, such as sports (e.g., kayaking, baseball or cricket) or finance (e.g., mergers and acquisitions, credit cards or vehicle financing). We have seen a lot of success with custom categorization in that area, which allows you to pick your own categories even if they aren’t normally available.
6. Please talk about the Twitter drill down. Are there opportunities to apply other NLP functions like keyword analysis or entity extraction?
Eric: We did consider this. If I was going to continue to evolve this, I would definitely do additional entity analysis. We would get higher fidelity or higher quality data if we used other NLP functions. There is definitely opportunity to gather additional insights. For this particular example, we did not use links. We extracted them and focused primarily on the text of each Tweet.
We considered doing image analysis using AlchemyAPI’s computer vision services. We thought about doing that, as well as influencer analysis. We were tracking certain Twitter handles based on those who are material influencers in the community. They have a more realistic or truthful view of what is happening in a particular match. While we stopped with just text analysis, you could definitely add another sub-workflow to do things like image analysis, influencer analysis or link analysis.
We did build some training models using Google’s Prediction API. We realized that we would have to spend two weeks or more to get the models that would provide a reasonable level of consistency. It would have been fun to customize training models with AlchemyAPI. If I were to extend this, I would invest in it a bit more. Our soccer experts and data scientists helped us use our internal data. You have to decide whether the training work is worth it.
7. So, just how long did this project take?
Eric: I did most of the development work in four weeks. That includes narrowing down scope, the customer process with AlchemyAPI, Twitter, etc. I licensed Twitter data. We spent quite a few cycles going through contract processing.
It took me about a week to get the parsing and basic pipeline built and all the pieces working. Then, I spent another two weeks focused on honing the algorithms and building the visualizations. After that first week, I was able to prove that the pipeline and overall model worked, which was great because then I could really start focusing on how to answer the questions in a more meaningful way, versus just saying, “I can do sentiment analysis based on translation and third-party APIs.”
Richard: For those of us who have been in the industry for a while, that’s what is so astounding. This wasn’t a team of developers or a long-winded project. Machine learning and unstructured data analysis don’t have to be hard, and you can still get a lot of meaningful results from them.
AlchemyAPI was designed for speed of results. With 1,000 free calls per day, and offered as a web service with built-in text extraction and cleaning for web pages, you really can get started immediately. Plus, stay tuned for the launch of AlchemyData, where we will offer a simple query interface to a treasure trove of public news and blogs - no text or image processing required (we’ll already have done it for you).
Today we're announcing the public availability of our Face Detection and Recognition API, the latest addition to the AlchemyVision product family. When provided an image file or URL, the AlchemyVision Face Detection and Recognition API returns the position, age, gender, and, in the case of celebrities, the identity of the people in the photo. Organizations across a variety of industries, such as social media monitoring and advertising, can take advantage of face detection to analyze their unstructured image data. This API provides the ability for applications to glean demographic data from images, which can be useful when analyzing a person’s social media habits or for analyzing which images have the highest return on investment in advertising campaigns.
In addition to the general face detection capabilities, an impressive feature of the AlchemyVision Face Detection and Recognition API is its familiarity with well-known entities. For example, providing the API with an image of a famous politician or Hollywood celebrity allows a user to retrieve identity information, along with several other pieces of metadata: age and gender, type hierarchy information pulled from our knowledge graph, and a slew of linked data (e.g., personal websites, DBpedia links). The face recognition system is capable of identifying 60,000 different celebrities.
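To make the response concrete, here is a hypothetical sketch of consuming such a result in Python. The JSON field names are modeled on the description above (position, age, gender, identity) and are illustrative, not the service's exact schema:

```python
import json

# Illustrative response; field names are assumptions, not a verbatim schema.
sample = json.loads("""
{
  "imageFaces": [{
    "positionX": "124", "positionY": "60", "width": "90", "height": "90",
    "age": {"ageRange": "45-54", "score": "0.55"},
    "gender": {"gender": "MALE", "score": "0.98"},
    "identity": {"name": "Barack Obama", "score": "0.97"}
  }]
}
""")

def summarize_faces(response):
    """Pull out the demographic fields an application would act on."""
    return [{
        "age": face["age"]["ageRange"],
        "gender": face["gender"]["gender"],
        # identity is only present for recognized celebrities
        "identity": face.get("identity", {}).get("name"),
    } for face in response.get("imageFaces", [])]

summarize_faces(sample)
```

A social-media monitoring application could feed these summaries straight into its demographic reporting.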
Derrick Harris from GigaOM announced our release in a story today, "AlchemyAPI now recognizes famous faces (and can learn yours, too)." According to him, he "was impressed when Turner showed me it could distinguish actor Will Ferrell from Red Hot Chili Peppers drummer Chad Smith." Turner also demonstrated the API identifying politicians Barack Obama, Harry Reid and Nancy Pelosi.
We’ll be following up in the next few weeks with additional information on useful applications of this API and how you can extract business value from visual content.
The world is constantly uploading photos to the internet. By the same token, news articles and blog posts are written and posted to the web around the clock. Clearly, there is an ever-growing need to have automated systems tag these documents and images for us. Having a way to provide keywords for text and photos without human involvement can be a massive game-changer for someone interested in aggregating relevant topics in news outlets or categorizing very large libraries of pictures.
Here at AlchemyAPI, we have solutions to these problems! Users of AlchemyLanguage and AlchemyVision have already seen the power of quickly and reliably extracting keywords out of paragraphs and/or pixels. But this raises the question: what do we mean by “reliable?”
For demonstration purposes, let’s consider the image below. What are some good tags (or keywords) for this picture? Don’t overthink this one...
If you said “iPhone” and “Apple,” most people would agree with you. As it turns out, AlchemyVision would also agree with you! See the output of our image tagging API for this photo:
AlchemyVision takes simple tagging a step further by associating a “confidence score” between 0 and 1 (1 being the most confident) with each tag.
Note how different these two scores are. The confidence of “iphone” is significantly higher than that of “apple.” Why? If we know that this is an iPhone, don’t we also know that it is an Apple product? Shouldn’t we be equally confident in those terms as appropriate tags for this image?
The thing about these scores is that they do not necessarily demonstrate the correctness of a particular tag, but rather they indicate how appropriate the tag is for a given image. In the example above, “iphone” and “apple” are both correct tags for the picture. However, it turns out that “iphone” is actually a better fit, which is why we see such a large difference between these scores.
While there is no silver bullet for selecting the number of terms or tags to associate with your images, you can turn the knobs yourself and work out the score thresholds appropriate for your image tagging purposes. As a general rule of thumb, a score of 0.9 or higher with AlchemyVision signifies a tag that is pretty spot-on.
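Turning that knob might look like the following sketch, using hypothetical scores in the spirit of the iPhone example:

```python
def confident_tags(tags, threshold=0.9):
    """Keep only tags whose confidence clears the threshold; 0.9
    follows the rule of thumb above, but the knob is yours to turn."""
    return sorted(tag for tag, score in tags.items() if score >= threshold)

# Hypothetical scores echoing the iPhone example.
confident_tags({"iphone": 0.97, "apple": 0.64, "electronics": 0.91})
# keeps "electronics" and "iphone"; "apple" falls below the threshold
```

Lowering the threshold trades precision for recall: more tags per image, but more loosely fitting ones.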
Also, for text analysis features like entity extraction and concept tagging, relevance scores are calculated for each entity or keyword in a document. The relevance score depicts the significance of each unique term, in a similar vein to the confidence scores returned in image tagging. The higher the relevance score, the more important that term is to the central meaning of the document.
As tags can be subjective, we recommend familiarizing yourself with the outputs of our analysis engines by test driving the AlchemyLanguage and AlchemyVision demos. If you’d like to run more in-depth testing with some of your images and documents, download a free API Key and get started today.
“Easy to code,” “simple to use” and “Internet-scale” are probably not terms you associate with developing applications that analyze Tweets, chats, emails and other unstructured data coursing through analytics pipelines. But, perhaps we can cause you to reconsider with an example.
Leaning on a decade of data analytics and cloud scalability, Google recently unveiled Cloud Dataflow, an SDK and managed data processing service that can analyze real-time, streaming data flows and batch sets. Dataflow gives app developers the ability to execute semantic analysis of social posts and news using just a few lines of code and API calls.
Part of the reason we’re so excited about Dataflow is because we had a front-row seat when the Google Cloud engineering team paired it with our Sentiment Analysis API during the World Cup. For this project, Dataflow grabbed tweets, converted them into objects, translated them to English, and then used our API to score the positive and negative connotations found in World Cup fans’ tweets. It was fun to see how social media activity directly correlates with real-time events on the field.
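The four pipeline steps described above can be mirrored in a toy sketch, with stub functions standing in for the Twitter source, the translation service and the sentiment API (all names and scoring rules here are illustrative):

```python
def get_tweets():
    """Stub source; the real pipeline read a licensed Twitter stream."""
    return ["¡Qué golazo!", "great save"]

def translate(text):
    """Stub for a translation-service call (e.g. to English)."""
    return {"¡Qué golazo!": "What a goal!"}.get(text, text)

def score(text):
    """Stub for the sentiment-scoring API call."""
    return 0.9 if "!" in text else 0.5

def pipeline():
    """Mirror the steps above: grab tweets, translate them to
    English, then score each one."""
    return [(tweet, score(translate(tweet))) for tweet in get_tweets()]

pipeline()
```

In the real deployment, each of these stages ran as a distributed Dataflow step rather than a local function call, but the shape of the computation is the same.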
Another reason we’re talking about Dataflow is that we are hosting a web session on Thursday, September 18 with Eric Schmidt, a Solutions Architect for the Google Cloud engineering team focused on big data scenarios, who led the Dataflow project. Our CMO, Richard Leavitt, will join Eric to discuss:
Recordings of the first two web events in our Deep Learning Series, titled What is Deep Learning AI & How Should You Use It Today? and Artificial Intelligence APIs: Intro for Those Building Smarter Applications, are available today.
What compelled Google to commit Schmidt and his team to create a replacement for 10-year-old MapReduce was the need to cope with exabyte-scale data while making it very easy to write pipelines, apply analytics and use the same code for batch and streaming analytics.
While you probably aren’t dealing with data at Google-scale, you probably are working to answer the same types of questions: What do people really think about my company and products? How can we get actionable information from data in real-time, rather than just store it for later? What connections can I make from my raw data to the results to confirm the trends that we see?
I invite you to join us for How Google Does It: Big and Fast Semantic Analytics Pipelines. Of course, if you register but cannot attend, we will follow up with access to the presentation, recording and Q&A.
Date/Time: Thursday, September 18, 2014 at 12pm ET/9am PT
Perfect for: Software and technology leaders who want to perform semantic analysis on real-time social media and news streams at Internet scale.
Presenters: Eric Schmidt, Solutions Architect at Google and Richard Leavitt, CMO at AlchemyAPI
Tell us about the AlchemyVision project. What did the team set out to accomplish?
The goal of the project was to build a large-scale system that could reliably add valuable structured data to unstructured image data. This would balance the scope of our company, whose primary goal at that point was adding structured data to unstructured text.
At the outset, the team knew that, eventually, the vision project should be able to extract accurate knowledge from the potentially billions of images that make up the world's growing corpus of data.
What were some of the AlchemyVision project outcomes?
Our first metric of success was to match the results of a seminal 2012 paper from Hinton’s group. The team reached that goal rather quickly. From that point on, it was a new frontier, as no other groups had moved past that point. After we started to see the release of other competing products, we made surpassing those products’ results our goal. We’re proud to say that the AlchemyVision system can accurately tag a wider range of images than these other competing systems.
We’ve had positive feedback from several customers, such as Simply Measured, whose CPO and Co-Founder, Aviel Ginzburg, is excited to use AlchemyVision in its product.
"As shoppers become increasingly comfortable making their purchases online, major brands are driving a large amount of sales through eCommerce offerings," said Aviel Ginzburg, CPO and Co-Founder of Simply Measured. "It's important that we provide brands a way to track which campaigns resonate and drive action online. The ability to track and measure everything in a campaign, including the images used, gives brands a competitive advantage when targeting customers and driving sales. With AlchemyVision, we have been able to accurately tag and classify a good portion of images at very high rates with minimal human effort."
AlchemyVision now has several customers, including CamFind, who has seen response times as fast as a second when querying our API. They use our service to provide the first-response tagging results for their image recognition app.
What was one of the major lessons you learned during the AlchemyVision project?
KISS, or “keep it simple, stupid,” was a consistent theme in many stages of development of AlchemyVision. The simplest idea usually ended up being the best. Some approaches seemed like they were too basic to work, but in the end, they were the optimal solutions to our problems. In the course of a brainstorming session, the team would come up with an idea that was so simple it appeared to trivialize the problem. We thought that there had to be a more elegant solution. But when we tried these, most often we’d end up back at the first idea, refining it to make it more robust.
Topic: Artificial Intelligence APIs: An Intro for Developers Who Must Build Smarter Applications
Perfect for: Developers, programmers, engineers & hackers getting started with AI
Presenter: Devin Harper, AI Research Scientist
Application programming interfaces, or APIs, allow developers to utilize the power of artificial intelligence (AI) to create applications and build features on top of it. AI is being used every day to detect fraud, recommend relevant content and products, power e-commerce platforms, listen to consumer sentiment in social media channels and much more.
But, we think the coolest thing about artificial intelligence APIs is that they show no bias toward company size, industry or job title. Anyone with a little programming experience can use them.
Below are four cutting-edge “Alchemists” that demonstrate what you can do with a creative idea, the right tools and a little perspiration. While there are many more examples worth recognizing, we had to pick just a few!
For even more ideas, join Devin Harper, AI Researcher at AlchemyAPI on Thursday, August 28 for "Artificial Intelligence APIs: An Introduction For Those Who Must Build Smarter Applications." Learn about the cloud APIs available to you today like AT&T Speech, AlchemyVision and Google Translate and get ideas for how you can apply AI to your specific business challenges.
Advertising network AdTheorent matches web page content to reader interest with hyper-relevant ad targeting, which goes far beyond simply categorizing a web page or a tweet. They incorporate AlchemyAPI’s Keyword Extraction and Sentiment Analysis APIs to process more than 2 billion records each day and tie in important factors such as emotions, intent and facts expressed within the content. Their efforts have increased click-through rates (CTRs) on their ads by more than 200% and enable more effective monetization of audiences for their clients.
BrainJuicer, a market research agency, drives new sources of revenue for their clients by providing product recommendations aligned with consumers’ online behavior. To fuel their recommendations, BrainJuicer created digital avatars, or DigiViduals®, to seek out online content and discussions aligned with designated buyer profiles. When combined with AlchemyAPI’s Keyword Extraction, Language Detection and Relation Extraction APIs, it becomes easy to uncover trends, connect multi-channel activity and expose buyer preferences from the data to make high-performing product recommendations.
“We have run DigiViduals® for a couple of years now,” Richard Shaw, VP and DigiVisionary explains, “Our clients are pleased… In pre-market testing, we have noticed that ideas coming from DigiViduals® outperform ideas coming from other approaches like focus groups and brainstorming.”
After extensively exploring open source tools and considering building their own system, CrisisNET chose to partner with AlchemyAPI to accurately and quickly fill their “firehose of crisis data” with images from around the world. The team at CrisisNET uses AlchemyVision to pull in images from thousands of data sources, ranging from individual Facebook posts to UNHCR refugee updates to LERN's ebola case data, to drive their platform that aggregates and disseminates timely, relevant, and accurate information to news organizations reporting on natural disasters and humanitarian conflicts.
Altura Interactive, a Spanish digital marketing agency, uses AlchemyAPI to hone their SEO strategies. They employ the Entity Extraction API to help translators understand the entities they should translate and the ones that should remain untouched. They also use the Keyword Extraction API to map keywords to specific pages.
These services help Altura Interactive enrich their content by providing relevance scores and sentiment analysis for terms. And, they use the Language Detection API to divide the backlinks (incoming links), analyze them and reach out to websites that are in other languages to ask them to point their links to the appropriate pages.
In the world of market research, you can’t avoid the need to understand consumer actions and preferences, and that can take a lot of time. In this case study, BrainJuicer shows us how natural language processing (NLP) can cut the time your team spends parsing all of that information for useful signals.
The team at BrainJuicer spends a lot of time determining what intrigues their clients’ audiences and using that information to develop new ideas for products and campaigns that drive revenue. However, it is difficult to figure out exactly what consumers want. The sheer volume of data regarding their online and social interactions is enough to overwhelm any researcher.
Seeking the ultimate focus group to solve this problem, Richard Shaw, VP and DigiVisionary at BrainJuicer had an idea. Why not get insight into consumers’ real preferences and interests by creating digital buyers that mimic online behavior and gather information on their own? By going to where consumers are (Twitter conversations, forums, articles, etc.), BrainJuicer would be able to take the guesswork out of campaign strategies.
There was one problem. How would they create these avatars, now known as DigiViduals®? With a tight timeframe and a small budget, Shaw looked for a partner for help. “I tried a few APIs and found AlchemyAPI’s services to be the fastest to implement and easiest to use. And the documentation they provide is extremely user-friendly… Someone like me, who has a great concept but not millions of dollars or a team of developers, can realize their idea,” he states.
DigiViduals® have run for a couple of years and clients are pleased. “It is a great way to bring new ideas to life and it has shortened the time it takes for ideas to go from concept to production and release. In pre-market testing, we have noticed that ideas coming from DigiViduals® outperform ideas coming from other approaches like focus groups and brainstorming,” says Shaw.
Are digital avatars the ultimate focus group? Maybe. But for Shaw and the team at BrainJuicer, this is just the start of helping companies determine how to better serve consumers. Next up, BrainJuicer will enhance DigiVidual® profiles using AlchemyVision to process images posted on sites such as Instagram and Pinterest.
With the rise of young entrepreneurs like Mark Zuckerberg of Facebook, Andrew Mason of Groupon and others, millennials are focused more than ever on developing the next “big thing” and starting their own companies. Free Ventures, a non-profit startup incubator founded by students at UC Berkeley, is now helping budding entrepreneurs fulfill their dreams by acting as a launchpad that provides resources, funding, mentorship, and workspace to build products into companies.
“For a student to build a ‘cool idea’ into a full-fledged startup takes resources, capital, hard work and guidance from a mentor,” explains Cameron Baradar, Co-Founder of Free Ventures. “This is exactly what Free Ventures offers our teams.”
Two Free Ventures teams, Einstein and Iris, use AlchemyAPI to power their startup ideas.
Einstein, halfway into its second year, is a product recommendation platform that uses AlchemyAPI to process consumer reviews and make intelligent purchase suggestions to buyers.
The second team, Iris, uses AlchemyAPI to power their keyword-based aggregation of high quality blog content, intelligently connecting bloggers discussing similar content.
“While not every Free Ventures team will raise a seed round or launch publicly, a community of supporters like Amazon Web Services, AlchemyAPI, and others give our teams the ability to build without concern. Whether they garner a six figure investment or disband after a semester, first and foremost, these teams are here to learn,” shares Baradar.
If you are in the UC Berkeley neighborhood, share your startup prowess by becoming a mentor for a new team. Learn more at freeventures.org or contact the Free Ventures team at free[at]berkeley.edu.
Gartner estimates that a staggering 80% of business data is unstructured, which means it is in hard-to-analyze formats such as emails, tweets, chats, blogs, images and more. Development teams are being overwhelmed with requests to create applications and services that automatically gather and synthesize data so that organizations can make better content and purchasing recommendations, extract keywords for search engine optimization (SEO), collect brand intelligence to develop effective messages, and more.
Many application developers, engineers and their leaders are supporting their image and text analysis efforts with deep learning, a new area of machine learning and one of the 10 Breakthrough Technologies featured by MIT Technology Review in 2013. At a high level, deep learning uses neural networks to improve things like computer vision and natural language processing in order to solve unstructured data challenges. With deep learning, businesses can efficiently process and make sense of all of the data at their fingertips to drive increased productivity, innovation and profit.
Here are five of our favorite deep learning resources. Take a look and let us know if you have others to add to the list. And for a more interactive approach, join our Chief Scientist, Aaron Chavez on August 14 for the first webinar in our Deep Learning Webinar Series -- “What is Deep Learning AI and How Should You Use It Today?”
Bookmark these resources for future reference: