Ordr.in, "the world's only restaurant ecommerce platform," asks their new interns (Orderinterns) to build anything they want during their first week, as long as it uses their API. Their new intern Michael decided "to build something that would simplify menus by limiting the number of items in a menu."
He used AlchemyAPI to extract keywords from reviews rather than pulling in the entire review text, and with our Python library it took only a few lines of code to get the keywords from a text string.
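Michael's code isn't published, but the flow he describes — send a text string, get ranked keywords back — comes down to parsing the service's JSON response. A minimal sketch of that step (the sample payload values and the `top_keywords` helper are illustrative, not his actual code):

```python
import json

# Illustrative AlchemyAPI-style keyword response; the field names
# ("status", "keywords", "text", "relevance") follow the service's JSON
# output, but the values here are made up for the sketch.
sample_response = json.dumps({
    "status": "OK",
    "keywords": [
        {"text": "restaurant menu", "relevance": "0.95"},
        {"text": "delivery", "relevance": "0.72"},
    ],
})

def top_keywords(raw_json, limit=5):
    """Return the keyword strings from a keyword-extraction response."""
    data = json.loads(raw_json)
    if data.get("status") != "OK":
        return []
    return [kw["text"] for kw in data["keywords"][:limit]]

print(top_keywords(sample_response))  # ['restaurant menu', 'delivery']
```

In the real flow, `raw_json` would be the body returned by a keyword-extraction API call rather than a hard-coded sample.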
What a great way to get an intern going right away, encourage creativity, and get them familiar with the company's API.
Find out more on the ordr.in blog.
Our API is used by over 30,000 developers across the world. Every day we come across new applications using a variety of our text analysis features. Today we are highlighting Article Optimizer, an API mashup that takes advantage of our Text Categorization API.
According to ProgrammableWeb, Article Optimizer is an API mashup created by Zack Proser that is "a content analysis tool that shows you your keyword density, makes recommendations on new trending keywords to use, and offers up copyright-free images for use in your content."
Article Optimizer uses the following APIs: AlchemyAPI Text Categorization, Flickr, MailChimp, and SendGrid. It claims to simplify article writing, provide detailed keyword density analysis, and project article earnings.
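The tool's internals aren't public, but the keyword-density figure it reports is conventionally just the share of an article's words that a keyword accounts for. A minimal sketch of that calculation (the function name and sample text are invented for illustration):

```python
import re

def keyword_density(text, keyword):
    """Fraction of words in `text` that are occurrences of `keyword`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

article = "Coffee prices rose again. Coffee growers expect coffee demand to climb."
print(round(keyword_density(article, "coffee"), 3))  # 0.273 (3 of 11 words)
```

Real SEO tools typically layer stemming and phrase matching on top of this ratio, but the headline number is the same occurrences-over-total-words fraction.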
For the National Day of Civic Hacking this past Saturday, 11,000 civic activists, developers, designers, and entrepreneurs gathered at hackathons in 83 cities across the U.S. to create software to help their local communities. The White House welcomed more than 30 developers and designers to its second hackathon, giving them access to its new API for We the People. Over nine hours, hackers worked alongside White House staff to create software that finds, shares and analyzes We the People petitions.
Developer Yoni Ben-Meshulam created We the Entities, which enables users to "perform text-based analysis of petition data" using AlchemyAPI along with several other text analysis APIs.
Sixteen projects were presented to the group; several of them can be seen in the We the People API gallery, which includes demos and downloadable code.
The Great Cat Detective is a web application, targeted at elementary and middle school students, that encourages better research by guiding the user to choose better topics and articles through a detective story involving cats.
We originally wanted to do an "Easter Egg" style app where the user can go through a library and get book titles using relevant data, but instead decided to make it completely virtual and use a detective story as a way to present the fact finding information.
The idea to use cats was kind of random, but the team decided that it was a great idea since you will be too distracted by the cuteness to realize that you are actually performing high-level research and topic brainstorming that is focused, relevant and in-depth.
The features of AlchemyAPI we used the most were Concept Tagging, Sentiment Analysis and (though this wasn't used due to time) Author Extraction. For all of these features, we simply used the URLGetRankedResponses calls.
Concept Tagging: We decided to use Concept Tagging instead of Keyword Extraction in case the user later wants to use the Topic Engine (the tool in the back end that performs all the data extraction and content scraping from the URL provided by the front end) for criteria other than relevance. The user can extract more information down the road if they need it. For example, they can find locations from a concept to see what else happened around a topic and use that information to find location-based articles.
Sentiment Analysis was used to try to detect whether an article had an unfair bias. We wanted to steer the user toward articles that are as neutral as possible, and away from articles slanted one way or the other.
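That neutrality filter reduces to thresholding a sentiment score. A minimal sketch, assuming document-level sentiment scores in the range [-1, 1] as AlchemyAPI-style sentiment analysis returns (the article list, URLs, and threshold are made up):

```python
def neutral_articles(articles, threshold=0.2):
    """Keep the articles whose sentiment score is close to neutral.

    `articles` is a list of (url, score) pairs, where score is a
    document-level sentiment value in [-1, 1].
    """
    return [url for url, score in articles if abs(score) <= threshold]

scored = [
    ("example.com/balanced-report", 0.05),
    ("example.com/glowing-review", 0.81),
    ("example.com/angry-rant", -0.6),
]
print(neutral_articles(scored))  # ['example.com/balanced-report']
```

The threshold is a tunable trade-off: lower it and only near-perfectly neutral articles pass; raise it and mildly opinionated pieces get through.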
Author Extraction (though this wasn't actually used in the presentation) was meant to pull the author from an article so we could look up his/her social media profiles and other information; that way, if the user wants related articles by the same author, they can find them.
Our plans for this app vary right now and we are still working them out. We would like to make the front end a "skin" in which people can swap out the dialogue, pictures and everything else to create their own stories. So, if you don't want the great cat detective, you can have the great dog detective.
As for the back end, we are thinking of expanding the engine using AlchemyAPI's more robust features to get better results from URLs, HTML and text. We would also like to expand the topic-selection features and give the user more robust feedback.
Your documentation was rather intuitive and nice. It was really easy to read and the API is responsive.
George Petersen: I am currently a Computer Science student at Metro State University of Denver (MSU Denver), working on my second degree after earning a degree in English. My job is split between two departments at MSU Denver: one is to create and implement a teaching web/mobile application for the Technology Training and Education department at MSU Denver IT, and the other is Software Developer and Web Programmer for the Center for Advanced Visual and Experiential Analysis (CAVEA), where I primarily program the applications this department will use for both internal and external purposes. I was also last year's president of MSU Denver's Computer Science club, where I helped get some projects underway for the students.
Maggie Gourlay: I am currently a math major at MSU Denver with a minor in Computer Science. I've been a geek my whole life and love to code. I have a diverse skill-set having been a web developer and a systems administrator but my favorite job was at EA Sports. I am currently working on exploring the fascinating and beautiful union between computers and vision because it utilizes my academic interests as well as a long-held curiosity about the acquisition of knowledge and learning. Participating in the hack-a-thon was a great experience and I look forward to working on a team again soon!
Jesse Nelson: I am a computer science major at Metro State University of Denver. As of last semester my job was outsourced to India, so I am now going to school sponsored through the government's Trade Adjustment Assistance (TAA) program. TAA covers all tuition and also provides bi-weekly checks, so my only focus is to perfect my craft in the computing realm. I have familiarized myself with several languages since I started programming two years ago. I am most proficient in Java, as this was my "mother language," but I am not far behind in my knowledge of Scheme and ANSI LISP. Over the past five months I have also become fairly proficient in C and Objective-C. My current goals are increasing my general knowledge of Python and understanding the web side of JavaScript. Once I have finished my degree at Metro, I plan on going to grad school with a focus somewhere in Artificial Intelligence.
Kaitlyn Culley and Jeff Olsen: We're both seniors at Metro majoring in Applied Math and Digital Art respectively (both with minors in Computer Science). Recently we've founded a contracting startup called Invisible Dry Goods, focusing on gamification of training and presentation software. Our projects have been ranging from surgery simulations to interactive oil rig sales material. When we get the chance, we try to squeeze in a little time for interactive media art, illustration, papercraft and beer.
Interview with Ahmad Assaf, the creator of SNARC, which won first place at the 2013 ESWC (European Semantic Web Conference) AI Mashup Challenge. Ahmad is currently an Associate Researcher with SAP Labs in France in the Business Intelligence Department and a PhD Student at Telecom ParisTech/EURECOM. His main research interests are Information Retrieval, Semantic Web and Data Mining, and his thesis concentrates on building a "Framework to Enrich Business Intelligence with Semantics."
SNARC started out as a proof of concept for an idea I had had in mind for a while: aggregating social news based on the user's interests. SNARC tackles issues in information retrieval and semantic search that I think are of growing importance.
Social media has transformed the way users consume information, becoming a source for live and up-to-date content. Keeping up with it can be an issue for most users, not to mention the difficulty of finding relevant news in social media streams in the first place.
Moreover, I felt a need to enrich the user's experience while browsing the web. Often you are browsing and want more information about something mentioned in an article; with SNARC you can get short, in-place textual information about entities found in the current web page.
For example, let’s say you were browsing the web after the killing incident in London last week. First you might come across the town name of Woolwich and you might wonder where it’s located, taking you to Wikipedia. You read that the incident was recorded on tape, so you end up on YouTube searching for that video, and so on, so that you end up with multiple tabs for Twitter, Google, Wikipedia, etc., just to get more information about that specific incident. SNARC solves this issue for you!
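The in-place enrichment Ahmad describes can be illustrated with a toy sketch: assuming entities such as "Woolwich" have already been extracted from the page, annotating them in the text is a simple substitution pass (the `[[...]]` marker syntax, entity list, and snippet are all invented for this sketch, not SNARC's actual markup):

```python
def annotate_entities(text, entities):
    """Wrap each known entity mention in [[...]] markers, simulating the
    in-place annotations a tool like SNARC attaches to a page."""
    for name in entities:
        text = text.replace(name, f"[[{name}]]")
    return text

snippet = "The incident in Woolwich was widely reported."
print(annotate_entities(snippet, ["Woolwich"]))
# The incident in [[Woolwich]] was widely reported.
```

In a real implementation each marker would carry a payload — a short description, a map location, related videos and tweets — so the reader never has to leave the page.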
SNARC is a service that accepts a web resource URL as input and does two main things: it surfaces short, in-place information about the entities it finds in the page, and it aggregates related social news based on the user's interests.
SNARC is a side project that I will continue working on. I will be improving it to ensure more accurate results. I am now working on tools to help make sense out of microposts and social streams. My goal is to have a platform that aggregates these streams based on interests and user preferences. I am also working on a Semantic Bookmarking tool leveraging AlchemyAPI's capabilities.
When: May 31-June 2
Where: Galvanize, 1062 Delaware Street, Denver, CO.
"The event will leverage the expertise and entrepreneurial spirit of those outside federal, state and local government to drive meaningful, technology-based solutions for state and local government. It demonstrates what’s possible when we all work together to strengthen our communities and will provide citizens an opportunity to do what is most quintessentially American: roll up our sleeves, get involved and make change happen. Winners will be presented at the event, at Startup Week and at the COIN Summit. Themes include Sports & Fitness, Health & Wellness, Sustainability, Veterans, Tourism & Education."
We've teamed up with Built In Denver to create a startup crawl-inspired happy hour. 2nd Thursday of the month, join us at a different Denver startup for networking, spirits and startup community building!
Ever wonder what Denver startups are like? This is your chance to take a monthly tour. Mode Set and Guiceworks are hosting the first Built In Brews, so bring your co-workers and friends and enjoy some brews on their deck.
Date: April 11
Location: Mode Set, 1782 Platte St.
More information: www.builtindenver.com/blog/join-us-built-brews-next-thursday
RSVP: Built In Brews, by Built In Denver and AlchemyAPI
For the millions of WordPress sites out there, wouldn't it be nice if there was a plugin that automatically tagged your blog posts by analyzing your writing and suggesting useful categorization tags? Well, there is, and it's called AlchemyTagger and it automatically works in the background while you're writing your blog post. It suggests tags that make your posts easier to navigate, better-ranked by search engines and easier to find for your blog readers.
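AlchemyTagger gets its suggestions from AlchemyAPI's keyword extraction, but the suggest-tags-from-your-writing idea can be illustrated with a naive frequency-based stand-in (the stopword list, function, and sample post are invented; this is not the plugin's code):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "for", "on", "that"}

def suggest_tags(post_text, n=3):
    """Suggest the most frequent non-stopword terms in a post as tags.

    A real tagger would use keyword extraction instead of raw term
    frequency, but the suggest-while-you-write flow is the same.
    """
    words = re.findall(r"[a-z]+", post_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

post = "Espresso brewing tips: grind espresso fine, tamp evenly, and time the espresso shot."
print(suggest_tags(post))  # ['espresso', 'brewing', 'tips']
```

The plugin runs the equivalent analysis in the background as you type and surfaces the results as clickable tag suggestions in the post editor.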
*AlchemyTagger has been tested against WordPress versions 2.5.1 through 3.5.1.
We recently discovered Krugmantimes.com, a site that takes the current version of the New York Times and replaces everything with Paul Krugman media. This project by software developer Vincent Woo uses AlchemyAPI’s Keyword Extraction feature to make the magic happen. Here is a short Q&A with Vincent Woo:
Foreign Policy asked me the same thing earlier, so I have an answer prepared:
"I sent a friend of mine a NYT article (this one) and told him I thought it was great article. He replied, cheekily, 'Great? But it's not by Paul Krugman!' From there it was a quick jump to 'What if Krugman wrote every article on the NYT?' and buying krugmantimes.com."
I suppose I've been a fan ever since I saw Krugman decimate some British MPs in an austerity debate, so about a year now.
Krugman definitely saw my site, and he even blogged about it! It was very satisfying, and responsible for pretty much all the traffic I got.
The Krugman Times is exactly one Node.js dyno running for free on Heroku, and it stood up admirably to hundreds of concurrent users. It makes use of a free Redis addon for heavy caching of both the nytimes.com homepage and AlchemyAPI’s endpoints. Essentially, I pull down the nytimes.com page and do some DOM analysis to pull out stories. I send each story separately to AlchemyAPI for Keyword Extraction, and replace those with appropriately styled and cased economics gibberish. I also take the time to swap out images for pictures of that handsome devil, Paul Krugman.
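The substitution step Woo describes — extract keywords per story, then swap in economics phrases — can be sketched in a few lines (the keyword and replacement lists below are hypothetical; the real site gets its keywords from AlchemyAPI and runs in Node.js, not Python):

```python
import re

# Hypothetical stand-ins: keywords an extraction pass might pull from a
# story, and "economics gibberish" phrases to substitute for them.
extracted_keywords = ["city council", "budget vote"]
gibberish = ["liquidity trap", "sticky wages"]

def krugmanize(text, keywords, replacements):
    """Replace each extracted keyword with an economics phrase,
    matching case-insensitively so headlines are caught too."""
    for kw, rep in zip(keywords, replacements):
        text = re.sub(re.escape(kw), rep, text, flags=re.IGNORECASE)
    return text

headline = "City Council delays budget vote until spring"
print(krugmanize(headline, extracted_keywords, gibberish))
# liquidity trap delays sticky wages until spring
```

The production version also re-cases and re-styles the replacements to match the surrounding story, which is what makes the swapped text read like a plausible (if absurd) front page.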
My day job is at everlane.com, where we are an online-only fashion retailer that uses the cost savings associated with not maintaining physical retail presences to put high quality clothing into the hands of consumers very affordably. I am also working on a Twitter bot for spamming people, but I don't think Twitter's gonna like that one as much.