Google is working on a new experiment within its search results, offering users a way to customize their search (by voting for or against a result). Unfortunately this behavior is only per user: it applies neither to everyone's searches nor to a network of friends / coworkers:
[…]This experiment lets you influence your search experience by adding, moving, and removing search results. When you search for the same keywords again, you’ll continue to see those changes. If you later want to revert your changes, you can undo any modifications you’ve made.[…]
It sounds quite obvious why Google doesn’t want to open its results to a voting system (security, spam, relevancy, …). I’m quite persuaded that Google thinks the best way to deliver pertinent search results is to analyze the content of a page and its network of links. Meta tags proved that we can’t trust people on the web, but meta tags were created by the webmaster (or anyone else involved in the creation of web pages), and here I’m talking about social results. A platform like Digg proved that a large user base can give reliability to a voting system, so why won’t Google trust the crowd?
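The quoted experiment is easy to picture as a per-query edit log: each vote promotes or removes a result, the edits are replayed on later searches for the same keywords, and undo simply pops the last edit. A minimal sketch (my own toy, with hypothetical names; not Google’s implementation):

```python
# Toy sketch of per-user result re-ranking with undo, as described in the
# quoted experiment. Hypothetical; not Google's actual implementation.
class PersonalizedResults:
    def __init__(self):
        # query -> ordered list of (action, url) edits made by this user
        self.overrides = {}

    def promote(self, query, url):
        self.overrides.setdefault(query, []).append(("promote", url))

    def remove(self, query, url):
        self.overrides.setdefault(query, []).append(("remove", url))

    def undo(self, query):
        # Revert the most recent edit for this query, if any.
        if self.overrides.get(query):
            self.overrides[query].pop()

    def rank(self, query, results):
        # Replay the stored edits on top of the default ranking.
        ranked = list(results)
        for action, url in self.overrides.get(query, []):
            if action == "remove" and url in ranked:
                ranked.remove(url)
            elif action == "promote" and url in ranked:
                ranked.remove(url)
                ranked.insert(0, url)
        return ranked
```

The point of the edit-log design is that nothing destructive happens to the underlying index: the user’s changes are a thin, reversible layer on top of the default ranking.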
After displaying a (quite boring but illustrative) video, the author looks at the changing world of advertising in the Web 2.0 era.
[…]The concept behind this video is that advertisers think they know consumers and know what they want but they really don’t[…]
[…]Will my online profiles and detailed behavioral tracking draw a truer picture of me? Will I like that picture? Who do I want to see it?[…]
[…] Given different contexts, you could consider yourself a different person[…]
[…]Google also followed traditional media’s rules that kept a clear separation of paid placement and useful content. This separation has defined the dual role of modern media — to serve its audience while also serving its advertisers. This so-called separation of church and state serves us well, but it’s not clear whether the distinction between content and advertising will survive the web.[…]
[…]I hope the threat of user migration is enough to keep Web 2.0 sites honest, and counteract the aggressive tendencies of advertisers.[…]
It might be hard to understand from these quotes alone (and I really encourage you to read the article), but the question is still hot as fire: most of the popular websites get their incredible valuations thanks to advertising opportunities. My concern is: what are the limits of advertising alone?
As an “Internaute” I rarely pay attention to the ads on the websites I visit, but as a web technologist I pay a lot of attention to where ads should be displayed, what type of ads should be shown, how we can get the most information from visitors, and what type of ads they will find useful…
Are you already annoyed by all the ads on social networks, or aren’t they a problem for you yet? And what do you think of the next step for Facebook and the other big social networks?
[…]As a result of recent technology licenses acquired by Microsoft, the “click to activate” restrictions are no longer mandatory. Microsoft plans to remove the activation behavior from Internet Explorer in April 2008.
Microsoft has indicated that developers will not need to make any modifications to existing websites; controls will function as they did before the activation change was made in April 2006.[…]
[…]How will this be rolled out to customers? An optional Internet Explorer Automatic Component Preview will be released for general download via the Internet Explorer Download Center in early December 2007; also a Internet Explorer Automatic Component Preview is planned for February 2008. In April 2008, the activation behavior will be permanently removed for all customers as part of the April 2008 Internet Explorer Cumulative Update.[…]
Very basically we have created a technology which can represent the world’s knowledge in a form that is clear and accessible to humans, as well as being comprehensible to computers. This is different from the knowledge stored in websites and books, which is written in natural language that is good for humans, but incomprehensible to computers.
Interesting initiative from True Knowledge. This search engine is semantic not in the way we generally think of it (~ data linked by semantic classes), but if the demo is genuine, it is a nice workaround.
To simplify what I understood so far:
- the user types a query like “When was the Eiffel Tower built”
- the query is translated from natural language into a query the system understands
- the system then aggregates an answer from a knowledge database and external feeds
- finally the user sees an answer to his question
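The steps above can be sketched as a toy pipeline (entirely hypothetical; True Knowledge’s actual system is obviously far more sophisticated): a question is matched against a template, translated into a structured (relation, entity) query, and answered from a small in-memory knowledge base standing in for their database and external feeds.

```python
import re

# Toy knowledge base standing in for the real knowledge database + feeds.
KNOWLEDGE = {
    ("construction_year", "eiffel tower"): "1889",
    ("construction_year", "empire state building"): "1931",
}

# Each template maps a natural-language pattern to a relation name.
TEMPLATES = [
    (re.compile(r"when was (?:the )?(.+?) built\??$", re.I), "construction_year"),
]

def answer(question):
    for pattern, relation in TEMPLATES:
        match = pattern.search(question.strip())
        if match:
            entity = match.group(1).lower()
            structured = (relation, entity)   # the "understandable query"
            return KNOWLEDGE.get(structured)  # lookup replaces the aggregation step
    return None  # no template matched: this is where user input would be requested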
Some screenshots here.
This type of answer can already be extracted with ask.com; however, the answer isn’t as clear as the one provided by True Knowledge (IMHO).
In the title of this post you can also read “User Input”; this comes from the fact that when the system cannot provide an answer to your query, it asks you to provide the answer (but would I really have asked the question if I already knew the answer?). I like this integration of users’ knowledge to grow their knowledge database; now they will face the question of data reliability.
So far Google isn’t dead, but we are seeing more and more “semantic search engines” in beta, and Google hasn’t yet made an alternative proposal. Maybe they are just waiting for the best one to win the race, and then they will buy the company…
In terms of advertisement, semantic understanding of natural language could bring a real shift in the pertinence of ads.
If you are interested in testing their technology, they have launched a private beta.
According to Andrew Shorten’s blog, Adobe is preparing the onAIR Tour for Europe.
It seems that Adobe is taking Europe very seriously: we are having the European User Group Tour, oriented toward the User Group Community (from November 6th to November 16th). The last night of this tour takes place in Geneva; if you are around, I hope you didn’t forget to register, as you had until November 9th to do it.
I had come across the notion of Credibility some time ago, but back then I didn’t take the time to read more about it. The notion is quite old (the buzz started in 2002). Now that I’m going a bit deeper into the art of web design, I find Credibility very interesting.
Credibility / persuasion goes further than usability: here we are not talking about “ease of use” but about “pushing to action”:
Captology is the study of computers as persuasive technologies. This includes the design, research, and analysis of interactive computing products created for the purpose of changing people’s attitudes or behaviors.
As the graphic shows, captology describes the area where computing technology and persuasion overlap.
The web is more interesting when you can build apps that easily interact with your friends and colleagues. But with the trend towards more social applications also comes a growing list of site-specific APIs that developers must learn.
Many sites, one API
Common APIs mean you have less to learn to build for multiple websites. OpenSocial is currently being developed by Google in conjunction with members of the web community. The ultimate goal is for any social website to be able to implement the APIs and host 3rd party social applications. There are many websites implementing OpenSocial, including Engage.com, Friendster, hi5, Hyves, imeem, LinkedIn, MySpace, Ning, Oracle, orkut, Plaxo, Salesforce.com, Six Apart, Tianji, Viadeo, and XING.
In order for developers to get started immediately, Orkut has opened a limited sandbox that you can use to start building apps using the OpenSocial APIs.
OpenSocial is built upon Google Gadget technology, so you can build a great, viral social app with little to no serving costs. With the Google Gadget Editor and a simple key/value API, you can build a complete social app with no server at all. Of course, you can also host your application on your own servers if you prefer. In all cases, Google’s gadget caching technology can ease your bandwidth demands should your app suddenly become a worldwide success.
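To illustrate why a simple key/value API is enough for a serverless social app, here is a toy sketch (hypothetical; not the actual OpenSocial or Gadget API) of the container-side store a gadget would talk to: the app only calls get/set, the container owns the storage, and the developer hosts nothing.

```python
# Hypothetical sketch of container-side key/value persistence.
# The gadget only sees get/set; the hosting container persists the data,
# which is why the app itself can run with no server at all.
class GadgetStateStore:
    def __init__(self):
        self._data = {}  # (user_id, key) -> value

    def set(self, user_id, key, value):
        self._data[(user_id, key)] = value

    def get(self, user_id, key, default=None):
        return self._data.get((user_id, key), default)


store = GadgetStateStore()
store.set("alice", "high_score", "42")
```

In the real platform this role is played by the container (orkut, MySpace, …) and by Google’s gadget caching, but the contract exposed to the app is about this small.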