The Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C) have decided to go their separate ways. There is now a risk of a power struggle over the HTML standard that could cause it to fork. Everybody who develops websites knows that energy, and therefore money, is wasted accommodating the different ways browsers implement the features of different HTML standards. Forking the HTML standard could lead to two parallel webs. Even if it did not, it would certainly be more complex and expensive to implement multiple versions of two living standards rather than one. Two parallel standards would inevitably stifle innovation, with energy wasted on duplicated effort.
What enables the really useful ‘world wide’ part of the World Wide Web, as with any other sophisticated undertaking, is standardisation. Standardisation is what makes complex undertakings requiring many specialists possible and affordable. Microsoft, Apple, and other influential players in a position to do so have undermined standardisation efforts, using their market dominance to their own advantage. Failure to regulate adherence to standards is what allows them to place petty commercial self-interest above progress and the greater good. Some things can be left to choice, but others are too important and must be regulated. Adherence to a single HTML standard should be regulated worldwide to ensure progress.
The inventor of the World Wide Web, Tim Berners-Lee, says that although data about us is held and used by others, we should have automated access to it so that we can make more use of it. He thinks this will spawn useful new services. Sir Tim says that standardised data formats are required to simplify the use of that data by these new services.
Some entities may be reluctant to provide access to this data because their businesses are built on it or they gain commercial advantage through it. It is not just the new breed of IT companies like Facebook that hold this data: government, supermarkets, leisure groups, hoteliers, travel providers and many others hold our data, and access to it is not easy.
The UK Data Protection Act allows for data access, but does not mandate automated access. Inevitably we will need legal rights of automated access to our data as well as technical solutions to exploit it. This Guardian article reports on Sir Tim’s thoughts and links to some audio of him.
AV-Comparatives have recently published two reports: “On-demand Detection of Malicious Software” and “Whole Product Dynamic Real World Protection Test”.
The March 2012 detection report shows that the three systems finding the most malicious software in order of success are: GData, Avira, and Kaspersky.
It also shows that those three in order of least false positives generated were: Kaspersky, GData, and Avira.
The whole product report for March 2012 shows the top three systems as: Bitdefender, GData, and Kaspersky.
Naturally the relative effectiveness of these systems varies over time, but effectiveness is built on effort, not luck. More weight should obviously be given to the most recent results, but consistently good results are also important.
I have used GData and Kaspersky so far in 2012 and can recommend them both. GData has the heaviest resource usage, but according to the tests it puts that to good use in finding the most malicious software. Kaspersky has a very nice user interface that works well for those interested in looking deeper into their system, yet is fairly unobtrusive for those who aren’t.
Google are improving their game. They know a good deal about us, about many of the entities we commonly deal with, and about the associations between them. They can therefore infer more useful search results from the usual thin stream of information we supply in a search, because the context of a search is much greater than what we type in. This should make for more accurate search results. However, it could also tend to push the more interesting results down the list, reducing the chance of serendipitous discoveries. Perhaps they could provide a serendipity slider. Anyway, semantic search also opens up a new frontier for SEO and dilutes a little the heavily abused priority attached to links. Good thing too.
The risk of interrupted access to data grows with the amount of computing hardware and software it must pass through. So if the only or main data store and processing is provided by internet services, the risk to access is much higher. The same can be said of data security. Indeed, by UK law some data must be held under specific constraints that naturally militate against the main principle of the cloud: delegating IT management, reducing IT to a set of benefits and business-level decisions. Nonetheless, cloud solutions do present opportunities to do things that were difficult or impossible before, especially where resource requirements must scale quickly or fluctuate widely.
All this raises an important question: Are cloud services a sound choice if more proximate services are an option?
I think the cases where cloud services are the only option are few. Nor do I believe that bulk purchasing of equipment makes a substantial cost reduction possible, especially when the costs of running a large organisation are taken into consideration. Locating services in economies with lower costs definitely has price advantages, but at the cost of access time and of added risk to access and security. I have noticed that where worthwhile cost reductions have been claimed, it is often at the expense of IT jobs. In that case either the cloud provider must be adding a similar number of people, and hence costs that will eventually return in service prices, or the services offered cannot be as well matched to the specific needs of the business. Such standardisation of IT, and hence of business processes, reduces the opportunity for business differentiation.
In the end there is no generic answer to the question; each case must be weighed on its own merits.