Digital cultures are commonly well established and can be researched through search engines, where keyword queries lead to forums, websites, blogs, and other digital media. I do not believe there will ever be a way to catalogue every piece of information into a digital culture efficiently or effectively. Some pieces of information do not belong to any one culture, or offer no clue as to what that culture is without context: "how to gather useful and meaningful information from the Web has become challenging to all users because of the explosion in the amount of Web information" (Tao et al. 2010, p. 235). Assigning a culture to such information is an audacious task, requiring backtracking and contextual analysis whenever the information given is insufficient or incorrect, and it is compounded by "…problems of information mismatching and overloading" (p. 235).
Most established cultural communities have frequently used keywords on their forums or other digital places of residence. Sites can already be identified by keywords in their articles and forums, while others use hashtags and keywords to categorise, sort, and find related blogs. Some sites offer reverse searches of text and imagery, returning result lists sorted by quality, date, file type, and so on, and revealing where the original content was posted and where else it has been shared. Search engines can return millions of results for keywords and phrases used to find a culture: for example, "gaming" brought up 651 million results, and adding more keywords yields fewer, more precise results; adding "MMORPG" and "community" to my Google search lowered the count to 9.14 million. Sites already exploit keyword searches for functions such as online shopping, sorting items into categories and ranking results by "best match", that is, the results that match the most keywords.

The internet and its information are so vast, and so constantly expanding and changing, that it is almost impossible to visualise and examine global digital cultures efficiently, at least in a way we can comprehend; a computer can manage the scale, but the raw information would not be very accessible or useful to us. Yet everything can be found again, provided it has not been completely erased: metadata can reveal where, when, how often, and by how many viewers a site has been accessed, IP addresses can be tracked, and the information itself can be recovered through search engines.
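The "best match" ranking described above can be sketched in a few lines of Python. This is an illustrative toy, not how any real search engine or shopping site works: each document is simply scored by how many of the query's distinct keywords it contains, and results are sorted by that score. The function and sample posts are hypothetical examples of my own.

```python
def best_match(documents, keywords):
    """Rank documents by how many of the given keywords each contains.

    A deliberately naive sketch of keyword-count ranking; real search
    engines use far more sophisticated relevance scoring.
    """
    def score(text):
        words = text.lower().split()
        # Count each distinct keyword at most once per document.
        return sum(1 for kw in keywords if kw.lower() in words)

    return sorted(documents, key=score, reverse=True)


# Hypothetical forum posts, ranked against the example search terms
# mentioned above ("gaming", "MMORPG", "community").
posts = [
    "Our MMORPG community hosts weekly gaming events",
    "Gaming news roundup for the week",
    "Recipes for a quick dinner",
]
ranked = best_match(posts, ["gaming", "MMORPG", "community"])
```

Here the first post matches all three keywords, the second only one, and the third none, so the ranking mirrors the narrowing effect of adding more search terms.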
Tao, X, Li, Y & Zhong, N 2010, 'A knowledge-based model using ontologies for personalized web information gathering', Web Intelligence & Agent Systems, vol. 8, no. 3, pp. 235–254, Applied Science & Technology Source, EBSCOhost, viewed 23 August 2015.