Methodology

The sources our team used span a diverse range of media, from economic critiques to cultural histories, all of which contribute to our critique of the film industry’s rising presence in Atlanta. We found this vital material through UCLA Library’s online access, which made articles easy to retrieve and straightforward to cite with Zotero. Within the library’s search engine, filters such as keywords, language, and publication date helped us sort through the journals and articles available in online databases. We could target specific information by searching with Boolean operators, for example “gentrification AND film AND Atlanta”, where “AND” and “OR” control which criteria must appear in the results. The UCLA Library website also made it possible to limit results to peer-reviewed articles, which was important for guaranteeing the accuracy and reliability of the information we reference in our project findings.

Because we spent considerable time characterizing Atlanta and the film industry, we also used web articles and culture magazines to paint a picture of both. For example, the web allowed us to identify which movies were made in Atlanta and to discuss the films themselves. Magazine articles cover commercial success and behind-the-scenes filmmaking, which gave us a better sense of why Georgia wanted the film industry to become a significant part of the state’s economic infrastructure. We also used culture magazines like Ebony in our background section to characterize Atlanta as a city. It was important to use sources that describe Atlanta and the film industry on their own terms so we could better understand how they interact and why things changed the way they did.

In addition to finding sources in literature, we also searched for datasets that could accompany our research and provide deeper insight into our investigation. We started our search for these datasets by exploring data published by government organizations. Due to our project’s focus on US city economies and demographics, we believed that government data would offer the most extensive and reliable source of information. By integrating both literature sources and data into our research, we aimed to develop a comprehensive understanding of the many factors surrounding the introduction of the film industry in Atlanta and the gentrification brought about by it.

Once we had gathered all relevant literature and data on gentrification caused by the film industry in Atlanta, we began processing this information. For the literature, we read through our sources and highlighted information useful to our research, then grouped it by topic so it would be easy to draw on when writing our website narrative. To process the collected data, we carried out data cleaning: using R, Python, and Excel, we merged information into single datasets, removed errors and unwanted records, used regular expressions to extract specific fields, and performed mathematical operations on the data to create new variables. All of these steps transformed the data into a state where it could be easily visualized.
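The cleaning steps described above can be sketched in Python with pandas. This is a minimal illustration, not our actual pipeline: the column names and values here are invented placeholders, chosen only to show deduplication, regular-expression extraction, and the creation of a derived variable.

```python
import pandas as pd

# Illustrative raw data; the columns and values are hypothetical,
# standing in for the kinds of fields our real datasets contained.
raw = pd.DataFrame({
    "tract": ["Tract 0012.01", "Tract 0013.02", "Tract 0012.01"],
    "median_rent": ["$1,250", "$980", "$1,250"],
})

# Remove duplicate rows (one kind of "unwanted information").
clean = raw.drop_duplicates().copy()

# Use a regular expression to extract the numeric tract identifier.
clean["tract_id"] = clean["tract"].str.extract(r"(\d{4}\.\d{2})")

# Strip formatting characters and convert rent to a numeric type.
clean["median_rent"] = (
    clean["median_rent"].str.replace(r"[$,]", "", regex=True).astype(float)
)

# Carry out a mathematical procedure to create a new variable.
clean["annual_rent"] = clean["median_rent"] * 12
print(clean)
```

Once in this tidy, typed form, a dataset can be passed directly to a plotting library.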

We also utilized web scraping in order to gather a large amount of data from separate sources. Web scraping is a technique for programmatically downloading web pages and parsing their HTML to extract the information they contain. Using web scraping, our group collected data from the internet and transformed it into well-organized tables ready to be analyzed.
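The parsing step can be sketched with BeautifulSoup, the Python package we used. This is a simplified example: in practice the HTML came from live web pages, whereas here we parse an inline snippet (with illustrative film titles) so the example is self-contained.

```python
from bs4 import BeautifulSoup

# Inline HTML standing in for a page listing films shot in Atlanta;
# the table contents here are illustrative.
html = """
<table id="filmed-in-atlanta">
  <tr><td>Baby Driver</td><td>2017</td></tr>
  <tr><td>Black Panther</td><td>2018</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# Walk the table rows and convert each one into a structured record.
rows = []
for tr in soup.find_all("tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    rows.append({"title": cells[0], "year": int(cells[1])})

print(rows)
```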

After compiling and processing our sources, we began determining how to present our data. The tools we utilized to visualize it were the ggplot2 package in R, the Seaborn package in Python, and Tableau. We carefully chose a tool for each dataset based on the technical details of the data and the message we wanted it to convey. We selected R and Python for their capability to manage large, complex datasets and produce sophisticated graphs, and used these two languages to create the bulk of our graphical visualizations. In addition, we used Tableau to visualize one of our datasets on a map of Atlanta; its interactive features were integral for engaging users and allowing them to explore graphs and maps directly. Through these strategic decisions, we aimed to create an informative, user-friendly, and visually appealing website.
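A Seaborn chart of the kind described above can be sketched as follows. The figures plotted here are placeholder values, not our project’s data, and the chart title and file name are likewise illustrative.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import pandas as pd
import seaborn as sns

# Hypothetical yearly counts, standing in for one of our real datasets.
df = pd.DataFrame({
    "year": [2016, 2017, 2018],
    "productions": [245, 320, 455],
})

ax = sns.barplot(data=df, x="year", y="productions", color="orange")
ax.set_title("Film productions in Georgia (illustrative data)")
ax.set_ylabel("Number of productions")
ax.figure.savefig("productions.png")
```

The same plot could be produced in R with ggplot2; we used whichever language already held the cleaned dataset.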

After thorough research, feedback, and numerous discussions, our team consolidated our analysis and data visualizations on the Global Data Dive WordPress website. To enhance the flow and interactivity of our content, we grouped our information into separate sections and introduced sections like “Historical Background,” which incorporates peer-reviewed papers to provide a comprehensive overarching narrative. We implemented dropdown menus to streamline the site’s layout, making it easier for users to navigate and engage with the content. Given our project’s focus on Georgia, we chose orange as the primary color in reference to the emblematic Georgia peach; its vibrancy also captures attention, in line with our goal of creating an engaging user experience.

Click below to continue reading our Data Critique section on Atlanta’s “U.S. Census Data.”
