Four guidelines for conquering new media

By Matthew Gillis

After a semester of exploring new media, I’ve decided to reflect on the digital experiences I’ve had and determine which lessons are worth taking with me as I continue to venture into the ever-changing world of technology. I’ve compiled a list of the four biggest lessons I’ve learned throughout my digital adventures, lessons anyone should follow when conquering new media.

1. Create your own new media. As we’ve discovered, we’re all submerged in a world dominated by technology (whether we like it or not), and we depend on it to function in society. Technology structures the way we maintain communication, from cell phones to Facebook, and the way we educate ourselves, from Google searches to social bookmarking. But the biggest lesson I’ve learned from our use of technology is that we have to learn how it works, and the best way to do so is by creating our own new media. In the introduction to Douglas Rushkoff’s book Program or Be Programmed: Ten Commands for a Digital Age, he stresses that to maintain control in a world dominated by digital technologies, one must understand how they work. Learn HTML (a minimal sketch follows this list), start a blog, or create a Twitter account. Don’t you want control over your reality?

2. Embrace technology. The benefits of the many new media technologies I’ve written about throughout the semester are seemingly endless. Specifically, social bookmarking and aggregation tools like Pinterest, Google Reader, and Delicious make gathering information easier and faster than ever before and supplement traditional media forms, such as newspapers and encyclopedias. Take advantage of our culture of shared thinking by connecting with others on LambdaMoo, or by sharing on Twitter the “random, fleeting observations” that Julian Dibbell describes in “Future of Social Media: Is a Tweet the New Size of a Thought.” Without a doubt, I’ve learned that there’s no harm in trying these technologies, which aim to make our lives easier and more comfortable. You’re only doing yourself a disservice if you don’t.

3. New media isn’t perfect. Danah Boyd writes in her article “Incantations for Muggles: The Role of Ubiquitous Web 2.0 Technologies for Everyday Life,” “As you build technologies that allow the magic of everyday people to manifest, I ask you to consider the good, the bad, and the ugly.” While you should take advantage of the benefits of new media, don’t forget that technology is not perfect, as I’m sure many of you have experienced, to your frustration. Your iPhone could break at any minute, your Wi-Fi could go down without warning, and your Facebook account could be hacked. While new technologies usually generate utopian hopes for their users, as Fred Turner describes in “How Digital Technology Found Utopian Ideology,” it’s important to be aware that technology isn’t actually foolproof. By the same token, it’s important to question what you see when using these new technologies. In an age where everyone is a publisher online, on sites such as Wikipedia, some information is bound to be wrong. We can’t assume that everything we read is correct, or we may all become a culture of misinformed people.

4. Look out for the future. I know it may seem impossible to predict what’s coming next in new media, but trying is an important part of deciding how to interact with the technology around us. As I wrote in my last post, I believe technology is headed toward a reduction of information, with imagery favored over text, as seen in technologies such as Pinterest and Instagram. It’s important to prepare for new technologies, by educating ourselves about imagery as a communication form, for example, especially because such new media may end up being part of our everyday lives.
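
And if “learn HTML” in the first lesson sounds daunting, it needn’t be. Here is a minimal sketch of a complete web page; it’s an illustration of my own, not something Rushkoff prescribes. Save it in a plain-text file ending in .html, open it in any browser, and you’ve created (and can now tinker with) your own new media:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>My first page</title>
      </head>
      <body>
        <h1>Hello, new media</h1>
        <p>A first step toward programming rather than being programmed.</p>
        <!-- Edit this text, save, and reload the page to see your change. -->
      </body>
    </html>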

Now go forth and dive into our media-filled world.

A photo-filled future

By Matthew Gillis

Predicting the future of technology is like asking someone to fly to the moon; it’s nearly impossible, unless, of course, you’ve got genius connections. How can one predict something that changes from day to day?

If you’ve been keeping up with this blog, I think you understand that much of our daily lives depend upon technology. But more specifically, much of that technology involves the Internet, and I believe that future technology will too. Yes, I know what you’re thinking. That’s not a very groundbreaking prediction. However, I believe that even more aspects of our lives will be experienced online, from high school education to religious ceremonies.

But, surprisingly, it isn’t the idea of spending our lives in the virtual reality of the Web that scares me. It’s the continual reduction of the information we use to communicate online that does. In my lifetime, I’ve seen the way the Internet and the tools built on it shape how we communicate and maintain relationships. Like many of my peers, I was introduced to social networking through the ever-popular MySpace, where I had complete control over my profile design, bio, interests, music, and heroes, among other categories. Once MySpace wasn’t “cool” anymore, I migrated to Facebook, where control over my profile’s design was lost. Similarly, expressing interests and music on Facebook is limited to a single picture of the activity or musician, over which you have no control. Even between these two social networks, I’ve seen how communicating your public image online has come to depend on less and less information, from lengthy bios then to a row of “interest” picture icons now.

It seems that visuals, such as Facebook’s icons, are replacing more extensive forms of communication. Just look at an iPhone: rows of app icons replace text-based descriptions.

I predict that the future of technology will depend on visuals rather than text. We may even be in the midst of this visual-focused future right now; just look at the popularity of photo-sharing technologies such as Pinterest and Instagram. While I enjoy photography as a form of expression, I don’t think pictures can replace text-based communication. Just as movies haven’t replaced books, photos won’t replace text. (I mean, who actually enjoys the movie version of a book over the book itself?)

Photo By Matthew T. Gillis/Instagram.

But it’s more than just my personal opinion that leads me to this concern. Our current educational system is still based on text as the primary form of communication. Until schools begin offering courses on how to interpret imagery as a form of communication, and how to produce such images appropriately, the future of image-based technology is quite frightening.

I’ve said it before, and I’ll say it again: Just because you can take pictures, doesn’t mean you should.

______________________________________________________

Find me on Instagram: mattgillis

Lambda-what?

By Matthew Gillis

I’m assuming that most of you have never heard of LambdaMoo, and I don’t think you’ll ever hear of it again after this. LambdaMoo is described as a “text-based virtual community.” For those of you who are still confused (just as I was), imagine creating a character and his or her virtual world in The Sims and then having all of your creative work converted into text format and pasted into a chat room. Welcome to LambdaMoo.

From the minute I logged into LambdaMoo, I rejected the idea of a virtual community that you cannot see, only read. After spending several minutes trying to understand the site’s new lingo, I struggled to navigate the imaginative hallways of this virtual world, all of which exist only in writing. It felt like trying to find my way while blindfolded.
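
To give you a feel for that blindfolded navigation, here is an invented sketch of a LambdaMoo exchange. The room, the dialogue, and the player “Oldtimer” are made up for illustration (the “>” just marks what I type), but look and say are basic MOO commands, and everything the world “shows” you arrives as lines of text like these:

    > look
    The Entrance Hall
    A long hallway described entirely in words. Exits lead north and south.
    Oldtimer is here.

    > say Hello? Is anyone around?
    You say, "Hello? Is anyone around?"
    Oldtimer says, "Welcome. Not many of us visit anymore."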

In Program or Be Programmed: Ten Commands for a Digital Age, Douglas Rushkoff says, “The bias of our interactions in digital media shifts back toward the nonfiction on which we all depend to make sense of our world…” (112). I not only struggled with the technical aspects of LambdaMoo, but I also had a hard time making sense of a virtual reality with little connection (especially visually) to what I experience in daily life. Could I not make sense of a world crafted out of nonfictional text?

Despite my initial struggles, I finally got the hang of the site, enough to have a conversation with a veteran user named Kephalos. He said that he had begun using LambdaMoo at the height of its popularity, when he was an undergraduate student about 18 years ago. Kephalos explained the dynamics of the site and how it is now a ghost town, where mostly dedicated users return to mingle with old friends they met through the virtual community. It still amazes me that social networking sites, and even text-based virtual reality sites like LambdaMoo, have the power to create and sustain relationships. Rushkoff explains, “…The invention of technology gives us the ability to program: to create self-sustaining information systems, or virtual life” (144). Technology is another platform on which one can build and foster friendships.

But the thing that surprised me most about the site is the way my initial impression of it changed. After unintentionally spending close to an hour conversing with various users on LambdaMoo, I began to understand its draw. LambdaMoo functions through the use of one’s imagination. Interacting in the virtual community is like reading a book that you have the ability to write yourself. If I can make sense of a world crafted out of nonfictional text in books, I can surely understand a “novel” that I draft on my own using LambdaMoo.

However, my conversation with Kephalos taught me the true function of sites like LambdaMoo: they keep people connected. “If living in the digital age teaches us anything, it is that we are all in this together. Perhaps more so than ever” (Rushkoff 150).

What’s so great about Twitter?

Twitter in Plain English from leelefever on Vimeo.

By Matthew Gillis

Since I began “tweeting” in May 2011, I’ve grown to like Twitter (despite much uncertainty). My first hesitation stemmed from the content that many people post on the site. I’m not a fan of hearing about how much laundry you have on Facebook, so why would I want to see it on Twitter?

But I think that’s what makes Twitter so intriguing. In “Future of Social Media: Is a Tweet the New Size of a Thought,” Julian Dibbell describes Twitter’s format as a type of microblogging, in which people publish steady streams of one-line updates.

A successful blog is one that incorporates one’s daily thoughts, struggles, and triumphs, and I see that style at work on Twitter. Twitter users approach the site as a diary, publishing honest and vulnerable content that doesn’t seek any reward, in contrast with many users’ goals on Facebook, where one updates his or her status in hopes of accumulating a high number of “likes.” I find Twitter’s sincerity refreshing.

Because Twitter lacks a feature like Facebook’s “like” button, users aren’t reassured that followers are reading their content. I find myself publishing random information about my day that I don’t even bother telling my closest friends, not knowing who (if anyone) is reading it.

Twitter favors anonymity, which I believe also encourages the truthfulness (however bleak or brutal) of users’ updates. Dibbell quotes Farhad Manjoo, who sees uncertainty in Twitter’s future: “I think there’s a question whether Twitter is going to be the thing everybody does…” I think that being able to use a fake username or profile image undercuts Twitter’s competitive advantage in the world of social networking and deters many people from using the site.

However, I believe Twitter’s advantage lies in users’ content being the “random, fleeting observations” Dibbell describes. Users can publish what they feel at the exact moment they feel it; Twitter is a real-time diary. In an age of shared thinking, Twitter capitalizes on our no longer being alone in our own thoughts and allows people to form strong connections with those they “follow.”

Twitter’s success rests on one basic, human feeling: there’s comfort in knowing that you’re not alone.

_______________________________________________________

Follow me on Twitter: @MatthewTGillis

Question what you read

By Matthew Gillis

Wikipedia has capitalized on today’s societal idea of shared thinking. What could be more valuable than an encyclopedia created and maintained by the ones experiencing and learning the very things they’re writing about?

After signing up for a Wikipedia account, I began looking for photography-related articles that I could knowledgeably contribute to. However, the surprising part of the Wikipedia experience is that you don’t have to know about the articles you want to edit. As Stacy Schiff explains in her article “Know It All,” Wikipedia doesn’t favor the university professor over the high school cheerleader. Like many aspects of the web, anyone can contribute.

However, I found many drawbacks to this collective contribution. As I read the photography page, I discovered countless grammatical errors, several biased entries, and numerous inaccuracies. Wanting to contribute to Wikipedia as a member of our shared-thinking culture, I corrected several of the grammatical errors, added information about amateur photography, and removed biased claims about commercial photography.

Before learning more about how a wiki site like Wikipedia works, I had taken what I read on the site as truthful, despite many of my teachers’ warnings. I find that to be true of much written and published information: we take what we see at face value, and we believe what we read. I mean, honestly, how many of you have ever questioned the veracity of a newspaper article? But what makes something in writing, whether online or in print, seem worthy of our trust?

Photo By Matthew Gillis/Isla Mujeres, Mexico. Original.

Photo By Matthew Gillis/Isla Mujeres, Mexico. Edited.

I never believed that the information I contributed was false, so I was surprised to find several of my contributions removed just hours later. While I see the practicality of an “oversight” function, as Schiff describes it, to eliminate vandals and inaccurate information, I was offended to see my work dismissed. Similarly, I am insulted when others edit my photographs to their liking. We grow attached to the works we create, whether in writing or through photography, and we tend to defend them as if they were the best, most truthful version.

I believe this is why we trust what we read. Once an idea is put into writing or captured on camera, we become attached to its validity, because we place high value on the time we took to create it. And I think we extend that valuing of effort to other people’s work as well.

Schiff suggests that Wikipedia’s “breadth, efficiency, and accessibility” are the site’s defining advantages over traditional encyclopedias. However, readers’ false sense of trust in Wikipedia’s content (created by anyone, including Joe Shmoe) may be creating a culture of misinformed people.

Research trials and tribulations

By Matthew Gillis

After my post last week, I started thinking more about Facebook and its implications for society.

I went straight to Google (obviously). I searched “effects of social networking on society” and glanced at the first sentence or two of each article, returning to Google when necessary.

Quickly, I selected “The Health Effects of Social Networking” by Robert Mackey from The New York Times, not because I read the entire article and found it useful (I didn’t), but because the first two sentences seemed promising and it comes from a credible source. After actually reading the article, I found that it discusses the potentially harmful effects on the brain of the constant stimulation that social networking sites provide.

Immediately I thought of the “skimming activity” Nicholas Carr describes in his article “Is Google Making Us Stupid?”: the process of hopping from one online source to another without returning to the previous ones, a product of a decreased attention span. That’s just what I had done. Was my mind no longer capable of concentrating long enough to read an article online?

Next, I turned to Google Scholar. Even though I had only spent about five minutes searching, I became extremely frustrated, because I was finding no articles that pertained to social networking.

Finally, searching “Facebook,” I came across an article, “The Benefits of Facebook ‘Friends:’ Social Capital and College Students’ Use of Online Social Network Sites,” which discusses the correlation between Facebook and psychological well-being. For research purposes, I find this article useful in providing an opposing argument for The New York Times article.

Lastly, I turned to Loyola’s library resources for research help. Again, I became frustrated with how difficult it was (in comparison to Google) to find relevant articles. It took about ten minutes of searching “effects of Facebook on society” to find a useful article, by Zizi Papacharissi, that offers a comparative analysis of several social networks and the way privacy shapes self-presentation. It provides an interesting outlook on the way social networking sites shape our identities in society.

Throughout my research, I found myself searching for articles that reflected my current attitudes toward social networks, which confirms Douglas Rushkoff’s idea in Program or Be Programmed: Ten Commands for a Digital Age: “…we overvalue our own opinions on issues about which we are ill informed, and undervalue those who are telling us things that are actually more complex than they look on the surface” (66). Not only was I struggling with a reduced attention span, but I was also avoiding learning anything new about the subject.

However, I did learn something.

Steve Kolowich’s article “What Students Don’t Know” is accurate in saying that students overuse Google and don’t know how to properly use scholarly search engines. I’m not claiming to know what has caused this lack of concentration and avoidance of effort, but until scholarly search engines are as efficient and easy to use as Google, I don’t see myself or other students switching loyalties.

When it comes to getting knowledge in the age of Google, we’re impatient and lazy.

Facebook, please be perfect

By Matthew Gillis

A perfect Facebook would be one that would not have let me post those angry “I-hate-the-world” statuses when I was 16. It would be a site that would have automatically deleted those God-awful pictures of me sleeping in public (with my mouth wide open, of course) the minute they were uploaded.

But I think a utopian Facebook would also include unlimited privacy settings, unflawed access to communication, and, of course, Words With Friends without advertisements.

After reading Fred Turner’s “How Digital Technology Found Utopian Ideology,” I found myself agreeing with his idea that new technologies always generate utopian hopes for their users. The reason I stopped using MySpace (with much reluctance) and started using Facebook was the belief that Facebook could solve the problems MySpace had, giving me, and every other pubescent eighth grader, hope for a utopian social network.

Merriam-Webster’s first definition of utopia reads, “an imaginary and indefinitely remote place.”

Imaginary. As I now know, a perfect Facebook is imaginary.

The availability of digital cameras, where taking a photo is as easy as clicking a button, has given everyone and his or her grandmother the ability to be a “photographer,” and now Facebook has given everyone the opportunity to be a publisher. But just because we can take pictures and publish them on this dystopian site doesn’t mean we should. I mean, we all have that Facebook friend we keep “friended” solely for the drunken Friday night pictures we love to stalk.

I know that a Facebook free of flaws would have the potential to connect people all over the world without negative consequence. But I also know that the dystopian Facebook we have today has the potential to create another technopanic, as Alice E. Marwick explains in “To Catch a Predator? The MySpace Moral Panic,” due to its ability to ruin relationships, careers, and our chances of ever having a clean slate, with just one click of a camera.

Just remember that until Facebook becomes a perfect place where the pictures of you sleeping with your mouth open or the ones of you passed out with “stupid” Sharpie-d across your forehead automatically delete, your utopian hopes are far from reality.

The age of digital dependence

By Matthew Gillis

I think it’s safe to say that all of us have experienced a time when the technology we take for granted suddenly stops working. Luckily enough, it’s happened to me more than once in the past week.

The exact moment you discover that a technology is broken fills with emotion: shock that something you’re so dependent on is even capable of breaking, panic over how to fix it, and anxiety over being yelled at by your mom for “not appreciating anything you get.”

On Friday, my digital camera stopped working, and yesterday, my Internet wouldn’t load Delicious’ webpage (ironic, since that was the only webpage I needed to access).

The range of emotions that came with this frustration got me thinking about my dependence on technology. I was feeling all of these things at once because I knew I needed these tools but had no knowledge of how to fix them.

In an age where everything in our lives exists through technology or is online, how do we face the inevitable glitches? Are we putting everything important to us at risk in the hands of a technology that is bound to have flaws?

Today, after my Internet finally let me access Delicious, I perused the site and found myself particularly fond of its overall purpose. In essence, the site acts as a digital bookmark for everything and anything you find interesting on the web.

After bookmarking several websites, blogs, and articles and creating “stacks” that pertain to my interests, I was intrigued by the way the site puts everything I select in one designated place. I could put everything I found on the web on my own single profile. Everything.

From variability, in which each user can customize his or her individual stacks, to hypermedia, which allows users to connect media to their profile, Delicious is the embodiment of the principles of new media as described by Lev Manovich in “The Language of New Media.”

I can’t help but wonder if Delicious and similar sites, like Pinterest, are slowly replacing traditional pen and paper bookmarking. While I find it to be a very beneficial tool, I feel a sense of reluctance to put everything of importance on a technology I know nothing about.

In summarizing my concerns, I turn to Danah Boyd’s article “Incantations for Muggles: The Role of Ubiquitous Web 2.0 Technologies for Everyday Life,” in which she writes, “As you build technologies that allow the magic of everyday people to manifest, I ask you to consider the good, the bad, and the ugly.”

I can only imagine the day when my digital bookmarking tool, which could eventually store not only my favorite sites but my entire reality, stops working.

I doubt then that the worst of my problems would be my mother’s reaction.

_______________________________________________________

If you’re interested in other media and technology-related blogs, which I follow using Google Reader, check out: Technology in the Arts, CNN Tech, New York Times: Technology, Technology Review, and The PhotoArgus.

Technology: an uncontrollable monster?

By Matthew Gillis

Overwhelmed. This is the only emotion I can distinguish after reading the introduction to Douglas Rushkoff’s Program or Be Programmed: Ten Commands for a Digital Age. I keep coming back to the same question: do the digital technologies we willingly choose to be a part of actually have control over our individual realities?

To put this uncontrollable digital monster into a more manageable context, I’ll use photography. When I take a picture, the lens I’m looking through presents a portion of the world. I, as the photographer, choose which aspects of the scene appear in the frame of the image. These deliberate decisions reflect my position as the producer of each image and my role in constructing a new or altered reality for the viewer of these photographs. I have the ability to omit or highlight aspects of the scene.

As a photographer, I am responsible for understanding both how to use the camera and how to produce my desired image, because I am shaping a potential viewer’s idea of reality. I agree with Rushkoff, who similarly believes we are responsible for knowing how to use technology and for being able to program, or create, it. Photographers create “reality” through images, while programmers create technology that can think and operate, controlling our realities and, ultimately, us (Rushkoff 21).

If we choose to ignore that photos aren’t necessarily reflections of true reality, we choose to be falsely influenced. Doesn’t this seem similar to Rushkoff’s idea that failing to understand how the technologies that control our realities are programmed is choosing to be programmed?

Going back to my original question, I have come to an answer: yes.

I think it’s our duty as consumers of technology to be able to discuss how it functions. If it’s our job as a society to shape each technology’s use and meaning, according to “What’s New About New Media?,” then isn’t it equally our responsibility to understand how to use it and how to create it?