Human-Computer Interaction

This is a discussion board for course CS 07.430 Human-Computer Interaction



Human-Computer Interaction — 41 Comments

  1. I decided to look at the Oculus Rift, seeing as how I might want to invest in one sometime in the future; prior to this assignment, though, I didn’t really know much about the product. Wareable.com had an article on pretty much every aspect of the Oculus Rift and I thought it was fascinating. I’ve been wondering for a while exactly how the 3D aspect works, and this article gave a nice and simple way of thinking about it: “it involve spitting out two near-square video feeds to the same screen – think playing a vertically-split-screen two-player game. The clever bit is that each feed comes from a slightly different angle, so that the player’s brain is tricked into thinking that two 2D images are one 3D one.”
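
    That stereo trick is easy to make concrete: project the same scene point once per eye, with the two virtual cameras offset by the distance between the eyes, and the small horizontal difference (disparity) between the two images is what the brain reads as depth. The sketch below only illustrates that idea; the interpupillary distance and focal length are assumed values, and this is not the Rift SDK.

    ```python
    # Toy sketch of the stereo idea in the quote: the same 3D point is projected
    # twice, once per eye, with the cameras offset by the interpupillary distance.
    # The slight horizontal disparity between the two 2D images is what the brain
    # fuses into depth. All numbers are illustrative, not Oculus specifics.

    IPD = 0.064           # assumed interpupillary distance in meters
    FOCAL_LENGTH = 800.0  # assumed focal length in pixels

    def project(point, eye_x):
        """Pinhole-project a 3D point (x, y, z in meters, z > 0) for a camera at (eye_x, 0, 0)."""
        x, y, z = point
        return FOCAL_LENGTH * (x - eye_x) / z, FOCAL_LENGTH * y / z

    def stereo_pair(point):
        """Return the left-eye and right-eye image coordinates of one scene point."""
        return project(point, -IPD / 2), project(point, +IPD / 2)

    near = stereo_pair((0.0, 0.0, 0.5))   # half a meter away: large disparity
    far = stereo_pair((0.0, 0.0, 10.0))   # ten meters away: almost none
    print("near disparity:", near[0][0] - near[1][0], "pixels")
    print("far disparity:", far[0][0] - far[1][0], "pixels")
    ```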

    Obviously the biggest pro for this kind of technology is how awesome it is. It can take video games to the next level and give them a more realistic feel. I think the biggest con, at first, is how expensive it is. The price will obviously go down over time, but for the first few years not everyone will be able to afford this kind of product. Developing standards for menus and controls will also be very interesting, because users will have nothing familiar to compare them to; they will have to learn all new skills in order to use this. I think VR can have a very big impact on the future of video games and is a step in the right direction for the industry.

    Nield, David. “How Oculus Rift Works: Everything You Need to Know about the VR Sensation.” Wareable. N.p., 29 Mar. 2016. Web. 24 Apr. 2016.

    • I was watching a streamer on Twitch play some games with the HTC Vive and it looked like a lot of fun. He mentioned that it was much more immersive than it seemed from a viewer’s perspective; he also mentioned getting motion sick after using it for a few hours.

    • I wrote my comment before reading any of the previous comments and see the first post is about the Oculus Rift. I did mine on the HTC Vive and find that VR is becoming way more relevant and starting to pick up a lot of speed. The absolute immersion of a 3D space in virtual reality, enough to bring on fear of heights and vertigo, is pretty awesome. I am really excited to test these devices out myself.

    • I was on the fence about purchasing the Rift, considering the first version of every new product always has a few bugs in it. After reading this and the original article, I might just buy one.

    • The possibilities are endless. It would be an amazing teaching tool for students and children. Practical, hands-on learning adapts especially well to teaching children.

  2. The study I chose to discuss talks about recent advances in drone technology and how to design better ways of interacting with drones. The paper was called “Drone & Me: An Exploration Into Natural Human-Drone Interaction” and detailed how a group of researchers studied the ways users could best interact with drones. Their goal is to have drones respond to gesture commands and, more importantly, to figure out which gesture commands would be most logical and most easily understood. The tests they conducted involved going outdoors with a drone and having participants interact with it in the ways they felt were most natural. A researcher would control the drone and have the user perform tasks to make the drone do certain things. Afterwards, users would explain which tasks worked and didn’t work for them. For instance, a lot of users didn’t like using voice commands to get the drone to do something and instead preferred gestures.
    One interesting thing I read from the study was how users physically interacted with the drone. “We thought users might be afraid of the drone and be uncomfortable interacting with it. Instead, their reactions could not be further from our expectations. 16 participants reported feeling safe interacting with the drone…. as users became comfortable with the drone, they got closer to it than we expected. In our preliminary look at proxemics, 7 participants brought the drone within their intimate space (1.5ft), 9 in their personal space (4ft), only 3 preferred to have the drone in their social space (10ft) at closest, and none in the public space (> 10ft)” (Cauchard 4). I was surprised to find that so many users were okay with being so close to the drone. I’m sure that the natural interaction with the drone created almost a personal connection with the device, like having a pet, and users didn’t mind being close to it.
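
    As a side note, the proxemic bands quoted there (intimate within 1.5 ft, personal within 4 ft, social within 10 ft, public beyond) are simple enough to write down as a little classifier. The sketch below only restates those thresholds with made-up example distances; it is not the study’s analysis code.

    ```python
    # Illustrative bucketing of closest drone-to-user distance into the proxemic
    # zones cited from the paper (1.5 ft intimate, 4 ft personal, 10 ft social).
    # Not the study's code; the example distances are invented.

    def proxemic_zone(distance_ft):
        """Classify a closest-approach distance (in feet) into a proxemic zone."""
        if distance_ft <= 1.5:
            return "intimate"
        if distance_ft <= 4.0:
            return "personal"
        if distance_ft <= 10.0:
            return "social"
        return "public"

    closest_approaches = [1.0, 3.2, 0.8, 9.5, 12.0, 2.5]  # hypothetical participants
    counts = {}
    for d in closest_approaches:
        zone = proxemic_zone(d)
        counts[zone] = counts.get(zone, 0) + 1
    print(counts)  # {'intimate': 2, 'personal': 2, 'social': 1, 'public': 1}
    ```
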
    One pro of creating gesture-based drone tech is that it makes the technology a lot easier to use. Instead of having to master flight navigation controls on a clunky remote, users can just point to where they want a drone to go and control it with their body language. Of course, a negative is that a misread gesture could lead to some problems. Also, I don’t really see a need for having personal interactions with a drone anytime soon. The paper stated that one reason for interacting with a drone was taking selfies, which just sounds silly to me. Otherwise, I guess the solution holds some merit for the purposes they want a drone used for. For intensive drone flying, I still feel like a classic remote-control setup is the best option.

    Source:
    Cauchard, Zhai, et al. “Drone & Me: An Exploration Into Natural Human-Drone Interaction.” Stanford. http://hci.stanford.edu/publications/2015/gestures/paper116.pdf. 25 Apr. 2016.

  3. This blog post, “What Is The Next Frontier of Human Computer Interaction” on usabilitygeek.com, was written by John Seibert, CEO of Tranquil Blue, and questions the new frontiers of how humans can interact with technology. With so much processing power packed into small packages, and with augmented reality and gestures being refined into ways of interacting with technology, the field seems to be expanding rapidly. Seibert mentions how even eye movements can be tracked, which can greatly change the way we literally look at data: just staring at an object or character can cause it to instantly interact with the user. Turning to video games, Seibert goes on to say:

    “Entertainment is one market where HCI is witnessing deep innovation” – John Seibert

    This, in my opinion, is a very true statement that holds a lot of pros within it. In entertainment there are many ways that virtual reality and augmented reality can change how a user approaches data visualization or interaction. The article gives the example of users looking for more ways to interact with video game characters: if the user’s eyes rest on a character, it can read their gaze and respond accordingly. Another example, for augmented reality, is how the Microsoft HoloLens was portrayed at its showcase in 2015. During the demo, the user was able to play Minecraft by just staring at a barren wall and also manipulate the world that Minecraft players were on by generating a 3D map view on a simple coffee table. He was able to play god and cast lightning strikes, summon monsters to attack the players, and also change the terrain. This technology only gets more impressive as more time is spent improving it.
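
    A common way to turn “staring at an object” into an actual interaction is dwell-time selection: the object activates once the gaze has rested on it for some minimum duration. The sketch below is a minimal illustration of that pattern; the threshold and the target names are assumptions, not anything from the article.

    ```python
    # Minimal dwell-time selection sketch: a target "activates" once the user's
    # gaze has stayed on it continuously for a threshold duration. The threshold
    # and the gaze-sample format are assumptions for illustration.

    DWELL_SECONDS = 0.8  # assumed activation threshold

    class DwellSelector:
        def __init__(self, dwell_seconds=DWELL_SECONDS):
            self.dwell_seconds = dwell_seconds
            self.current_target = None
            self.dwell_start = None

        def update(self, gazed_target, timestamp):
            """Feed one gaze sample; return the target when it activates, else None."""
            if gazed_target != self.current_target:
                self.current_target = gazed_target
                self.dwell_start = timestamp
                return None
            if gazed_target is not None and timestamp - self.dwell_start >= self.dwell_seconds:
                self.dwell_start = timestamp  # fire once per dwell, then re-arm
                return gazed_target
            return None

    selector = DwellSelector()
    samples = [(0.0, "npc_guard"), (0.2, "npc_guard"), (0.4, "npc_guard"),
               (0.6, "npc_guard"), (0.8, "npc_guard"), (1.0, None)]
    for t, target in samples:
        fired = selector.update(target, t)
        if fired:
            print(f"{fired} reacts to the player's gaze at t={t}s")
    ```
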
    The cons that can be seen are the cost and the limited availability of this technology. Although it has come a long way, it still carries a very steep price, which might deter further innovation. On top of that, if not enough people become interested and start pushing the limits of these innovations to create new uses and new styles of interaction, there won’t be enough people to evolve the technology.
    Seeing how popular and hyped augmented reality and virtual reality are becoming, and remembering that years ago this seemed almost impossible after failed attempts like Nintendo’s Virtual Boy, the technology has now grown so much, and attracted so much interest, that the merit of this new way of interacting with information and visualization seems high. If there is enough investment and creativity poured into these technologies, they will grow even faster than we ever expected.

  4. The push towards autonomous driving took a giant leap forward with the introduction of “Auto-Pilot” on new Tesla vehicles. The new technology, while officially in a beta test, is the most advanced intelligent driving system currently available on a consumer-level vehicle; similar intelligent drive features available on Mercedes-Benz and BMW vehicles pale in comparison. “Auto-Pilot” allows drivers to TEMPORARILY take their hands off of the wheel, while still paying the same amount of attention necessary to control the vehicle. The vehicle controls speed adjustments, lane placement, parking, and changing lanes, and can even drive itself out of a tight parking space so you can get in more easily. Most importantly, the system attempts to avoid car accidents at all costs, often in situations where a human driver would be too slow to respond. To see accident avoidance in action, check out this video and read the driver’s description of how “Auto-Pilot” avoided the accident: https://www.youtube.com/watch?v=0-M3RY9lSks This groundbreaking system is the first introduction to a technology that most adults will certainly be interacting with in the near future.

    Autonomously driven vehicles are significantly safer than any human-driven vehicle. From the ability to make decisions based purely on calculations, without emotion playing any role whatsoever, to the fact that the vehicle has complete 360-degree awareness without any possibility of losing focus, they are significantly superior drivers. In fact, Google’s fleet of fully autonomous vehicles has never been in an accident caused by the vehicle itself after logging more than a million miles. The general population does not seem to trust this, however, preferring a more traditional human driver: a driver that gets tired, can lose focus, and lets emotion play a role in making decisions. Many drivers, specifically Tesla drivers, have done the research and are willing to put their lives in the hands of an autonomous car. They activate “Auto-Pilot” mode mistakenly believing they are activating a fully autonomous driving system. Auto-Pilot was never intended to be fully autonomous; in fact, unless a setting is manually changed, the driver’s hands are required to be touching the wheel at nearly all times. The greatest challenge the “Auto-Pilot” system will face is overcoming the safety stigma attached to autonomous vehicles.

    While the system is an incredible breakthrough on our path to fully autonomous vehicles, it has its fair share of benefits and negative elements. It is no surprise that the system aids in journeys of long duration, when the driver can become tired. According to Wired: “They covered 2,994 miles at an average speed of 51.8 mph, a figure that includes the time spent plugged into Supercharger stations along the way. They had autopilot mode engaged 96 percent of the time, Reese says, using it at speeds around 90 mph. It eased the burden on the team, a big deal when you’re in a car for 57 hours straight.” The system is also well documented avoiding high-speed car accidents, applying the brakes at maximum pressure seconds before a human would have been able to respond, as proven by numerous videos of the car applying braking pressure before the human’s face even changes to acknowledge the threat in their immediate path. However, the very fact that people would utilize the system for 2,874 miles, certainly not all of it on roads recommended by the manufacturer, states its own problem! People are over-trusting the system; it is not an autonomous driving system, but rather an assistance feature like adaptive cruise control. The system is totally capable of reading speed limit signs (it even displays the speed limit on the dashboard), but it pays no attention to the speed limit in autopilot mode, instead relying on the human driver to set the appropriate speed. Wired speaks to this issue: even Roy, a trained racing driver who set this speed record with the help of Autopilot, is concerned about how the public will use it. “There’s no reason this car should be allowed to go 20 or 30 miles per hour over the speed limit in autonomous mode.” The greatest challenge “Auto-Pilot” faces is getting people to understand that the system is not yet ready to drive on its own. It can drive, but it needs a fully competent, fully aware driver behind the wheel to “hold its hand”.
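
    The kind of automatic braking described above is usually framed in terms of time-to-collision: the distance to the obstacle divided by the speed at which the gap is closing. The sketch below is only a toy version of that decision rule with made-up thresholds; it is in no way Tesla’s implementation.

    ```python
    # Toy time-to-collision (TTC) check of the kind behind automatic emergency
    # braking: brake hard when the time until impact at the current closing speed
    # drops below a threshold. Threshold values are illustrative, not Tesla's.

    HARD_BRAKE_TTC = 1.5  # seconds: assumed full-braking threshold
    WARNING_TTC = 2.5     # seconds: assumed warn-the-driver threshold

    def assess(distance_m, closing_speed_mps):
        """Return 'brake', 'warn', or 'ok' for an obstacle distance_m meters ahead,
        approached at closing_speed_mps meters per second."""
        if closing_speed_mps <= 0:    # not closing on the obstacle at all
            return "ok"
        ttc = distance_m / closing_speed_mps
        if ttc < HARD_BRAKE_TTC:
            return "brake"
        if ttc < WARNING_TTC:
            return "warn"
        return "ok"

    print(assess(30.0, 25.0))  # 30 m gap closing at 25 m/s -> TTC 1.2 s -> "brake"
    print(assess(80.0, 25.0))  # TTC 3.2 s -> "ok"
    ```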

    I fully believe that the future of safe automobiles lies in autonomous driving. In fact, over 99% of car accidents are caused by human error. By removing the human from the equation, we may be able to significantly reduce the number of car accidents that prove fatal each year. I believe that driver education is the most important part of how we will learn to interact with this interim, “Auto-Pilot”-like system.

    By: Damen Tomassi

    Works Cited:

    “Obviously Drivers Are Already Abusing Tesla’s Autopilot.” Wired.com. Conde Nast Digital. Web. 30 Apr. 2016.

    • I’m really excited to see how autonomous driving advances in the future. There will be a significant reduction in accidents which should reduce insurance costs. Autonomous systems don’t even need to be perfect for them to be an upgrade from human driving. As long as autonomous driving is safer than humans driving their own cars it will be an improvement.

  5. The article that I chose to write about is one on Enhancements and the effect they have on web design. A quote from the article explaining what an enhancement is: “An enhancement is a feature that speeds up or enriches the interaction for some of the users, but is not essential for accomplishing a task.  In other words, it’s something that some users can take advantage of, but they don’t have to and they can easily live without it. Some forms of enhancements are also known as accelerators or shortcuts. In many user interfaces, they are a key way to support one of the classic usability heuristics: flexibility and efficiency of use.”
    The article then goes into detail about “progressive enhancement,” explaining that it’s something determined by the device the application is running on. If there is a search bar, for example, the device running the application might choose to allow voice recognition so the user can dictate their search. This would be a progressive enhancement because it is something the user doesn’t need; on a phone, however, it would be a convenient shortcut.
    It also goes into detail about how the iPhone 6S has 3D Touch and explains that such a gesture should only be used to enhance an interface. The reason is that the actions 3D Touch can perform should still be available to other users in some other way; the 3D Touch enhancement simply creates a shortcut for that specific action for users who have it.
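
    The pattern described here boils down to: keep a baseline path for every action, and only wire up the shortcut when the device reports the capability. A rough sketch of that idea follows; the capability names are invented for illustration and this is not code from the article.

    ```python
    # Sketch of the enhancement pattern: every action keeps a baseline path, and a
    # shortcut is added only when the device reports the capability. The capability
    # names ("force_touch", "microphone") are invented for illustration.

    def build_search_ui(device_capabilities):
        """Interaction paths offered for a search box on this device."""
        paths = {"baseline": "tap the search field and type a query"}
        if "microphone" in device_capabilities:
            paths["enhancement"] = "tap the mic icon and dictate the query"
        return paths

    def build_preview_ui(device_capabilities):
        """Interaction paths offered for previewing an item."""
        paths = {"baseline": "long-press the item to open a preview menu"}
        if "force_touch" in device_capabilities:
            paths["enhancement"] = "press firmly (3D Touch) to peek at the item"
        return paths

    print(build_preview_ui({"force_touch", "microphone"}))
    print(build_preview_ui(set()))  # the baseline path still works without the hardware
    ```
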
    The rest of the article goes into some more detail about how an enhancement should only be used to take advantage of specific device features and should not become the only way of doing a specific action. The interface should be intuitive and easy to understand, but if your device allows for it, then there should be a quick way to do the action via an enhancement. I think this article makes a lot of sense, and it even explains that even though redundancy is usually not something we want in our programs, it is welcome in this specific case, which I completely agree with. A lot of the quick gestures are much more convenient than clicking through a bunch of different links to get where you want to be.
    In conclusion, I agree with most of what this article has to say; however, I do see a pitfall. To add an enhancement, the designer needs to know how each device the program is going to be running on works, what the devices are capable of, and what kinds of shortcuts and enhancements can be made specific to each device. This could take up a lot of development time and add another level of frustration for the developers; in the end, though, it would be worth it, because users will be happier and much more likely to use the application if it’s more discoverable and easier to use through this design feature.

    “The Role of Enhancement in Web Design.” The Role of Enhancement in Web Design. N.p., n.d. Web. 02 May 2016.

    • I have to admit that I love 3D Touch, but the biggest problem I see with these enhancements is that it can be hard for users to even know they are there. So many times, users have no idea about the enhancements the designers chose.

      • The caveat with letting users know about every little detail is that it would make the iPhone no longer appear as simple and easy to use as it’s marketed. I think features like 3D touch are reserved for the kind of users that would keep up with that sort of thing, anyway.

    • I think people would get lost with the 3D improvements, and I feel most users would not even know how to use them to start with.

  6. My article talks about new trends and opportunities with new devices that are going to be released in the near future such as Google Glass. It wasn’t too long ago that imagining a digital interface that displayed daily information such as the news and weather was nothing more than a pipedream. “New forms of HCI will significantly change our lives. New interaction paradigms offer the chance to improve quality of life for people who can’t take advantage of current interfaces — due to physical disabilities, for example.” A major advantage for this technology is its accessibility. It provides functionality for all users regardless of their physical state. However, an issue could be eye strain and trying to read something at a short distance, as well as the fragility of glasses. It’s not too uncommon to drop your glasses and have them break.

    Source: P. Montuschi, A. Sanna, F. Lamberti, and G. Paravati, “Human-Computer Interaction: Present and Future Trends,” Computing Now, vol. 7, no. 9, September 2014, IEEE Computer Society [online]; http://www.computer.org/web/computingnow/archive/september2014.

  7. I looked at an article from PCWorld.com about Skype’s new translator feature, which interested me because I used to use Skype almost every day before I switched to Discord. The feature includes “voice to voice translation for seven different languages including English, French, German, Italian, Mandarin, Portuguese, and Spanish”, while also allowing text-to-text translation for over 50 different languages. The translation feature activates automatically based on the language settings of the user and their contacts. For example, if you have your language set to English while one of your contacts has theirs set to French, when you connect to a voice call with them it will translate to English for you and to French for them. Otherwise, if they do not have their language set, you can always click the drop-down on their name and change the language to whatever they will be speaking.
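
    The automatic behaviour described, where each side gets the conversation in their own configured language, amounts to picking a translation direction per participant and skipping translation when both sides already match. The sketch below is only a rough illustration of that routing, not Skype’s code; translate() is a stand-in for a real translation service.

    ```python
    # Rough sketch of the routing described above: each participant receives the
    # other side's speech translated into their own configured language, and
    # translation is skipped when both languages already match. Not Skype's code.

    def translate(text, source_lang, target_lang):
        """Placeholder for a call to a real translation service."""
        return f"[{source_lang}->{target_lang}] {text}"

    def deliver(message, sender_lang, receiver_lang):
        """Return what the receiving participant should see or hear."""
        if sender_lang == receiver_lang:
            return message  # same language: pass through untouched
        return translate(message, sender_lang, receiver_lang)

    print(deliver("Bonjour, comment ça va ?", "fr", "en"))  # translated for the English side
    print(deliver("Hello!", "en", "en"))                    # no translation needed
    ```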

    There are a few pros and cons to the translation feature. The pros include the fact that it can break language barriers between people, which can help connect the world more. People who have friends in different countries can now communicate with each other more easily, and you can meet new people who do not speak the same language as you. I could also see it being used in work environments when two people need to communicate and there is a language barrier involved. Possible cons could be seen with the automatic translation based on language settings: say two people have different language settings but typically speak to each other in the same language; that creates the inconvenience of having to disable translation every time you want to speak to that person. Although I have not used the feature and do not know how good the translation quality is, one challenge could be how the translator deals with accents, connection quality, and mic quality. I feel that if even one of these factors is a problem, the quality of translation would drastically decline. In the article, they state that the translator “certainly won’t be perfect anytime soon”, so I wonder if any of those challenges have anything to do with that.

    In my opinion, the feature is probably one of the best things that Skype could have added to improve user experience. Even if the solution is not perfect yet, it is still pretty powerful and useful for what it is.

    Source:
    Paul, Ian. “Skype’s Magical Real-time Language Translator Tool Goes Live for All Windows Users.” PCWorld. N.p., n.d. Web. 04 May 2016.

    • This seems like a very interesting new feature for Skype, although Skype seems pretty buggy and I wouldn’t really trust something like this to work 100% of the time. This new feature seems like a step in the right direction in terms of breaking down language barriers between people. The way the text is displayed on screen as the person talks is also a nice touch, so you aren’t looking in the chat box for translations. It is like having real-time subtitles, pretty neat.

  8. The article I chose was written by Joanna Stern for the Wall Street Journal and discusses the prevalence of live video technology. Titled “What to Know About Live Video, Social Media’s Latest Craze,” the article is of the opinion that live video is and will continue to emerge as a main facet of social media and broadcasting. Many big tech companies are cashing in on the technology. Mark Zuckerberg, for instance, recently announced new live broadcast features on Facebook. Twitter’s Periscope has found its place in live-streaming. There are even expectations that YouTube will announce a smartphone-based live video streaming service. The article states that by the year “2020, 75% of the world’s mobile traffic will be video” (Stern). I’ve seen some live streaming show up on my timeline before (via Periscope), but I never imagined that live video would become so popular.
    I think this technology generally has pros from the HCI perspective. Easy streaming of live video is something that will benefit many through the use of technology. As mentioned in the article, it allows many viewers to see one person stream a valuable broadcast. For instance, a professor could stream a lesson to a large number of students, allowing the students to write back questions if they have them, which the professor could answer while live. The same type of interaction, which resembles a one-to-many broadcasting system, could have many practical implementations. As for cons, I can see this technology changing the way humans socialize in at least some form, as most technologies do. The author of the article also explained that there are many drawbacks to these technologies, one of them being the poor quality of live video that results when anyone has the power to broadcast. The videos taken can be blurry, shaky, or simply boring. In addition, some services might send annoying notifications to users whenever someone is streaming live video, as Facebook has already done. This can be seen as a nuisance to users.
    Overall, I say there is a lot of merit to this new technology. As I’ve already mentioned, it can be used to stream video for the purpose of entertainment or even education. There is a wide range of possibilities when it comes to live broadcasting. The ability for anyone to broadcast will only increase the number of ways in which this technology will be utilized.

    Source:
    Stern, Joanna. “What to Know About Live Video, Social Media’s Latest Craze.” Wall Street Journal. Dow Jones & Company, 26 Apr. 2016. Web. 05 May 2016.

    • I’ve seen some of these too; I haven’t noticed any intrusions on the mobile app. Most of them have been by celebrities that are promoting something.

  9. The idea is that a user can use the human body itself as a replacement for the touch-screen interface. For example, to call a favorite contact in your phone, you touch your thumb and pinky finger together and that dials for you. The UI would be imaginary, in that there would be no screen for the user to see; they have to remember the UI. The prototype shown looks like it came from Star Trek: it goes behind the ear and can sense when you touch it. Currently the prototype has some issues. “For this system to actually work, there must be some way for users to hear the computer, such as using an earbud. More important, there must be a way for the computer to know what part of the hand the user is touching. In Gustafson’s research prototype, this was done with an optical motion tracker that seemed too clunky for real-world applications. But that’s okay; we can easily imagine advances in gesture recognition that would be more portable and less intrusive,” said Jakob Nielsen.
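
    Stripped down, the “imaginary UI” is just a mapping from recognized body-touch gestures to actions, plus spoken feedback so the user knows what happened. The gesture names and actions in the sketch below are invented for illustration; this is not Gustafson’s prototype code.

    ```python
    # Minimal sketch of a screenless "imaginary UI": a lookup from recognized
    # body-touch gestures to actions, with audio feedback through an earbud.
    # Gesture names and actions are invented for illustration.

    GESTURE_ACTIONS = {
        "thumb_to_pinky": "call favorite contact",
        "thumb_to_index": "read new messages aloud",
        "double_tap_palm": "say the current time",
    }

    def handle_gesture(gesture):
        """Map a recognized gesture to an action and return the spoken feedback."""
        action = GESTURE_ACTIONS.get(gesture)
        if action is None:
            return "Gesture not recognized."  # feedback keeps the user oriented
        return f"OK: {action}."

    print(handle_gesture("thumb_to_pinky"))
    print(handle_gesture("fist_bump"))
    ```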

    The pros are how convenient it would be to use and how revolutionary it could be for the visually impaired.
    One con: with no visual feedback, what if you forget which finger calls your wife and which one calls your girlfriend? That could get messy.

    The biggest issue will be getting this to work outside of a lab; would humans need implants for this to work in the real world? Truthfully, I can see some small applications, but I don’t think anyone will be using their hand to make phone calls in the future.

    https://www.nngroup.com/articles/human-body-touch-input/

    • I like this idea because it’s another step up from the article that I found. Mine was about touch screens being more responsive and popping up menus the user seems likely to click on before they even touch the screen. I like the idea of the body replacing touch screens, but like you said, there are some serious cons. I don’t think the average user would want to remember the UI, so there would have to be some kind of prompt to guide them through. But it was a really interesting concept.

  10. I found an article from WinBeta about Microsoft’s appearance at the ACM CHI 2016 conference being held from May 7-12 in San Jose. The conference is about improvements in HCI, and Microsoft will be presenting improvements that make it easier to work with information from devices.

    One of their researchers, Ken Hinckley, has been working on an improved interaction technique for touchscreens. The article states, “Utilizing the motion of the finger or hand, pre-touch screens would be able to sense when a finger is hovering over a specific item on the device and open up menus regarding the content. Not only that, but users would be able to hold the phone with one hand and the ambidextrous sensors will open up menus within the thumb’s limited reach. The interface also supports utilizing multiple gestures and touches for more subcategories making the pre-touch sensing stand out as the potential future for mobile interaction.”

    I think that the pros of the new touchscreen interface would be that it would be more responsive to user input, and it might be useful to have your screen open up additional menus based on what your hand is near on your screen. I also think that the sensors opening up menus within reach of the thumb would make things easier for users.

    Some cons and challenges for this technology would be deciding when to show the menus regarding content, including how long the user’s finger would have to hover before the interface responded. If the interval is too short, it may lead to unintended menus being opened. Also, regarding the ambidextrous sensors – these would have to be based on the size of the user’s thumb in order to be useful, so the interface would need to take that into account as well.
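
    The interval question can be made concrete with a tiny model: only open a contextual menu once the finger has hovered over the same item for a minimum time, so brief pass-overs on the way to another target trigger nothing. The 300 ms value below is purely an assumption for illustration, not a figure from Microsoft’s research.

    ```python
    # Toy model of the hover-interval tradeoff: a menu opens only after the finger
    # has hovered over the same item for a minimum time. The 300 ms threshold is
    # an assumed value, not from Microsoft's pre-touch work.

    HOVER_OPEN_MS = 300

    def menus_opened(hover_samples, open_after_ms=HOVER_OPEN_MS):
        """hover_samples: list of (timestamp_ms, item_id or None).
        Returns the items hovered continuously for at least open_after_ms."""
        opened = []
        current, since = None, None
        for t, item in hover_samples:
            if item != current:
                current, since = item, t
            elif item is not None and t - since >= open_after_ms and item not in opened:
                opened.append(item)
        return opened

    # The finger sweeps quickly across "photos", then rests over "settings".
    samples = [(0, "photos"), (80, "photos"), (160, "settings"),
               (300, "settings"), (480, "settings")]
    print(menus_opened(samples))  # ['settings']
    ```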

    My opinion on this solution is that if these challenges are worked out, it would provide a better experience for the user when their touch screen device reacts to them even before they have pressed anything.

    Source:
    McDonald, K. (2016, April 28). Microsoft to showcase advances in virtual reality and more at CHI 2016. Retrieved May 06, 2016, from http://www.winbeta.org/news/microsoft-showcase-advances-virtual-realty-chi-2016

    • I think my phone has some sort of hover sensor like this, however it tends to not work at all. Hovering over an item is supposed to give a brief description of that option, app, etc.

      I think it has potential, and touchscreens are more prominent than ever in interfaces; it’s amazing how far they have come. Anybody still remember those horrible 90s resistive touchscreens that never worked, even with the stylus?

      I feel as though those hover sensors would require a lot of testing to get right since, as you said, getting the right intervals and the right accuracy for the devices can be tricky with all of the different factors involved.

  11. I recently found an article posted by Newsweek about how Google has patented the technology for a cybernetic implant that will go into the user’s eyes. This implant will allow the user to “… connect to wireless devices, take photos, and automatically adjust the eye’s focus.” (Cuthbertson) It essentially would act as a Google Glass-based device, but instead be implanted directly into the user’s eyeball. Google hopes that this technology will revolutionize the way that users view the world around them and “.. could be used to help people with vision problems like myopia (short-sightedness), hyperopia (long-sightedness) and astigmatism…” (Cuthbertson)

    Both this device and Google Glass, and just about every other recent visual innovation of the past few years, appear to have the same issues associated with them. Just about every article I’ve read about the patent, including this one, states the same worries about this device as about Glass, if either ever reaches the mass market. The fact that users will be able to take pictures wherever they want and at any time brings forward a ton of privacy and restriction issues. Similarly, it begs the question of how much Google will be able to monitor the information the user sees and captures within their device.

    Despite these glaring issues with these types of visual technologies, there could be a lot of benefits as well. As the Google representatives stated in the article, it would be amazing to be able to help those with many different visual impairments, or potentially even cure blindness. It could also revolutionize the way that people interact with the world around them, allowing them to quickly see pop-ups of reviews for a restaurant or store, or find landmarks that other people have linked to them, for example. I believe Google Glass was also supposed to feature face recognition technology as well.

    As much as I would love to have those features and the new interactivity they could bring, the number of privacy issues that these technologies raise is, I think, just too much, at least for the time being. I remember when Google Glass was being sent out to certain people to test, many places outright banned it for fear that wearers would record or take pictures of them. Many were terrified that they could be recorded, or have their picture taken, without ever knowing it was occurring. While Glass is a much more obvious implementation of this technology, a cybernetic implant in one’s eye makes it even less noticeable and more frightening for those concerned about privacy, along with the fear of how much information Google, or any company with this technology, will be able to access from these devices.

    Overall, I think these kinds of devices have a long way to go before actual implementation, if they are ever implemented at all, just due to fear of outright privacy and information breaches. However, if done right, these new visual devices could potentially forever change the way we think about interaction devices.

    Source: Cuthbertson, Anthony. “GOOGLE PATENTS A CYBORG LENS THAT INJECTS INTO YOUR EYEBALL.” Newsweek. Newsweek LLC, 5 May 2016. Web. 07 May 2016. .

    • Scott this is incredible. I love how Google is continuing to invent new technology to improve our lives. I would be a bit wary of implanting a device into my eye, but I suppose if I had eye ailments then I would definitely consider it. It’s also nice that people who do this won’t have to look as silly as someone wearing the Google glasses.

  12. I found an article about how Microsoft’s Tay AI failed. For anyone who doesn’t know, Tay was an AI developed by Microsoft that would try to mimic a typical 19-year-old woman’s personality by learning from users on Twitter, Kik, and GroupMe, but instead ended up becoming an anti-feminist Nazi. The article discusses how this happened.

    The author states, “Like humans, AI requires good teachers.” This is especially true for an AI like Tay, which had no safeguards and no voice in the back of its digital head thinking, “maybe I shouldn’t be a Nazi just because these people keep using Nazi rhetoric.” In other words, filter out the negative and use only the positive.
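
    “Filter out the negative” can be pictured as a gate in front of the learning step: messages that trip a screen never become training data. The sketch below is a deliberately crude keyword version of that gate; a real system would need a proper toxicity classifier, and the blocklist here is only illustrative.

    ```python
    # Crude sketch of the "training wheels" Tay lacked: screen incoming messages
    # before they are allowed to influence the model. A tiny keyword blocklist
    # stands in for what would realistically be a trained toxicity classifier.

    BLOCKLIST = {"nazi", "hitler"}  # illustrative only; far from sufficient

    def safe_to_learn_from(message):
        """Return True only if the message passes the screen."""
        words = {w.strip(".,!?:").lower() for w in message.split()}
        return not (words & BLOCKLIST)

    def ingest(messages):
        """Keep only the messages the bot is allowed to learn from."""
        return [m for m in messages if safe_to_learn_from(m)]

    incoming = ["I love puppies!", "some nazi rhetoric to repeat"]
    print(ingest(incoming))  # only the first message survives the screen
    ```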

    Tay is an interesting AI; it just had no training wheels. A rethought approach to the bot’s artificial intelligence would make it able to carry on positive conversations, which is pretty significant when you consider how that would work for something like a typical sci-fi robot. The AI would have a lot of challenges facing it: being self-reflective, ignoring trolls, and finding deeper meaning in words; Tay didn’t know what a feminist is, she just saw it as a commonly used noun.

    The current solution holds little merit, as observed by how disastrous it became, but it’s a step in the right direction at least. Clearly, the AI was able to learn what it was taught and carry on conversations, it just happened to be taught obscenities and carry on inflammatory conversations.

    Source:
    Pierson, R. (2016, March 28). What went so wrong with Microsoft’s Tay AI? Retrieved May 07, 2016, from http://readwrite.com/2016/03/28/went-wrong-microsofts-tay-ai/

    • This was a very interesting project in AI Microsoft was able to carry out even though it failed. We learned that we are capable of creating an AI that can learn and perform complex conversations. It is really interesting to see how quickly this chat bot was influenced and became hostile.

  13. I found a book in the Rowan online library that is full of research articles on new medical developments involving HCI. One of the articles that caught my attention was one where researchers created games for the Kinect that are used as therapy for people after they have had a stroke. They have games such as bowling, ping-pong, and basketball that are designed to help rehabilitate arm movements that tend to be limited after a stroke. The idea is that this will make patients’ home therapy more fun and lead to higher compliance in regards to regularly completing therapy. The clinicians that performed usability tests on the system found it useful.

    “Only two of the twelve clinicians had previously used the Kinect, and each of these clinicians had only used the Kinect once. The clinicians generally felt that the programs had good usability as shown in Figure 3. All of the clinicians felt that this system would be helpful for home exercise of individuals with stroke.”

    This is very positive feedback, although there were a few cons noted by the clinicians. The sensor for the Kinect at times was not entirely accurate, which is an issue because accuracy is very important when the motions needed are very specific. The graphics could also use an upgrade, since the game looks like stick figures made in MS Paint.

    I think the pros of this system by far outweigh the cons, because this could make completing therapy easier and more enjoyable for people after a stroke. The cons on the other hand seem to be focused on hardware limitations and the lack of great graphics. If the system continues to be worked on though, these issues will eventually be corrected.

    Overall, this looks like a great idea that has a lot of potential. With newer iterations of the Kinect that have improved sensors, and stronger focus placed on developing improved graphics, this could be a widely used tool for the medical field in the future. This system could improve our lives, which makes it a worthwhile cause to pursue further.

    SOURCES
    R. Shumaker (Ed.): VAMR/HCII 2013, Part II, LNCS 8022, pp. 22–31, 2013.


  15. The article I decided to settle on was a rundown of the developer kit for the HTC Vive. The HTC Vive is a VR device with complete motion control and spatial recognition. The device has a headset that is very much like the Oculus Rift, but a lot more comfortable (so most people have stated). It also has two motion-tracking controllers that have been said to be extremely accurate in capturing motion. The author of the article stated that one of the best games he played while using it at home was a minigolf game that put you into these fantastical settings and simply had you play minigolf; what he loved so much about it was how absolutely accurate the motion control was. Another thing the HTC Vive comes with is two base stations that must be set up around the room. Once the stations are set up, you use the motion controllers to draw out an open space. The reason for this is that, unlike other VR setups, the HTC Vive has spatial recognition: the games take advantage of the space you have drawn out to allow you to physically walk around environments created in VR, which really drives the experience home. I think this simple feature really immerses the user in whatever world they are in. The big thing I have heard about the Vive is just how immersive the experience is; articles speak about users actually getting vertigo, motion sickness, and other effects the human body may face when put into these circumstances. “After awhile, I started to forget I was actually just standing in my office with a weird looking contraption strapped to my head. That’s the sort of presence every VR experience strives for…” -Devindra Hardawar
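
    The room-scale part, where the user traces out an open space and the system keeps them inside it, reduces to a point-in-polygon test on the headset’s tracked position. The sketch below uses the standard ray-casting test with a made-up rectangular play area; it is not Valve’s or HTC’s actual boundary code.

    ```python
    # Small sketch of room-scale boundary checking: the user traces the play area
    # as a polygon, and each tracked headset position is tested against it so the
    # system can warn before they walk into furniture. Standard ray-casting
    # point-in-polygon test; not the Vive's actual implementation.

    def inside_play_area(point, boundary):
        """Return True if (x, z) lies inside the polygon given as (x, z) corners."""
        x, z = point
        inside = False
        n = len(boundary)
        for i in range(n):
            x1, z1 = boundary[i]
            x2, z2 = boundary[(i + 1) % n]
            crosses = (z1 > z) != (z2 > z)
            if crosses and x < (x2 - x1) * (z - z1) / (z2 - z1) + x1:
                inside = not inside
        return inside

    # A 3 m x 2.5 m rectangle traced out with the motion controllers.
    play_area = [(0.0, 0.0), (3.0, 0.0), (3.0, 2.5), (0.0, 2.5)]
    print(inside_play_area((1.5, 1.2), play_area))  # True: safely inside
    print(inside_play_area((3.4, 1.2), play_area))  # False: time to show a warning
    ```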

    The pros of this technology are pretty large. The gaming side of things stands to really benefit from it, but the HTC Vive also comes with a little program called Tilt Brush, which was made by Google. Tilt Brush allows an artist to draw in a three-dimensional space, where the brush strokes themselves are represented in 3D. It is a really interesting concept and opens up a whole different way to look at 3D art, and art in general, almost like sculpting these three-dimensional images. Here is a great video of Glen Keane, the famous Disney animator and artist, talking about art and Tilt Brush with the Vive, as mentioned in the article: https://www.youtube.com/watch?v=GSbkn6mCfXE.

    The drawbacks and challenges are really about the tech: how to make it more comfortable to use and wear, and more affordable for everyone. And when we reach that point, where do we go with it? Are user interfaces in three dimensions a direction that seems useful and practical, and could we make them useful and practical? Full operating systems represented on a virtual reality platform certainly sound pretty awesome, but again, is that practical, and does it have a place or purpose? I find it hard to say whether it is or isn’t practical. I think it’s possible to get to a point where it is, if done right, but I have a hard time saying that it will ever replace the experience we already have with mobile touch-screen platforms, which are as easy and convenient as they already are, and are continuing to become more so.

    Source: “A Day with the HTC Vive” by Devindra Hardawar, Engadget. http://www.engadget.com/2016/03/03/a-day-with-the-htc-vive/#/

  16. Virtuix Omni – Technology of Virtual Reality

    The Virtuix Omni is a new device that might change how Virtual Reality gaming works. The Omni is basically a small circular treadmill that allows the user to walk, run, sit, and strafe within 360 degrees of motion. This is done with specially designed shoes and sensors up the legs of the user. The US military originally developed this kind of rig for combat simulation, but the company Virtuix took the idea a step further by trying to incorporate it into Virtual Reality gaming. In my article the author Will Fulton states, “The concavity takes advantage of gravity to pull your feet back to the center, creating a more natural stride than a flat surface would. Sensor pods mounted on top of the shoes track your feet to translate your motion into the game”. This shows how the machine works, making the user the actual controller. You would still need a Virtual Reality headset like the Rift to get the full experience; luckily, it is compatible with any mobile headset. There are still plenty of problems with the Omni, as the walking and running aren’t always smooth, but this is still the first commercial version, costing only around $700. This, to me, is the next best step toward having actual Virtual Reality and simulating that the player is actually within the game. That being said, the Omni is still not perfect and will take time and a lot of money to improve over the years. The pro of this device is that it’s a brand new concept for playing video games and a way to get people moving while playing those games. The con is that it doesn’t have natural-feeling motion in terms of walking; when you walk, it feels more like you are slipping around on the ground, which can hurt the product since it’s trying to simulate real walking. I do believe that once the kinks are removed and a solution for natural walking is found, the Omni will be something everyone will want to buy.

    Source : Fulton, Will. “Slipping and Sliding around Virtual Worlds in the Latest Virtuix Omni.” Digital Trends. N.p., 10 Jan. 2016. Web. 08 May 2016.
    http://www.digitaltrends.com/gaming/virtuix-omni-hands-on/

  17. Researchers at Carnegie Mellon University’s Human-Computer Interaction Institute have developed a new tool for modeling human routine behavior. (Routine behavior here is defined as “behavior blueprints that help us accomplish repetitive tasks”.) One problem with current behavior modeling algorithms is that they don’t show us the situational context behind the behavior, or what events caused the person to do what things. This new tool uses the byproducts of an existing behavior prediction algorithm to show the causal relationships between people’s actions and the situations they are in. The tool also detects variations in routine behavior and shows the causes of them as well. It can also be used to predict in what situations people will engage in routine behavior, and determine if the routine itself is suboptimal. The researchers’ work is actually being presented right now at CHI 2016 in San Jose, CA.
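
    As a back-of-the-envelope illustration of what “situational context behind the behavior” means computationally, one could tabulate how often each action follows each situation in a log and flag the dominant pairs as candidate routines. The sketch below does exactly that with invented data and an invented cutoff; it is not the CMU tool’s algorithm.

    ```python
    # Back-of-the-envelope illustration of surfacing routines from a log: count
    # how often each action follows each situation and flag dominant pairs as
    # candidate routine behavior. Invented data and cutoff; not the CMU tool.

    from collections import Counter, defaultdict

    log = [
        ("weekday_8am", "drive_to_work"), ("weekday_8am", "drive_to_work"),
        ("weekday_8am", "work_from_home"), ("heavy_rain", "take_highway"),
        ("heavy_rain", "take_highway"), ("weekday_6pm", "drive_home"),
    ]

    by_situation = defaultdict(Counter)
    for situation, action in log:
        by_situation[situation][action] += 1

    for situation, actions in by_situation.items():
        action, count = actions.most_common(1)[0]
        share = count / sum(actions.values())
        if share >= 0.6:  # assumed cutoff for calling something a routine
            print(f"In '{situation}', the routine action is '{action}' ({share:.0%})")
    ```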

    The pros of this tool are that it helps give us the context behind people’s actions with regards to routine behavior. Understanding why and when we begin routines can help computer applications predict our behavior better and even help us with those routines. In their research paper, the researchers also note that their work could be expanded upon for use in other domains such as health, accessibility, and software user interfaces. The one con of this tool would be that since it is so new, it may need further testing with different datasets for different applications outside of what the initial testing was done with.

    I think this tool could be incredibly useful in HCI in the future. As the article’s author, Susie Cribbs, puts it: “Understanding this information, along with variations that occur in routine behavior, would allow researchers to develop new technologies to improve peoples’ lives, like helping them drive more safely or coordinate complex commuting schedules.” The implications for safer driving are the most apparent; for example, the tool could be used to detect dangerous or aggressive behavior and then either warn the driver or possibly activate safeguards to protect them or others on the road. As for general HCI applications, anything that needs to predict human behavior could probably find this useful too.

    Cribbs, Susie. “CHI Preview: HCII Researchers Develop Novel Tool for Modeling Human Behavior.” Human-Computer Interaction Institute, hcii.cmu.edu, 5 May 2016. Accessed 8 May 2016.

    (link to research paper in above article)

  18. Nvidia is putting out a new graphics card that is going to outperform their latest card while being half the price. The card, which uses a new architecture, is going to be able to help VR reach more of a mainstream audience. Power draw is cut from 250W to 180W, and the price from $999 to $599. The CEO of Nvidia said “The GeForce GTX 1080 is almost irresponsible amounts of performance.” The pro of this technology from an HCI perspective is that the new architecture was chosen because a lot of current cards cannot handle ultra-wide displays without warping around the sides. The con is that the benchmarks are set by Nvidia themselves and not by an independent company.

    http://www.theverge.com/circuitbreaker/2016/5/7/11615806/nvidia-gtx-1080-1070-pascal-specs-price-release-date

  19. Looking Back At The Phenomenon of Twitch Plays Pokemon

    The article was a retrospective review of the famous social experiment that was Twitch Plays Pokemon. For those who don’t know, Twitch is an online streaming site dedicated to video games. On the site, streamers play a selected game and viewers can chat, donate, and subscribe to the streamer for the content. In February 2014, one streamer decided to make a new, more interactive stream that allowed users to enter commands into chat, which were then read and input into the game. For this, the classic video game Pokemon Red was chosen for its simplistic gameplay and popularity.
    The stream quickly rose to become the most viewed on Twitch, attracting millions of views and close to 100,000 concurrent players. With thousands of commands being typed near-continuously, the game advanced with a hive-mind-like mentality towards its final goal. Weeks went by and eventually they did manage to beat the game, a feat of which Steven Schneider said, “If taken at face value, it was simply a bunch of gamers trying to beat an 18-year-old RPG — but looking past the obvious reveals a moment in which thousands upon thousands of people all worked together toward the same goal.”
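
    The stream’s core loop, collecting button commands from chat and feeding them to the game, is simple to sketch. The version below shows the later “democracy” variant, where votes are tallied over a short window and the winner is executed; the chat data is invented, and this is not the original stream’s code.

    ```python
    # Sketch of a Twitch-Plays-style loop in "democracy" mode: tally the valid
    # button commands typed in chat over one voting window, then send the winner
    # to the emulator. Chat messages are invented; not the original stream's code.

    from collections import Counter

    VALID_COMMANDS = {"up", "down", "left", "right", "a", "b", "start", "select"}

    def winning_command(chat_window):
        """Tally valid commands from one voting window and return the winner."""
        votes = Counter(msg.strip().lower() for msg in chat_window
                        if msg.strip().lower() in VALID_COMMANDS)
        if not votes:
            return None
        return votes.most_common(1)[0][0]

    window = ["a", "A", "left", "a", "praise helix!", "left", "b"]
    print(winning_command(window))  # "a" (3 votes vs. 2 for "left")
    ```
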
    The main pro of this is allowing people from around the globe to work together to achieve a common goal. It was a unifying effort and truly was a fun experience. As far as cons go, it may very well have been a one-off thing, as no stream since has been able to replicate the success. The limitation of only being able to accept simple commands is something that will need to be improved upon if this were to expand to other games or industries.
    Coming from someone who played Twitch Plays Pokemon, I would love to be able to play something similar. I think it was an awesome experience not just to play but to watch, in an almost reality-TV sense. While there have been attempts since the first success, none have come even remotely close, so the longevity is definitely questionable.

    http://www.techtimes.com/articles/133181/20160212/looking-back-at-the-phenomenon-of-twitch-plays-pokemon.htm

  20. Last year, Amazon released the Amazon Echo with its built-in Alexa assistant. The Echo is a cylinder you put in your home that is constantly listening and waiting for commands. It is connected to the cloud via Wi-Fi and allows a multitude of apps to be downloaded. It connects to your house and lets users control different parts of the home: it can dim your lights, control the TV, play music, order food, etc. It’s like in the movies when you saw the rich person talk to their house of the future to help them set the mood to relax. The best part is Alexa. It’s like Siri: it’s voice-activated and is constantly learning your speech patterns and habits to better your lifestyle.
    http://www.zdnet.com/article/how-to-use-the-amazon-echo-and-why-you-should-get-one/

    • “What the Amazon Echo can do for you is limited only by your imagination.” The pro of this technology is that it makes your life easier. The con is that it’s constantly listening and could easily be used to listen in. I feel this can make our lives easier and will soon become a normal household object.

    • Kendrick, James. “How to Use the Amazon Echo and Why You Should Get One | ZDNet.” ZDNet. 9 Feb. 2016. Web. 09 May 2016.

  21. Drones
    A brief description of the content of the article/blog

    The article was about an HCI test of how humans interact with a drone. The finding was basically that most humans act as if the drone were a pet in the way they interact with it. An example would be that they put their hand up for the drone to stop.

    At least one quote (please cite fully) from this source

    “We found strong agreement on nearly half (44%) of the gesture, voice, and multimodal interactions that felt intuitive to participants. This was due to most participants interacting with the drone in a similar way to how they would with a person or a pet. We contribute a set of design insights to develop Human-Drone Interaction. We expect drones to become smaller and quieter so that they will resemble humming birds, flying by the user and coming into play when needed. Giving people natural, easy control will enable incorporating drones into our daily lives.”

    What you think the pros/cons/challenges are for this technology from an HCI perspective

    I think drones provide a cool way for humans to interact with flight, but they can pose a danger to others due to possible accidents.

  22. The article I found interesting is about voice-controlled assistants and their evolution as they look to prove their worth as the next shift in human-computer interaction. The creator of Siri, Dag Kittlaus, showed the first public demo of Viv, an artificial intelligence interface for everything. The idea is that Viv would be a new paradigm of computer interaction through conversation. Viv is a solution to the limitations of Siri, but taken many steps further, Kittlaus implied. Viv is said to take multiple instructions and execute each one without a hitch. “A clear strength of the Viv platform was the “stackability” of inquiries. As opposed to short-term-memoried platforms like Siri, Viv was able to embrace follow-up questions without stuttering or gasping for context that was just said seconds before,” stated Lucas Matney of TechCrunch. The first version of Viv is set to be released later this year. As for its pros, it would be extremely helpful to give a set of instructions to a phone with Viv on it and have each instruction executed to the letter; it would make life a lot more convenient for the user. The con that I foresee is that the voice-controlled assistant would not be able to decipher the emotions in the user’s voice, which could be frustrating to a user.
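
    The “stackability” described in that quote, follow-up questions that reuse context mentioned seconds earlier, essentially means the assistant carries entities forward between turns. The toy sketch below shows that idea with invented queries and entities; it says nothing about how Viv is actually architected.

    ```python
    # Toy sketch of "stackable" follow-up questions: the assistant keeps entities
    # from earlier turns and reuses them when a follow-up leaves them implicit.
    # Queries and entities are invented; this is not Viv's design.

    class ConversationContext:
        def __init__(self):
            self.slots = {}  # e.g. {"city": "Chicago"}

        def ask(self, query, entities=None):
            """Merge newly mentioned entities, then answer using the full context."""
            self.slots.update(entities or {})
            return f"Answering {query!r} with context {self.slots}"

    ctx = ConversationContext()
    print(ctx.ask("What's the weather this weekend?", {"city": "Chicago"}))
    print(ctx.ask("Will I need an umbrella on Sunday?"))  # "Chicago" carried forward
    ```
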
    Article Link
    http://techcrunch.com/2016/05/09/siri-creator-shows-off-first-public-demo-of-viv-the-intelligent-interface-for-everything/?ncid=mobilenavtrend

  23. The article I chose is “Eye tracking is the next frontier of human-computer interaction.” As we learned in class, eye tracking is used in designing websites by tracking the movement of a user’s eyes when browsing a site to build a heat map, which can then be used by the designers. This article explores other possibilities that eye tracking can be used for. “Our eyes can reveal a lot about a person’s intentions, thoughts and actions, as they are good indicators of what we’re interested in. In our interactions with others we often subconsciously pick up on cues that the eyes give away”, says Melodie Vida. A person who is paralyzed or does not have the use of their arms may use this technology to move a mouse cursor with their eyes. When reading on a computer, the screen may automatically scroll down when the person reaches the bottom. If you look away from a television, the movie or show that you are watching will automatically pause. This technology is beginning to be tested within the Oculus Rift and Google Glass. Eye tracking is also being tested with many new video games.
    Computers and the internet play a huge role in our lives today. I believe that eye tracking is a great technology that can provide an alternative way for people with disabilities to share our lifestyles; it could possibly even help them use everyday appliances as well. As far as video games go, I think that this technology will provide some unique gameplay, but will not catch on. However, I do think that this technology may show interesting uses when reading or when browsing an application’s menu, such as Netflix or YouTube.
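
    The auto-scroll and auto-pause behaviours mentioned above are easy to express as simple rules on where the gaze is. The sketch below uses an assumed threshold on normalized gaze height and is only an illustration, not code from the article or from any eye-tracking SDK.

    ```python
    # Minimal sketch of gaze-driven reading aids: scroll when the reader's gaze
    # sits in the bottom band of the screen, and pause video when they look away
    # from the screen entirely. Thresholds and step size are assumed values.

    SCROLL_BAND = 0.85  # gaze below 85% of screen height triggers a scroll
    SCROLL_STEP = 200   # pixels per scroll step (illustrative)

    def react_to_gaze(gaze_y_norm, on_screen, video_playing):
        """gaze_y_norm: 0.0 = top of screen, 1.0 = bottom. Returns the UI action."""
        if not on_screen:
            return "pause video" if video_playing else "no action"
        if gaze_y_norm >= SCROLL_BAND:
            return f"scroll down {SCROLL_STEP}px"
        return "no action"

    print(react_to_gaze(0.92, on_screen=True, video_playing=False))  # scroll down
    print(react_to_gaze(0.40, on_screen=False, video_playing=True))  # pause video
    ```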

    http://phys.org/news/2015-02-eye-tracking-frontier-human-computer-interaction.html
