Self-driving Cars — April 8, 2017

Self-driving Cars

One of the motivations for developing and building self-driving cars is to eliminate the “human error” that occurs in driving. One article stated that human error is a factor in 94 percent of fatal car crashes, which killed more than 35,000 people in 2015. Another article states that a self-driving car takes 1.4 million measurements per second, more data than a human can process while driving in real time. Tesla states that their Autopilot system “provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.” This data allows a self-driving car to be more experienced than a young driver from the moment each first takes to the road; the car could have years of experience instantly.

Another motivation for building self-driving cars is to move the car industry from individual car ownership to predominantly ride-sharing options. One article estimated that by 2025 we will see the death of personal car ownership in major U.S. cities. It comes as no surprise, then, that automakers like Ford and GM are investing in and partnering with ride-sharing companies and startups to make sure they are in the right business for the future. Some automakers have called for the production of self-driving cars to slow down; however, it is important to note that these particular carmakers do not appear to be heavily invested in the market for autonomous cars.

An argument against self-driving cars is that they are not assertive enough. I have heard a story about a self-driving car getting stuck at a four-way stop sign because it did not know how to assert itself. One article states that riding in a self-driving car is similar to being stuck with an overly cautious driver who only goes 2 to 3 miles over the speed limit. However, although humans may be irritated that the cars are not as aggressive, I think this would make rides safer. Self-driving cars would ensure that people follow the rules of the road and would eliminate the problems that arise when drivers in a rush take unnecessary, potentially illegal risks, like speeding through a stoplight as it changes from yellow to red. In addition, companies are attempting to make their cars more aggressive in order to blend into traffic; one article points out that Google is teaching its self-driving cars to honk in certain situations to promote the overall safety of all drivers involved.

A major social dilemma of autonomous vehicles is who is liable if an accident should happen. I think that whichever car is proven to be at fault should simply take the blame; however, the media currently seems more focused on the mere presence of a self-driving car in an accident than on which car is at fault. The technology still needs to be improved to prevent more accidents, though. One article pointed out that Uber’s self-driving system does not perform well on bridges, where environmental cues are not as strong as in areas crowded with pedestrians and buildings, so perhaps Uber will have to incorporate more sensors into its system. An article from The New York Times points out that the fatal Tesla accident in the fall might have been avoided if the car had “Lidar” technology, a laser-based sensor mounted on top of the car that provides a 360-degree view.

Another dilemma is that this technology could take away jobs from truck drivers and people who drive for ride-sharing apps like Uber and Lyft. With widespread deployment of autonomous vehicles, there could be an initial increase in unemployment, and it could be difficult for people who have been relying on that source of income to suddenly change fields. Something might have to be done by the government or car companies to offset these effects; however, I think that overall autonomous vehicles provide an exciting opportunity for our society to develop. We should pursue automating the car industry. One article demonstrates how self-driving vehicles can actually help human endeavors such as farming, where their precision can cut down planting costs.

An interesting problem that self-driving cars present is what to do in a life-or-death situation. This article points out the ethical problem of whether a self-driving car should aim to minimize the total deaths involved or should protect the driver. I am not sure what the car should ethically be obliged to do; however, I do believe in the importance of safety and that the cars should not try to protect their riders at all costs.

I think that the government should play a role in regulating self-driving cars by ensuring that the autonomous cars allowed on the road are safe. An article points out that demonstrating that a self-driving car is safe may not be an easy feat, as it would be impractical to require the currently existing fleet of self-driving cars to drive the enormous distance such a demonstration would take.

Personally, I cannot wait until the day that I have a self-driving car. I do not like driving, and I am unable to drive for hours without my back hurting. I sometimes have to drive when I am tired or distracted, which I am guessing is the case for most people, and I do notice the negative effects on my driving. I am confident that a self-driving car in the near future would be able to drive better than me and make the road safer. If it is a safer option, why be against it?

Artificial Intelligence — March 31, 2017

Artificial Intelligence

According to Kris Hammond, “any program can be considered AI if it does something we would normally think of as intelligent in humans.” When considering Artificial Intelligence, this is not to say that a program has to be built in a certain way; rather, it just has to be capable of doing something we would classify as an intelligent task. The Economist offers a different perspective on AI, stating that it is a process of getting computers to know things by producing the rules themselves rather than relying on a programmer to specify a sequence of rules to follow. Artificial Intelligence is an exciting frontier: it is not just making us think differently about technology, but also changing the way we think about ourselves, and about thinking itself.

Several companies have invested in projects that illustrate their capability at Artificial Intelligence, such as AlphaGo, Deep Blue, and Watson. Each project represents a different section of AI. The IBM Deep Blue project is considered a “weak AI” representation of artificial intelligence: the system was a master chess player, but it did not solve the problem the way humans would. Because chess has a limited number of options, Deep Blue was able to use immense computing power to evaluate the possible positions and pick the move that would force the best possible final board position. The project could solve an intelligent task, but it did not solve the task like a human would.
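
The exhaustive look-ahead described above is essentially minimax game-tree search. Here is a minimal sketch of the idea; the toy game, function names, and search depth are all invented for illustration, and Deep Blue’s real search was vastly deeper and ran on custom hardware with heavy pruning.

```python
# Minimax sketch: evaluate every reachable position a few moves ahead
# and pick the outcome the maximizing player can force.

def minimax(position, depth, maximizing, moves_fn, apply_fn, score_fn):
    """Best achievable score from `position`, looking `depth` plies ahead."""
    moves = moves_fn(position)
    if depth == 0 or not moves:
        return score_fn(position)  # static evaluation of the "board"
    results = (
        minimax(apply_fn(position, m), depth - 1, not maximizing,
                moves_fn, apply_fn, score_fn)
        for m in moves
    )
    return max(results) if maximizing else min(results)

# Toy "game": a position is a number, a move adds or subtracts 1,
# and the score is simply the number itself.
best = minimax(
    0, depth=3, maximizing=True,
    moves_fn=lambda p: [+1, -1],
    apply_fn=lambda p, m: p + m,
    score_fn=lambda p: p,
)
# With optimal play from both sides, the maximizer can force a score of 1.
```

The point of the sketch is the brute-force flavor: no intuition, just evaluating every branch, which is exactly why Deep Blue counts as “weak AI.”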

IBM Watson was closer to achieving thinking, as it relied on natural language processing and different strategies to find an answer in its knowledge database. IBM Watson has proven able to do more than just Jeopardy: it has been used to offer cancer treatment options that a human doctor may not have considered. The curious thing about IBM Watson is that it relies on confidence levels, similar to how a human can be more or less confident in their answer to a question.

AlphaGo is perhaps the closest project to a “strong AI” project that exhibits thinking. As there are more possible Go positions than there are atoms in the universe, it was not possible for AlphaGo to rely on searching every possible option when selecting its next move. It had to replicate intuitive pattern recognition through its neural network, similarly to how humans play the game. AlphaGo seems to be more than an interesting trick or gimmick, as it has beaten the human champion. A problematic aspect of competing against computers is that the computer can always improve itself: it does not take breaks. And because this system has conquered Go, it could conquer many tasks that are easier than the board game.

The Turing Test relies on the suggestion that if an AI machine could fool people into believing it is human in conversation, this could be a valid measure of intelligence. I think this test is useful for measuring AI, but extensions must be made to it to fully investigate other systems. One article suggested a “visual Turing Test” that would require computer vision systems to extract meaningful relationships in photos in addition to correctly labeling the items. For Turing, the question of whether a machine can think was meaningless, but the test provides a way to get around this meaningless question through an imitation game. The Chinese Room argument does not seem like a significant counterargument to me. Yes, a computer could search for an answer like a library given a question; however, as humans we have been studying all of our lives. We have an internal library of knowledge that we can use when faced with a question.

I think to some extent the growing concerns over the power of artificial intelligence are warranted. I know that tech luminaries like Bill Gates and Elon Musk are wary of the technology’s power; however, I think that currently most of the fears are just thought experiments.

We should be more afraid of the implications of artificial intelligence for our current society than of the power it may or may not have. AI could inhibit white-collar jobs and have a significant effect on personal contributions to society. One article pointed out that AIs only need to be able to do a task well enough in order to replace a human’s job. This could require people to reinvent themselves over and over again in order to stay relevant in modern society. To limit these implications, scientists and corporations will have to collaborate with the government to make sure that we do not innovate too fast without planning what to do with humans who may not have the skills or resources to reinvent themselves. A paradox we have reached is that while we have developed machines to behave more like humans, we have developed education systems that push children to think more like computers. Are we okay with this?

Whether a computing system could ever be considered a mind is a tricky subject. Neural networks offer an interesting comparison between AI and the human mind. Their structure simulates how neurons work in a brain, and according to one article they can be organized in a hierarchical manner with the latest hardware, which leads to a level of “deep” learning. However, this does not mean that a neural network is essentially a human mind. AlphaGo needed to learn Go from 150,000 games, where a human would require far fewer. The human brain is immensely complex, with around 100 billion neurons, and relies on biological machinery, not digital decisions. An ethical implication of classifying a computing system as a mind is that we would need an ethics committee in place to understand the technology and grant the necessary rights to the computing system. There would need to be controls and laws in place to figure out where blame should fall in situations such as a car crash involving a self-driving car.
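
The hierarchical layering mentioned above can be sketched in a few lines of code. This is only a toy: the weights below are arbitrary placeholders I made up, whereas a real network learns its weights from data, which is exactly the part that took AlphaGo 150,000 games.

```python
import math

def layer(inputs, weights):
    """One dense layer: weighted sums squashed through a sigmoid."""
    return [
        1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
        for row in weights
    ]

def forward(x, layers):
    # Each layer transforms the previous layer's outputs -- the
    # "hierarchical" structure that deep learning stacks many times.
    for weights in layers:
        x = layer(x, weights)
    return x

# Two stacked layers: 2 inputs -> 3 hidden units -> 1 output.
network = [
    [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]],  # hidden layer (3 neurons)
    [[1.0, -1.0, 0.5]],                      # output layer (1 neuron)
]
output = forward([1.0, 0.0], network)  # a single value between 0 and 1
```

Even at this scale the analogy to a brain is loose: each “neuron” is just a weighted sum, nothing like the biological machinery of the 100 billion neurons mentioned above.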

Project 3 Reflection: Privacy Paradox —

Project 3 Reflection: Privacy Paradox

After having gone through the challenges, I have decided to reflect more on my technology habits and how they affect my online persona that is sold by data brokers. I think I will be more mindful of how the pages I like on Facebook affect my profile. A lot of them are remnants of my “middle-school” Facebook profile. I plan on taking a look at what I currently have on my Facebook profile and editing the information that is no longer necessary. It blew my mind to hear that there are 52,000 different labels Facebook can assign to you. I knew they did profiling, but I was previously unaware of the scale of how many different classifications their algorithms work with. Therefore, I would like to adjust the amount of information I share on Facebook in an attempt to keep more things private.

The Panopticlick challenge was the most surprising to me; I did not know about a browser’s fingerprint or the scale of tracking that is done online. To combat this, I am installing the add-on discussed in the podcast, Privacy Badger.
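
To see why a browser fingerprint works without any cookies, here is a rough sketch of the idea: many individually mundane attributes, combined, form a nearly unique identifier. All of the attribute values below are invented for illustration, and real fingerprinting scripts collect far more signals than this.

```python
import hashlib

# Hypothetical browser attributes a tracking script can read.
attributes = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12)",
    "screen": "1440x900x24",
    "timezone": "UTC-5",
    "fonts": "Arial,Helvetica,Times New Roman,Courier",
    "plugins": "PDF Viewer,Widevine",
    "language": "en-US",
}

# Concatenate the attributes in a fixed order and hash the result.
canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
```

For one stable browser configuration the hash stays the same across visits, so it acts as an identifier; change any single attribute and it changes completely. Tools like Privacy Badger try to block the scripts that collect these attributes in the first place.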

The focus of this podcast was privacy and what we can personally do to defend our Internet persona; however, I think we should also consider the role of advertisers here. Do companies that rely on advertisements, like Facebook, have a right to your personal data because you are not paying to use the web page? Are you instead paying them for their service through your data? You do sign a terms of service to use the web page, although most of us do not read it. I do value my privacy, but I think some concessions could be made. Since we are not paying for particular services, a lot of what the Internet gives us for free depends on companies using and selling our data. It is essentially a trade, although it does seem to come at a personal cost. However, if companies are not using your data for harm, then I think there could be a compromise between privacy and collected user data.

I think it is a tough decision to choose between personal privacy and technological convenience. Currently, I choose technological convenience, but perhaps later in my life I will value my personal privacy more. Part of what makes it easier to choose that side is that I do not think I have anything to hide. I am privileged in that I am not part of a marginalized group that could be harmed by ethnicity targeting. The degree to which my ads are personalized is reflected in makeup and shoe advertisements. The stakes are not really that high.

However, I still think it is wrong that my data is being brokered to the degree it is without my knowledge. With certain aspects of web browsing (e.g. frivolous online shopping) it may be okay to broker the data, but other data, like medical data, seems too personal to be bought. It was mentioned in one of the podcasts that brokers can utilize your property data because it is a public record. I think there is a line we should not cross. Just because it is legal to collect and bundle the data does not mean we should.

In my Data Science class we did a case study on a scenario where a company paid for medical data about its employees in order to flag employees for poor health. When I first heard about this I was not overly concerned, but the more I look into privacy, the more it seems that there is too much data you can collect about a person that could lead to unjust assumptions. It is a difficult topic because although a lot of data brokering is occurring, it is currently protected by law. It is not illegal to collect data or sell it. I think more awareness needs to be raised among the general public. I do not think a lot of people know what is really going on with their data online, and perhaps more awareness would lead to a change in the way our data is handled. Perhaps some things, like undisclosed tracking, could be made illegal.

I think privacy in general is definitely worth fighting for and protecting, however, I think we will need to change the structure of the Internet in order to protect privacy. It is unfair to ask companies to offer users services for free without taking any data from them if the company must invest in expensive data centers and servers. Companies need a way that they can still continue to make money. On the flip side of this argument, I think there needs to be more transparency to users about what is happening to their data. One of the podcasts mentioned that the UK government stated that a simple click on the Internet should be stored for a year. I do not know if this type of surveillance is necessary. I think we need to alter the Internet as well as government laws that allow data to be sold and transferred. Privacy is not gone entirely but it will take significant effort to bring more control back to the users. 

Fake News — March 25, 2017

Fake News

“Fake News” is everywhere on the Internet. To me, “Fake News” means something that is not true. The content is sometimes harmless if the subject of the article does not matter too much. It can also be annoying if you see an article that contains a known lie. The most problematic aspect of “Fake News” is that it can be dangerous if the articles influence important decisions, like what business to work for or who to vote for in a presidential election. The stakes are too high to make mistakes based on falsehoods. The distribution of “Fake News” is problematic because users are lazy: one article points out that users may read just the headline of a story, and then the false story becomes a talking point online and in real life.

Nobody fact-checks anymore, and one article pointed out that the “fake news” scene has changed in the past few years. The article mentions that the writer of one “fake news” outlet originally tried to hurt Trump’s presidential campaign, but it instead had the opposite effect.

I think technology companies should make a conscious effort to monitor and suppress “Fake News,” but realistically it would not be possible to suppress all of it. For example, looking at Facebook and social media feeds around the time of the presidential election, there was a decent amount of “Fake News” on my timeline, including personal Facebook statuses. People have a right to their beliefs and to share them online, but this is problematic because those posts can contain falsehoods. When I see “Fake News” I usually roll my eyes and scroll past, but I could see people reading things online and believing “alternative facts” to be true without doing more research. I do not know where you should draw the line and censor users, though. If someone posts something false, do you remove their post? I think Facebook and Twitter should use mechanisms to block news articles that are false, but I do not know what can be done at the individual user level. In a sense, this means that Facebook and Twitter would be providing censorship on the Internet. I am comfortable with this idea, although I can see why others might find it problematic, or too much overreach for a private entity.

“Fake News” is not a new concept. An important thing to point out is that there have always been lies in politics; Facebook simply makes them easier to spread. I do not think Facebook can say that it did not influence the election results, though Zuckerberg claims otherwise. Something surprising I read is that a US Facebook user is worth about four times a user outside the US when it comes to clicking on ads, which incentivizes foreigners to create “Fake News” targeted at American issues.

I am comfortable with a private entity classifying information as “fake.” If something is false, Facebook or Google should mark it as fake. Something needs to be done, especially considering that false stories on Facebook had more user engagement than mainstream news. However, I am not sure private entities will do enough. One article points out that although companies could shut down “Fake News,” they may choose not to because of the money it generates through ads. Companies claim that they will not allow “Fake News” sites to use their ad networks, but it is unknown what the criteria will be. Some of the responsibility for calling out “Fake News” has to fall on readers themselves. Something Facebook could consider is reinstating human news curators to check news. According to one article, Facebook got rid of this position after being accused of favoring particular media sources, and now the job rests on an algorithm. However, it is important to note that the human news curators could have been biased.

Another article mentioned that Facebook did come up with an algorithm to detect and block fake news, but it flagged a disproportionate number of right-wing “fake news” articles, so the updated algorithm was never released. Facebook claims that the trending topics listed on the site will no longer be personalized based on your interests, so you will get an unfiltered look at what is going on.

I do not rely on Facebook or Twitter for my daily news. They can be vehicles for figuring out what is going on in the world, but I look at other news websites or YouTube videos of shows to get my news. However, the majority of US adults (62 percent) now get their news from social media. I do not really think that I am living in an echo chamber.

The rise of social media and “Fake News” does not mean to me that we live in a “post-fact” world. Truth does stand a chance in a world dominated by “Fake News,” but only if we continue to question sketchy news sites and point out falsehoods, for example with the use of fact-checkers. Some people believe this job of defending the truth is Facebook’s responsibility, as it is used as a news source for many Americans, but I think it is a job for companies as well as individuals online. This will of course rely on users taking action to point out false news stories. If we do not draw attention to fake things posted online and “let things slide,” then we risk normalizing “Fake News,” which would be an insult to truth.

Corporate Personhood: Muslim Registry — March 19, 2017

Corporate Personhood: Muslim Registry

Even if you think that corporations are not given the same rights as an individual person, you can still make the argument that tech workers have a right to pledge not to work on an immigration database. Individuals have a right to determine what projects they are comfortable working on based on their moral and ethical views. In some cases this may mean it is up to the individual whether they are willing to sacrifice their job at a specific company, as personal views do not dictate what a company is willing to work on. The individuals who signed neveragain.tech claim that they would rather quit their jobs than participate in working on a Muslim dataset.

I do believe in the concept of Corporate Personhood, where morality and ethics apply to corporations. Although the company itself may have different goals than an individual, such as choosing to optimize the profit it takes in (although this could also be an individual goal), a company at its heart is made up of individuals who carry ethical and moral obligations. As NPR explains, “the dictionary defines ‘corporation’ as ‘a number of persons united in one body for a purpose.’” The National Review article states that there is a long history of legal recognition of corporate personhood. It is not a sufficient counterargument to say that just because a company does not have the right to marry, the concept of corporate personhood does not apply.

Therefore, I do not think you can claim that individuals’ moral obligations do not extend to the company they work at. At the same time, it is true that a corporation may choose not to uphold ethical and moral obligations, and perhaps this lapse happens more easily for a corporation than for an individual when the goal is to advance the business. Too often we do not hold companies to an ethical and moral standard. A company does have a right to work on something that seems to cross an ethical boundary; however, it must also be willing to face the consequences. Kent Greenfield writes, “Corporations should be seen as having robust social and public obligations that cannot be encapsulated in share prices.” Depending on the nature of the business, the “consequences” may never come if the work is lawful and the general public does not know about it. Consumers often express their distaste for a company by boycotting a particular product, or a journalist can criticize the company in the news.

I think corporations should make business decisions based on morality and ethics. However, the people who decide what is right and wrong for a company are the top-level management, with perhaps some influence from their HR department. An entry-level employee can act as a “whistleblower” and voice their concerns over an unethical or immoral decision to their manager. That manager can push the issue further up the company hierarchy, but ultimately an ethical or unethical decision is regarded as a product of the management, say the CEO, even if he or she did not come up with the idea in the first place. As they say, with great power comes great responsibility.

Speaking about the Muslim registry specifically, I think the existence of such a dataset is unethical. Another conceivable solution to this problem is to track everyone in the country and their religious beliefs. Although this idea seems like a “Big Brother” nightmare, I would prefer it over creating a dataset specifically to track people from one religion; it is less discriminatory. A counterargument to my opinion would be that the dataset could provide more safety to the United States. One article pointed out that the concept of a Muslim dataset is not new: the government implemented a similar program called NSEERS over ten years ago. Kaveh Waddell writes that it was used to track immigration data, did not lead to a single terrorist-related warning, and cost the government ten million dollars annually. Based on past experience with a targeted Muslim dataset, it does not seem reasonable to create one from a security and budget standpoint, let alone an ethical one.

It inspires me that so many computer scientists and companies are speaking out on this issue. At first glance it seems this will mean that unethical and immoral products and services won’t be made. On the other hand, the people who would probably be in charge of making this “Muslim registry” are not speaking out (e.g. software consultants), which could indicate their commitment to fulfilling their clients’ needs regardless of the ethical consequences of the project at hand. Individuals and companies do have a right to refuse work on an ethical basis, but the pessimistic part of me feels that although some people may have these standards, myself included, an unethical and immoral product will still be made. Not everyone shares my beliefs; some may not view a Muslim registry as immoral or unethical, or may simply not care about its implications.

Online Advertising — March 5, 2017

Online Advertising

When you willingly choose to visit a site, the company does have a right to collect information to customize your experience; however, there are ways a company can utilize your data that are unethical.

As one article points out, users crave free services. Servers and other computing resources cost a lot of money that is abstracted away from the everyday user. I think it is right for companies to take information that they can use as revenue, as they are providing a free service. Perhaps if users are adamant that their data should not be collected, companies could offer paid versions of their sites with a promise not to collect data, as one article suggests. However, I do not think many users would switch to this model even if they do not like the feeling of someone collecting their data; they would rather get something for free if they know their information is already being collected. One of the articles pointed out that only 11 percent of Americans would be willing to pay $1 per month to withhold their data from their favorite news site, although 69 percent of Americans were not willing to accept a $1 discount on their Internet bills in exchange for allowing their data to be tracked. This suggests that few people will pay to hold back data they think is already flowing to a website, but a large majority will be unwilling to share it going forward.

The collection of data leads to web page personalization that can have a positive effect. If a website suggests a similar style of clothing that you had not heard about, or reminds you of a product you forgot to purchase, this can be helpful. Even so, I can understand why someone might feel that, although there may be nothing embarrassing in their data, they do not want to share the information with companies and be subject to manipulation for someone else’s financial gain.

The situation gets trickier when you move to something more personal, like assumptions about one’s medical data. One article pointed out how users were incorrectly targeted for anorexia studies, which seems like a breach of privacy. It goes on to discuss “uncanny personalization,” which occurs when the collected data is close, but not close enough, in its analysis of an individual.

So when data personalization is done well, an app may be more useful to you, but when it is done wrong (even by a slight margin) it can lead to anger, frustration, or sadness. When a company sends a congratulatory email to expecting parents, it could find itself in a tough situation if the email reaches a wider-than-intended audience. Even when the personalization is correct, companies may need to take steps not to creep out their users, which makes the whole process sound like a delicate mind game of tricking users into buying items. Is this process terrible if the user does need these items, though?

Consequently, the specific nature of the data collected, as well as the conclusions companies draw, may lead to different degrees of privacy loss through data mining. Data that is less personal does not feel like an attack on my privacy, but to be honest I am not sure what data brokers are concluding about me from my fashion choices. Am I being flagged for a serious diagnosis that I am unaware of? I can see why people may think that complete customization could minimize personal privacy overall. Do the benefits outweigh the negatives?

I think whether online advertising is too invasive depends on the particular website. In the case where someone is shown services on how to come out to someone as gay, I think that is too invasive, but it is a double-edged sword because someone may actually want that information. It gets trickier when profiling is not exact and websites make incorrect assumptions about their users. A system with 88% accuracy at predicting whether an individual is homosexual does not seem correct enough to make assumptions about users, as the consequences of a false positive could be grave.
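
The worry about false positives can be made concrete with a quick base-rate calculation. The 5% prevalence figure below is an assumption for illustration, not from the article, but the lesson holds for any rare trait: an "88% accurate" classifier can still be wrong about most of the people it flags.

```python
# Base-rate sketch: with a rare trait, false positives from the large
# unaffected group swamp the true positives from the small affected group.

accuracy = 0.88      # assume 88% sensitivity and 88% specificity
prevalence = 0.05    # assumed: 5% of users actually have the trait

population = 100_000
have_trait = population * prevalence
lack_trait = population - have_trait

true_positives = have_trait * accuracy           # correctly flagged
false_positives = lack_trait * (1 - accuracy)    # wrongly flagged

precision = true_positives / (true_positives + false_positives)
# precision comes out to roughly 0.28: under these assumptions, only
# about 28% of flagged users actually have the trait.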

I use Adblock mainly because it was easy to set up and because people made comments suggesting I did not know how to “correctly use the Internet” without the latest plugin. Overall, I love it: while I tolerate ads, I do not enjoy their side effects, as I really do not need to be reminded of that top I forgot to buy and should not. Blocking ads is also appealing because it speeds up load times and cuts down on security risks.

The articles I read left me conflicted about whether it is ethical to use these tools. One article stated that to block ads is to block food from a child’s mouth. While this at first seemed like a grave overstatement, it does seem apparent that companies may have to cut staff and benefits because a huge portion of readers block ads. I pride myself on not illegally streaming entertainment, as it is a form of stealing. After reading the articles, I now feel like I could be stealing, although ad blocking is currently legal.

The situation may not be as grave as some of the articles make it seem. I have seen web pages that block you from accessing content until you turn off your ad blocker. Maybe companies need to invest in this kind of software or require paid subscriptions to access their pages, although some businesses may not flourish under this model. And if ad blocking is really that damaging to Google’s ad revenue stream, why does Google Chrome allow the plugin?

FBI vs Apple — February 24, 2017

FBI vs Apple

Technology companies should not purposely weaken encryption, as they are dedicated to computer security and to keeping their customers’ data as safe as they possibly can. I am against implementing backdoors in products for the purposes of government surveillance because you cannot guarantee that only a few individuals will ever be able to access a backdoor once it exists.

I think companies like Apple have a greater ethical responsibility to protect the privacy of their users than to help prevent violent or harmful activities that their platforms may enable. In a free market, Apple and other corporations answer to their users more than to the government. In an ideal world, companies could protect their users and prevent harmful actions at the same time; in a world of free-flowing communication, however, these two conflicting goals have to be balanced. If companies weaken their encryption, bad actors will simply find other outlets to communicate through, as one article notes with ISIS terrorists’ use of “Telegram.” Another article points out that deliberately making data less secure would contradict the industry’s recent heavy emphasis on computer security.

Apple wants to protect information from hackers and criminals who want to access it, steal it, and use it. Customers rightly hold Apple and other companies accountable for protecting their personal information, as that is one of their duties. This is not to say that companies should refuse to cooperate with the government: Apple gave the FBI backups from the phone in question and complied with court-mandated orders. Still, it seems wrong to ask the engineers who worked hard to encrypt the iPhone to change the software to make it less secure.

Extreme terrorism is an unfortunate reality; however, companies that weaken their encryption could be exposing their customers to privacy attacks more than they would be preventing terrorism. The FBI article itself admits that privacy is a civil liberty and that the free flow of information is vital to a democracy. Why compromise these rights for everyone in order to watch a few extremists?

If Apple were ordered by the government to create software with weaker encryption or a backdoor, there is no guarantee the government would pay them to do so. In effect, this situation would force a private company to do free work for the government, as Apple is adamant that the software being demanded does not exist. The government claims it eventually got into the phone by paying a million dollars, but from what I have read, Apple was expected to create this software itself in the name of safety. If the company focused resources on this feat, it would be losing money, as the software adds nothing to its business. I could also see customers being willing to leave Apple over it.

I am against the concept of a golden key that can unlock a system. I think privacy is a right. This does not mean that saving lives is unimportant, though it is hard to draw the line on how much daily privacy we should give up to keep everyone safe. The argument “If you’ve got nothing to hide, you’ve got nothing to fear” perhaps works in an ideal world. But if you choose to prioritize government surveillance over privacy through means such as a golden key, you must acknowledge that you are opening up a risk that someone else may get into your system: it creates a means of access. One thing the FBI wanted in the San Bernardino case was a way to brute-force the iPhone’s passcode. If Apple were to implement this OS change across its phones, then every stolen phone (approximately 3 million phones in 2013 alone) could be vulnerable to brute-force infiltration. Even if it were not an OS-wide change, people who lack the software could keep working toward the same goal once they know it is possible. And if tech companies retain a means of accessing user communications, the encryption keys have to be stored somewhere in their systems, making a high-stakes target that would attract more malicious attacks.
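A small sketch helps show why the retry limits matter more than the passcode itself. The guess rate below is my own rough assumption for illustration; only the size of the 4- and 6-digit keyspaces is a hard fact.

```python
# Rough sketch of why removing retry limits matters (illustrative numbers).
# A 4-digit passcode has only 10,000 possibilities; what protects it is the
# enforced delay (and wipe) after failed attempts, not the keyspace itself.

def time_to_exhaust(keyspace: int, seconds_per_guess: float) -> float:
    """Worst-case time in seconds to try every possible passcode."""
    return keyspace * seconds_per_guess

four_digit = 10 ** 4
six_digit = 10 ** 6

# Assumed rate: ~80 ms per attempt once software retry limits are removed.
fast = time_to_exhaust(four_digit, 0.08)
print(f"4-digit, no retry limits: {fast / 60:.0f} minutes")
print(f"6-digit, no retry limits: {time_to_exhaust(six_digit, 0.08) / 3600:.1f} hours")
```

Under these assumptions, a 4-digit passcode falls in minutes and even a 6-digit one within a day, which is exactly why the lockout-and-wipe behavior the FBI wanted disabled is the real line of defense.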

Finally, I believe it is impossible to ensure the government would use this golden key for only this one phone. The government would continue to use it, and I can anticipate situations where the key is compromised if the software leaks to the public. Once the software is created, it could be used again and again, including against innocent people.

Project 2: Hidden Figures Reflection — February 22, 2017

Project 2: Hidden Figures Reflection

Hidden Figures is a great movie that highlights three black women who made significant contributions to NASA’s space program in its early stages. The movie illustrates the challenges that women and minorities face when they attempt to break into STEM fields traditionally dominated by white men.

The movie depicts a period when life was especially difficult for black men and women due to segregation, which produced separate but decidedly unequal rights. As I said in our podcast, the term “glass ceiling” has been popularized in recent years to describe the invisible barrier that makes it harder for women and minorities to succeed. In this time period, the ceiling was an actual reality. Depending on the color of your skin, you were not allowed to attend certain classes or schools. You were not allowed to use certain books at the library. This led to a disparity in the level of education readily available to minority students. A powerful scene near the beginning of the movie shows Katherine Johnson’s parents being told that they must move their family so their daughter can attend a school where she can reach her potential. There simply were not as many opportunities for minorities to receive a higher education.

While segregation is rightfully a thing of the past, challenges still keep minorities from breaking into STEM fields. Although there are no longer laws barring minorities from certain schools, people may feel societal pressure that they do not belong at predominantly white schools like Notre Dame, especially if they feel they cannot fully experience the culture they grew up with. Some women face criticism of their capability to perform in a STEM program. Moreover, people who stand out, like a female computer science student, may feel they have to carry the torch for their group, and that admitting defeat at an academic or professional level would let down the group at large.

Katherine Johnson seemed to be the first female “computer” in that control room, let alone the first black woman. Her coworkers seemed fine with a female secretary but were unsure how a woman would handle the math involved with launching a rocket. Women and minorities may feel like outsiders and could find themselves the only person on their team who looks a certain way. This is not always a bad situation, as I believe people can grow significantly when placed in uncomfortable circumstances; however, coworkers are not guaranteed to be inclusive or supportive. Women in STEM fields may face discrimination or sexual harassment. This past week, a story about Uber went viral that depicts the unfortunate reality of how a woman can be discriminated against because of her gender in her career. No one should have their performance review changed in order to further someone else’s agenda.

Role models can be very important. I think one reason you do not see a more diverse group in computer science is the lack of role models representing certain groups, because individuals may relate better to a role model who is like themselves. In high school, my brother was my role model. I was fortunate to have a close family member pursuing computer science, so I got a real-life view of what CS is actually about. Having a role model at an individual level was much more powerful for me than hearing about a successful company’s CEO. While I respect Steve Jobs, Bill Gates, and Mark Zuckerberg, I could not relate to that level of success; they made the industry seem more exclusive to me, no matter how many times I heard that computer science should be for everyone. My brother encouraged and motivated me to pursue my STEM field and never seemed to doubt my capability as a programmer.

The movie Hidden Figures is important because its three main characters, Mary, Dorothy, and Katherine, could be great role models. Their stories should be celebrated: they overcame hardships and did great things for NASA. Young girls who watch the movie could be inspired to become engineers or mathematicians. I think the movie will be celebrated for years to come, and through its circulation it could inspire generations of young people, including young black girls, to enter STEM fields.