enderben: (Default)
2025-04-22 01:49 pm

My Answer to HWcase6, Q1

Case Study: Ethical Implications of Knightscope's K5 Security Robot Deployment

Citation:
The Tricky Ethics of Knightscope's Crime-Fighting Robots – WIRED
https://www.wired.com/story/the-tricky-ethics-of-knightscopes-crime-fighting-robots


Key Facts

  1. In 2017, the San Francisco SPCA employed a Knightscope K5 security robot to patrol its premises, aiming to deter crime and ensure staff safety by monitoring for vandalism and hazardous materials.

  2. The K5 robot autonomously navigated the area, capturing 360-degree video and collecting various data to identify potential security threats.

  3. The robot's presence sparked criticism from community members, particularly the homeless population, who felt targeted and dehumanized by the surveillance.

  4. The organization stated that the robot was intended to prevent crimes like break-ins and vandalism, not to target homeless individuals.

  5. The incident raised questions about privacy, the ethical use of surveillance technology, and the potential for such tools to disproportionately affect vulnerable populations.

  6. Due to the negative reactions and ethical concerns, the SPCA ended the robot's deployment.

  7. The case underscores the complexities of integrating AI and robotics into public spaces, highlighting the need for ethical considerations in design and deployment.

  8. Experts suggest that engineers and ethicists must work together to develop responsible AI systems that consider societal impacts.


Discussion Questions

  1. What ethical considerations should organizations evaluate before deploying surveillance robots in public spaces?

  2. How can the use of such technology inadvertently marginalize certain groups, and what measures can prevent this?

  3. Should organizations be required to inform the public about surveillance technologies in use, and obtain consent where possible?

  4. How important is community input in decisions about implementing surveillance technologies, and how can organizations effectively gather and incorporate feedback?

  5. Computer Security Question: What are the potential risks related to data security and privacy with autonomous surveillance robots, and how can these be mitigated?

  6. What does virtue ethics say about this case?
  7. What does utilitarianism say about this case?
  8. What does deontology say about this case?
enderben: (Default)
2025-04-15 02:24 pm

My Answer to HWcase5, Q3

Me: "I'm sending you a Case Study I put together with some facts and questions about the case. I want you to teach me about the ethics via Socratic Questioning. Ask me a question about the case, I'll answer, and then ask another follow-up question."

ChatGPT: "Great—thanks for sharing this case. Let's dive in using Socratic questioning to explore the ethical layers.

Here’s a starting question:

If the U.S. were to adopt propaganda tactics similar to its adversaries—perhaps spreading exaggerated or misleading information to support a just cause—would that be ethically justifiable? Why or why not?"

Me: "I think it is ethically justifiable because they are spreading false information for a good cause. It might leave a bad taste in your mouth but the goal behind it is good."

ChatGPT: "That's a thoughtful answer—you're leaning toward a consequentialist perspective, where the outcome justifies the means.

Let’s dig a bit deeper:

If we allow ourselves to use falsehoods for good causes, how do we then draw the line between acceptable and unacceptable deception? And who gets to decide what qualifies as a “good” cause?"

Me: "The line between acceptable and unacceptable should be drawn at actively harming people or their livelihood, and a good cause should be something along the lines of upholding the law of the land"

ChatGPT: "That makes sense—you're trying to ground justification in harm reduction and legal norms, which adds an ethical boundary to otherwise murky tactics.

But let’s explore that boundary:

If propaganda doesn’t directly harm someone, but it undermines trust in public institutions or erodes belief in truth over time, does that still count as harm? Why or why not?"

Me: "I don't think it counts as harm because I think people need to be wary of putting complete trust or faith in any kind of public institution or information"

ChatGPT: "That’s a strong stance—you’re emphasizing individual responsibility over institutional paternalism. There’s real merit in empowering people to think critically."

enderben: (Default)
2025-04-08 11:33 am

My Answer to HWcase5, Q1

 Case Study: Beyond the Headlines — RAND’s Christopher Paul Discusses the Russian 'Firehose of Falsehood'

https://www.rand.org/pubs/articles/2016/beyond-the-headlines-rands-christopher-paul-discusses.html

Important Facts About the Case

  1. Christopher Paul, a senior social scientist at RAND, discusses strategic communication and propaganda in modern conflicts.

  2. He emphasizes that information itself has become a battleground, where state and non-state actors compete for influence and public opinion.

  3. Paul points out that adversaries like ISIS and Russia use strategic messaging and propaganda very effectively, often outpacing the U.S. in the information space.

  4. The U.S. government struggles with internal coordination, a lack of agility, and regulatory constraints that hinder rapid, cohesive messaging.

  5. Strategic communication should be seen as a long-term campaign, not just isolated information efforts or reactions to events.

  6. One of the key recommendations is for the U.S. to integrate communication planning at every level of military and diplomatic strategy.

  7. Paul asserts that truth should be a central tenet of U.S. messaging, as credibility and transparency are strengths rather than weaknesses.

  8. He draws a distinction between "propaganda" and "strategic communication," the former often being manipulative and deceptive, while the latter focuses on consistent messaging aligned with values and goals.

Discussion Questions

  1. What are the ethical implications of using propaganda, even for a just cause?

  2. How should democratic societies balance free speech with the need to counter harmful propaganda?

  3. What challenges do governments face when competing with agile, decentralized actors in the information domain?

  4. In what ways can misinformation campaigns disrupt international relations or even incite violence?

3 Additional Questions:
  5. What does virtue ethics say about this case?

  6. What does utilitarianism say about this case?

  7. What does deontology say about this case?

Computer Security-Related Question

  1. How could vulnerabilities in computer security (e.g., hacking, disinformation bots, or data breaches) be exploited to support or counter strategic messaging campaigns?
     

enderben: (Default)
2025-03-18 11:19 am

My Answer to HWcase4 Q1

 Source of the Case:

https://www.asce.org/publications-and-news/civil-engineering-source/article/2024/08/14/5-confounding-civil-engineering-ethical-dilemmas?

 

Eight Important Facts:

  1. Project Overview: A civil engineering firm was contracted to design and oversee the construction of a major infrastructure project aimed at improving urban transportation.

  2. Budget Constraints: The project faced significant budget limitations, leading to pressure on the engineering team to find cost-saving measures.

  3. Safety Concerns: During the design phase, engineers identified potential safety risks associated with the proposed materials and construction methods.

  4. Management Pressure: Company management urged the engineering team to proceed with the original plans despite the identified risks, emphasizing the importance of staying within budget.

  5. Ethical Dilemma: Engineers were torn between adhering to management directives and upholding their professional responsibility to ensure public safety.

  6. ASCE Code of Ethics: The ASCE Code of Ethics mandates that engineers hold paramount the safety, health, and welfare of the public.

  7. Whistleblower Consideration: Some team members contemplated reporting the issue to external authorities to prevent potential harm.

  8. Outcome: The ethical conflict led to internal disputes, project delays, and increased scrutiny from stakeholders.

Questions for Discussion:

  1. How should engineers balance budget constraints with their obligation to public safety?

  2. What steps can engineers take when management decisions conflict with ethical standards?

  3. How can engineering firms foster an ethical culture that empowers employees to voice concerns without fear of retaliation?

  4. Computer Security Question: In what ways could computer security vulnerabilities impact the ethical responsibilities of engineers in large infrastructure projects?

3 Additional Standard Questions:

  1. What does virtue ethics say about this case?
  2. What does utilitarianism say about this case?
  3. What does deontology say about this case?
enderben: (Default)
2025-02-25 09:35 pm

My Answer to HWcase2 Q3

 The following is the transcript of my conversation with ChatGPT discussing my HWcase2 Q1 case:

 

Me:

Here is an ethics case with a list of sources and facts regarding the case:

<I copy-pasted my answer to HWcase2 Q1>

Teach me about ethics of this case using Socratic questioning. You ask me questions to guide my learning.

ChatGPT:

Great! Let’s explore the ethics of this case using Socratic questioning. I’ll guide you with questions to help you think critically about the ethical dimensions of moral injury in healthcare.

How is moral injury different from burnout?

Me:

I think moral injury is different from burnout, because burnout refers to simply being tired of something, rather than actually feeling negative about it

ChatGPT:

That’s a great starting point! Burnout is often associated with exhaustion, detachment, and reduced effectiveness due to prolonged stress. But moral injury goes beyond that—it involves an ethical or moral conflict.

Let’s dig deeper:

Can you think of a situation where a healthcare worker might feel burned out but not morally injured?

This will help clarify the distinction between the two.

Me:

A healthcare worker might have a couple of weeks where they are consistently working extra hours to meet demands, and that might cause burnout but not moral injury

ChatGPT:

Exactly! In this case, the worker is exhausted from the workload, but they are not necessarily facing an ethical conflict—they are just overwhelmed by the physical and emotional demands of the job.

Now, let’s flip the question:

Can you describe a situation where a healthcare worker might experience moral injury but not necessarily burnout?

Me:

If a doctor is not allowed to perform an abortion due to laws against abortions, but they know that an abortion is the only thing that can save the pregnant patient, I believe that could easily cause moral injury but not burnout

ChatGPT:

That’s a strong example! The doctor in this case is not necessarily exhausted or overworked, but they are experiencing moral injury because they are being forced to act against their ethical beliefs—putting a patient’s life at risk due to external restrictions.


enderben: (Default)
2025-02-18 09:48 am

My Answer to HWcase2 Q1

 Source of case:

www.npr.org/sections/health-shots/2023/08/02/1191446579/doctors-have-their-own-diagnosis-moral-distress-from-an-inhumane-health-system

Important Facts:

  • Moral injury occurs when individuals, particularly healthcare workers, are forced to act against their moral values due to systemic pressures.
  • The modern healthcare system often prioritizes financial incentives and efficiency over patient care, leading to ethical conflicts for medical professionals.
  • Physicians and nurses frequently face situations where they must deny or limit patient care due to insurance constraints, hospital policies, or resource shortages.
  • Many doctors experience emotional distress, burnout, and a sense of helplessness when they cannot provide the best possible care to their patients.
  • The issue of moral injury is distinct from burnout—it stems from ethical dilemmas rather than physical or emotional exhaustion alone.
  • Some medical professionals leave the industry entirely due to the stress of working in a system that forces them to compromise patient welfare.
  • The problem extends beyond doctors and nurses to include other healthcare workers, such as social workers and therapists, who also face ethical conflicts in providing adequate care.
  • Potential solutions include systemic reforms, better support for healthcare workers, and changes to insurance and hospital policies that align with ethical patient care.
Discussion Questions:

  1. How do you think moral injury in healthcare affects patient outcomes, and what changes could be made to prevent it?
  2. Should healthcare professionals be held accountable for ethical compromises they make under systemic pressure, or should the system itself bear responsibility?
  3. How can hospitals and insurance companies balance financial sustainability with ethical patient care?
  4. (Computer Security Question) How could patient data privacy concerns contribute to moral distress in healthcare, and what ethical considerations should be taken when managing sensitive patient information?

Ethics Questions:

  • What does virtue ethics say about this case?
  • What does utilitarianism say about this case?
  • What does deontology say about this case?

 
enderben: (Default)
2025-02-11 11:11 pm

Notes on Theme of Lecture 2/11/2025

David Hume's ethical theory places a strong emphasis on the idea that morality is based on human emotion rather than objective reasoning. He challenges the idea that ethics can be determined by reason alone, arguing that moral judgment is shaped by emotion. This viewpoint calls into question whether morality is fundamentally human-specific or should be universal, applying to all rational beings.

Hume's views are consistent with those of earlier philosophers who believed that human compassion is natural. Problems with a sentiment-based approach to ethics include the subjectivity of individual feelings and the difficulty of resolving moral disputes. Nonetheless, Hume contends that certain emotions are universal and can form the basis of morality. His perspective is sentiment-based, holding that moral worth is determined by the advancement of social good and human happiness.

enderben: (Default)
2025-02-11 01:59 pm

My Answer to HWcase1, Q3

 The following is the transcript of my conversation with ChatGPT discussing my HWcase1 Q1 case:

 

Me:

Here is an ethics case with a list of sources and facts regarding the case:

<I copy-pasted my answer to HWcase1 Q1>

Teach me about ethics of this case using Socratic questioning. You ask me questions to guide my learning.

ChatGPT:

Alright! Let's explore the ethical dimensions of this case through Socratic questioning.

What do you think it means to "own" an image?

Me:

I think ownership of an image comes from having created it yourself. If you use AI to make an image, then I think part of the ownership belongs to the AI. But also, AI is effectively using images it doesn't own, so I don't think AI art can fully be 'owned' by a single person

ChatGPT: 

That's an interesting take! You're considering both the creator’s role and the AI’s contribution, which brings up more ethical questions.

You mentioned that AI is using images it doesn’t own. Do you think that makes AI-generated images ethically questionable, or is it similar to how human artists learn by studying other works?

Me:

I think artists learn from the styles of other artists to make their own, truly unique style. It may be similar and take inspiration from other artists, but the style is still unique to the individual. AI art fails to capture this uniqueness because it fails to develop its own 'truly unique' style

ChatGPT:

That’s a strong distinction! You're saying that human artists interpret and transform what they learn into something uniquely theirs, while AI simply recombines existing patterns without true originality.

If AI is fundamentally different from human creativity, should we treat AI-generated works differently under copyright law? For example, should they be in the public domain since no human is the true creator?

Me:

I believe so. I think it would make sense if, when an AI generated piece is created, it references the different art it used to make its own piece, and those artists receive partial credit for that piece.

ChatGPT:

That’s a compelling argument! You’re proposing a system where AI-generated works aren’t fully "owned" but rather acknowledge and compensate the original artists whose works contributed to them.

enderben: (Default)
2025-02-06 01:54 pm

My Answers to HWProj1

 Q1. List several possibilities you are considering for a project topic. (If you already have decided on a topic, then explain what options and possibilities you see related to that topic.) There should be some connection to the course, but mostly it is up to you – you should seek a topic that you would find interesting to dig into. Length requirement: at least 10 sentences or bullet points. Put your answer in your blog, noting that it is the answer to question #1 of HWproj1, so we can easily find it.

Answer to Q1:

I have already decided on a project topic. My project will be on the ethics and authenticity of AI-generated art. It's a topic I'm personally passionate about, because my sister is an artist. We have had many discussions about the topic already, and I hope to share many of our thoughts through my project. AI-generated art raises important concerns about ethics and authenticity in the creative industry. Since AI art is trained on pre-existing human-made works and produces images based on patterns rather than individual invention, many argue that it lacks true originality. Some people (such as my own dad) argue that AI is only a tool, similar to a paintbrush or camera, that makes it possible to create new kinds of art but still needs human guidance. The use of copyrighted content in AI training raises ethical questions, as there are ongoing debates over whether this use amounts to fair inspiration or intellectual property infringement. AI art also raises the question of authenticity: because it is created from the patterns and styles of existing art, it is fair to ask whether the result is truly original.

Q2. List the possibilities you are considering for project format, the pluses and minuses of each, and your preferences and opinions. Also discuss the question of individual vs. group project. Length requirement: at least 10 sentences or bullet points. Put your answer in your blog, noting that it is the answer to question #2 of HWproj1, so we can easily find it.

Answer to Q2:

I am considering two different project formats. The first would be a piece of AI art that I trace and/or reference to create my own piece of art. The second format I am considering is a poem of sorts. I like the idea of doing a piece of art, since it would portray the topic visually, but it makes it difficult to get a point across without words. The poem format suffers the opposite problem, but has the opposite advantage. I will make this an individual project. I like both formats a lot, so I am considering doing both together. With both formats combined, I could portray the case both visually and verbally in a way that makes sense to everyone.

Q3. (a) Go to the “Course Information” tab on the course web site. Read about the course, especially the parts about the project. Note on your blog any questions you have, or note that you don’t have any questions. (b) Then go to the “Syllabus” tab on the course web site. Read over it. Note on your blog any questions you have, or note that you don’t have any questions.

Answer to Q3:

No Questions

Q4. Email the instructor your blog address (jdberleant@ualr.edu) if you didn’t already make your blog and give the instructor its URL in class.

Answer to Q4:


I did it.

enderben: (Default)
2025-02-04 12:14 am

HWcase1

Sources of Case:
1. www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/

2. newsroom.gettyimages.com/en/getty-images/getty-images-statement

3. https://www.pinsentmasons.com/out-law/analysis/getty-images-v-stability-ai-implications-copyright-law-licensing

 

Important Facts:

1. Getty Images sued Stability AI in January 2023, claiming that the company had trained its AI model, Stable Diffusion, using more than 12 million of Getty's photos without authorization.

2. Getty Images claims that Stability AI violated its intellectual property rights by copying millions of photos from its website without permission.

3. Stability AI used these images to build Stable Diffusion, a generative AI model that can produce new images from the data it was trained on.

4. The case, which was brought before the London-based High Court of Justice, represents a major legal dispute over the use of copyrighted content in AI training datasets.

5. The rules governing copyright and compensation in the creative industries may change as a result of this lawsuit, which could set a precedent for how intellectual property laws apply to AI-generated material.

Discussion Questions:

  • What moral issues arise when AI models are trained using copyrighted content without express consent?
  • How can the protection of content creators' intellectual property rights and the promotion of innovation in AI development be balanced?
  • What rules or regulations might be put in place to guarantee moral behavior in the creation and application of generative AI models?
Standard Questions:
  • What does virtue ethics say about this case?
  • What does utilitarianism say about this case?
  • What does deontology say about this case?