
Unit-III BIAS AND FAIRNESS IN AI SYSTEM

Q1. Explain the ethical considerations in AI development and deployment.

Ans: Artificial Intelligence (AI) has enormous potential to transform industries and society. However, its development and deployment raise important ethical questions that must be addressed to ensure AI benefits humanity without causing harm. Ethical considerations guide developers, users, and organizations in building responsible, fair, and safe AI systems.

Ethical Principles:

  1. Fairness and Non-Discrimination: AI systems must avoid biases that lead to unfair treatment across race, gender, and socioeconomic groups. Ensuring fairness requires diverse, representative training data so that results and decisions are accurate for all groups.

  2. Accountability and Responsibility: Developers and stakeholders must take responsibility for AI outcomes by measuring and auditing AI behaviour. Impact assessments build accountability into the system.

  3. Transparency and Explainability: AI models should be transparent, with key decisions that can be explained. Explainability helps users understand outcomes and empowers developers to detect and avoid harmful biases in models.

  4. Privacy and Data Protection: Responsible personal data collection, storage, and sharing practices are critical to avoid misuse. AI developers should adopt privacy-preserving techniques and follow regulatory guidelines for data usage and privacy.

  5. Human Agency and Oversight: Human control over AI decisions should be maintained. Humans need appropriate intervention rights, especially in high-risk domains, upholding dignity and preventing unchecked autonomous decision-making.

  6. Inclusivity and Social Benefit: AI development should promote inclusion and equitable access, avoiding deepening social inequalities. Multistakeholder governance models involve diverse perspectives to build socially beneficial systems.

  7. Legal and Regulatory Compliance: Aligning AI development with international laws, sector regulations, and emerging AI-specific frameworks (e.g., EU AI Act) ensures ethical conformity and protects society.

  8. Environmental Sustainability: Ethical AI includes consideration of environmental impacts, advocating for efficient resource use and sustainability in AI lifecycle management.

  9. Recent Developments in 2025: Increasing focus on AI transparency, fairness, and accountability to build public trust after notable biased AI failures. Ethical and transparent frameworks address disparities in AI systems across healthcare, policing, and deep-fake detection. Training models on more representative datasets and reducing the carbon footprint through energy-efficient AI are key priorities.

Q2. Define Bias in AI. Explain the types of bias in AI.

Ans: Bias in AI refers to unfair, prejudiced, or discriminatory outcomes generated by AI systems due to flawed datasets, biased algorithms, or human-driven design choices. Ensuring fairness is critical for equitable AI deployment.

Types of Bias in AI:

  1. Sampling Bias: Sampling bias occurs when the training dataset is not diverse and does not represent the whole population the system serves, leading to poor performance and biased decisions. It can be caused by incomplete data, an unfair collection process, or poor selection criteria. A common example is facial recognition: a system trained mostly on light-skinned faces will perform poorly on dark-skinned people. To avoid this, an AI system must be trained on a representative dataset covering all groups of the population. This is also often called representation bias.
  2. Algorithmic Bias: Algorithmic bias occurs due to flaws in the design and implementation of the algorithm. The system may prioritise certain attributes, leading to unfair results; for example, a hiring or lending system may favour applicants based on gender. Because it is systematic, this bias repeats itself: the algorithm keeps applying the same flawed rules, which signals the need to review the algorithm and fix its decision logic.
  3. Confirmation Bias: Confirmation bias occurs when an AI system tends only to reinforce the preferences of its users, generating the ideas or opinions they want to see. This bias completely ignores the true picture of the content and reinforces the biases that already exist in the AI system. For example, after a user watches a particular type of content on a social media platform, the algorithm keeps recommending similar content.
  4. Generative Bias: Generative bias occurs in generative AI models, which create data such as images and text based on the inputs they receive. It arises when the model outputs unbalanced representations in its content, skewing the generated data. For example, a text generation model trained mainly on literature from one culture (e.g., Western culture) may under-represent other cultural literatures.
  5. Reporting Bias: Reporting bias occurs when the frequency of events in the training dataset does not align with their real-world frequency, i.e., the dataset does not accurately capture how often events actually occur. It is common in sentiment analysis models: if the training reviews skew positive, the model over-represents positive sentiment and gives a biased understanding of how a product is actually received in the market.
  6. Automation Bias: Automation bias occurs when the outputs of an automated AI system are favoured over non-automated sources even when the AI makes errors. Humans tend to rely on AI systems completely; for example, in healthcare, doctors may rely on an AI system's decisions rather than the reality of the situation. Mitigating this bias requires human monitoring and intervention.
  7. Group Attribution Bias: Group attribution bias occurs when data collected from groups is used under the assumption that all individuals in the same group share similar characteristics.
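
The sampling-bias discussion above can be made concrete with a quick dataset audit. The sketch below is a minimal, stdlib-only illustration; the group labels, counts, and the 20% tolerance threshold are made-up assumptions, not a standard:

```python
# Flag groups that are under-represented in a training dataset relative
# to the population the system will serve (illustrative tolerance).

def underrepresented_groups(dataset_counts, population_share, tolerance=0.2):
    """Return groups whose dataset share falls more than `tolerance`
    (as a relative fraction) below their population share."""
    total = sum(dataset_counts.values())
    flagged = []
    for group, share in population_share.items():
        dataset_share = dataset_counts.get(group, 0) / total
        if dataset_share < share * (1 - tolerance):
            flagged.append(group)
    return flagged

# Hypothetical face dataset in which light-skinned faces dominate.
counts = {"light": 8000, "dark": 1500, "medium": 500}
population = {"light": 0.5, "dark": 0.3, "medium": 0.2}
print(underrepresented_groups(counts, population))  # ['dark', 'medium']
```

In the facial-recognition example from the text, such a check would flag the under-represented skin-tone groups before training begins.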

Q3. Explain the approaches and strategies to address fairness and bias in AI development and deployment.

Ans: Artificial Intelligence (AI) systems are increasingly used in decision-making across domains such as healthcare, finance, law enforcement, and recruitment. However, these systems can reflect or amplify existing societal biases if they are not carefully designed, trained, and monitored. Addressing fairness and bias in AI is critical to ensure that AI systems make equitable, transparent, and accountable decisions. Various approaches and strategies can be applied throughout the AI development lifecycle to promote fairness and reduce bias.

  1. Data Collection and Preparation: The foundation of any AI system is the data used to train it, so fairness begins with representative, balanced, and unbiased datasets. AI systems trained on limited or biased data may produce unfair outcomes that disproportionately affect certain groups. To address this, developers must carefully analyze and mitigate bias within datasets, ensuring diversity among data samples and incorporating demographic parity measures. Under-representation or over-representation of certain groups increases the risk of bias. Preprocessing techniques such as re-sampling, data cleaning, and correcting imbalances in the dataset help mitigate inherent biases.

  2. Algorithmic Design and Development: Even with fair data, algorithm design and development must also promote fairness. Fairness-aware models let developers implement fairness constraints, and AI models can be designed to detect and mitigate bias in their predictions. Bias can be detected using evaluation metrics. Developers should ensure that the AI model behaves ethically and logically to give accurate results and unbiased outcomes, and should test predictions before deployment to understand the true nature of the system's decisions.

  3. Evaluation and Validation: Once a model is trained, it is important to evaluate and validate its fairness alongside its performance. Developers should use established fairness metrics to measure potential disparities in outcomes across different demographic groups. Rigorous testing in diverse and representative scenarios ensures that the AI system behaves ethically and consistently in real-world conditions. Validation processes should check for unintended biases that may not be apparent during development but can emerge in practical applications.
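
As an example of the fairness metrics mentioned above, a minimal demographic-parity check can be written in a few lines: it compares the positive-prediction rate across demographic groups. The predictions and group labels below are fabricated, and the 0.8 cut-off follows the common "four-fifths rule" convention:

```python
# Demographic parity check: compare positive-prediction rates per group.

def positive_rates(predictions, groups):
    """predictions: list of 0/1 model outputs; groups: matching group labels."""
    tallies = {}
    for pred, grp in zip(predictions, groups):
        n, pos = tallies.get(grp, (0, 0))
        tallies[grp] = (n + 1, pos + pred)
    return {g: pos / n for g, (n, pos) in tallies.items()}

def parity_ratio(predictions, groups):
    """Ratio of the lowest to the highest group rate (1.0 = perfect parity)."""
    rates = positive_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(positive_rates(preds, groups))          # {'A': 0.75, 'B': 0.25}
print(parity_ratio(preds, groups) < 0.8)      # True -> fails four-fifths rule
```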

  4. Transparency and Explainability: Transparency is essential to identify, understand, and correct biases in AI. Developing interpretable and explainable AI models allows stakeholders to examine how decisions are made and why certain outcomes occur. Explainable AI (XAI) provides insight into the internal decision-making processes of AI systems, enabling developers and users to detect discriminatory patterns. By understanding the rationale behind AI predictions, organizations can implement corrective measures and foster trust among users and affected communities.

  5. Monitoring and Accountability: AI systems require continuous monitoring and auditing mechanisms to track their post-deployment behaviour. Monitoring and feedback mechanisms enable users to identify flaws in the system. After deployment, an AI system must remain accountable for its performance: organizations should perform external audits and review model outcomes to maintain stability and improve the quality of the model. Monitoring keeps AI systems unbiased and reliable by identifying the ethical and legal implications of incorrect outcomes caused by AI decisions.

Q4 Discuss security and privacy challenges in AI systems and possible solutions.

Ans: Artificial Intelligence (AI) systems rely heavily on extensive data collection and complex algorithms, creating new and significant challenges to security and privacy. As AI becomes embedded in everyday life and business, safeguarding these systems against cyber threats while protecting users' personal information is critical to ensuring AI's responsible and trustworthy deployment.

Security Challenges in AI Systems:

  1. Data Poisoning and Adversarial Attacks: Malicious actors may corrupt training data or craft adversarial examples to trick AI models into incorrect or harmful decisions. This compromises AI integrity, especially in critical applications like autonomous vehicles or fraud detection.
  2. Al-Powered Cyberattacks: Attackers use AI to craft sophisticated, adaptive malware and phishing schemes, posing evolving cybersecurity challenges.
  3. Model Theft and Intellectual Property Risks: Theft of proprietary AI models can lead to competitive disadvantages and misuse of technology.
  4. Insecure Data Handling: Weaknesses in data transmission, storage, or access control increase the risk of unauthorized data access or breaches.

Privacy Challenges in AI Systems:

  1. Massive Data Collection: AI requires vast datasets, often including sensitive personal information, raising the risk of privacy violations.
  2. Opaque Decision-Making: AI's "black box" nature makes it difficult to explain how personal data is used, leading to a lack of user control and transparency.
  3. Re-identification Risks: Anonymized data can be reverse-engineered to reveal identities, threatening privacy.
  4. Regulatory Complexity: Rapid AI advances outpace regulatory frameworks, complicating compliance with evolving privacy laws.
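
The re-identification risk above is commonly assessed with k-anonymity: every combination of quasi-identifiers (age band, partial ZIP code, etc.) should be shared by at least k records, otherwise a unique record can be linked back to a person. A minimal sketch with fabricated records; the field names and values are illustrative:

```python
# k-anonymity check: a dataset is k-anonymous if every quasi-identifier
# combination appears at least k times; unique combinations are re-identifiable.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest equivalence-class size over the quasi-identifiers."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())

records = [
    {"age_band": "30-39", "zip3": "941", "diagnosis": "flu"},
    {"age_band": "30-39", "zip3": "941", "diagnosis": "cold"},
    {"age_band": "40-49", "zip3": "100", "diagnosis": "flu"},  # unique combo
]
print(k_anonymity(records, ["age_band", "zip3"]))  # 1 -> re-identification risk
```

Generalizing or suppressing quasi-identifier values until this minimum rises above a chosen k is a standard anonymization step before data release.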

Possible Solutions for Security Challenges:

  1. Robust Security Protocols: Employ encryption, multi-factor authentication, and secure data access controls throughout AI architecture.
  2. Adversarial Training and Testing: Train AI models to recognize and resist adversarial inputs, and conduct rigorous vulnerability testing before and after deployment.
  3. AI-specific Security Mechanisms: AI-powered monitoring can proactively detect security breaches and anomalies. Moreover, AI-specific security infrastructure strengthens the overall system.
  4. Privacy-Aware Design: Use techniques such as anonymization and differential privacy, which ensure personal data remains protected throughout the AI lifecycle.
  5. Ethical AI Frameworks: Adopt AI frameworks that emphasize fairness, transparency, privacy, and accountability throughout the AI lifecycle.
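
As an illustration of the differential-privacy technique named above, the sketch below adds Laplace noise with scale = sensitivity / ε to a simple count query. It is a stdlib-only toy, not a production mechanism; the dataset, ε value, and seed are arbitrary assumptions:

```python
# Differentially private count: add Laplace(sensitivity / epsilon) noise
# so that the presence or absence of any one record is hard to infer.
import math
import random

def laplace_sample(scale, rng):
    """Draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon, seed=None):
    """Noisy count of items matching `predicate` (a count has sensitivity 1)."""
    rng = random.Random(seed)
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon, rng)

ages = [23, 37, 41, 29, 52, 61, 35]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5, seed=42)
# Smaller epsilon -> larger noise scale -> stronger privacy, lower accuracy.
```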

Q5. Write a detailed note on ensuring reliability in AI systems in real-world applications.

Ans: Ensuring reliability in AI systems is critical for their successful deployment in real-world applications, where inaccurate or unpredictable models can lead to serious consequences in fields such as healthcare, finance, transportation, defense, and customer service. A reliable AI system consistently delivers accurate, safe, and trustworthy results under varying conditions, ensuring trust, acceptance, and long-term use.

Principles of Reliable AI System Design:

  1. Rigorous Testing and Evaluation: AI models must undergo comprehensive simulation, pre-release, and stress testing to detect vulnerabilities and performance issues. Using large and diverse test scenarios ensures the system can handle real-world complexities and edge cases. User Acceptance Testing (UAT) is equally important to confirm alignment with practical needs.
  2. Continuous Monitoring and Observability: Once deployed, AI systems must be continuously monitored using real-time tracking tools, logs, anomaly detection, and distributed tracing. Automated monitoring pipelines help quickly identify performance issues, model drift, and unexpected behaviour. This ensures the system remains aligned with real-world demands and maintains its reliability over time.
  3. Human-in-the-Loop Validation: Reliable AI systems incorporate human oversight and feedback, especially in high-stakes applications like healthcare diagnostics, autonomous driving, or fraud detection. Human intervention ensures the system's outputs are reasonable, ethical, and contextually relevant, minimizing risks and unintended outcomes.
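
The model-drift monitoring described in point 2 can be sketched as a simple statistical alert: compare a live window of a monitored signal (e.g., the model's output score) against a reference window and flag large mean shifts. The data and the z-score threshold of 3 are illustrative assumptions, not a production monitoring design:

```python
# Minimal drift monitor: flag when the live mean of a monitored signal
# drifts from the reference distribution by more than z_threshold
# standard errors.
import math

def drift_detected(reference, live, z_threshold=3.0):
    """Two-window mean-shift test using the reference variance."""
    n = len(reference)
    mean_ref = sum(reference) / n
    var_ref = sum((x - mean_ref) ** 2 for x in reference) / (n - 1)
    mean_live = sum(live) / len(live)
    stderr = math.sqrt(var_ref / len(live))
    z = abs(mean_live - mean_ref) / stderr
    return z > z_threshold

reference = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
stable    = [0.49, 0.51, 0.50, 0.52]
shifted   = [0.80, 0.78, 0.82, 0.79]
print(drift_detected(reference, stable))   # False
print(drift_detected(reference, shifted))  # True -> retrain or investigate
```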

Real-World Examples of Reliable AI:

  1. Healthcare: Al diagnostic tools undergo continuous validation, strict data quality checks, and regulatory compliance to ensure safe and accurate patient care.
  2. Finance: Fraud detection models are monitored in real-time and retrained regularly to adapt to evolving fraudulent patterns.
  3. Transportation: Autonomous vehicles incorporate redundant sensors, human override mechanisms, and real-time monitoring to ensure safety under uncertain road conditions.
  4. Cloud Environments: AI-powered predictive maintenance systems detect hardware faults early and enable self-recovery, ensuring uninterrupted service availability.

Q6. Role of Robustness in Designing Reliable AI Systems.

Ans: Robustness refers to AI systems being reliable, resilient, and secure across various operating conditions. A robust AI system can withstand adversarial attacks, noisy or incomplete data, and unexpected scenarios without failure or degradation of performance.

Robustness ensures safety and trustworthiness, particularly in critical applications like autonomous vehicles, healthcare, and finance.

This includes rigorous testing, validation, and continuous monitoring to identify and mitigate vulnerabilities.

Robust AI can gracefully handle errors and adapt to changing environments, improving user confidence and reducing risk.

3.3 Transparency in AI Systems

Transparency in AI systems refers to the ability to understand and explain how an AI model makes decisions. As AI becomes more integrated into critical fields like healthcare, finance, and law, it is crucial to move beyond “black box” models and build systems that are interpretable, fair, and accountable.

The cycle of AI transparency: data input → decision-making process → explanation → logical understanding. This cycle builds trust, ensures accountability, and improves reliability.

The Importance of Transparency

Trust and Adoption: For users to trust AI-powered tools, they need to feel confident that the system's decisions are logical and fair. A transparent system allows users to understand and verify outcomes.

Accountability and Ethics: When an AI system makes a harmful or biased decision, transparency provides a path to identify the root cause. It makes developers and organizations accountable for their models' actions.

Debugging and Reliability: Understanding how the AI system reaches its output is essential for debugging. By seeing which features most influence a prediction, developers can more easily identify errors and abnormal behaviour and improve the model's reliability.

Key Concepts:

  • Interpretability: This refers to the ability to describe the behaviour of a model in understandable terms. It answers the question “How does the model work?”
  • Explainability: This refers to the ability to understand the cause-and-effect behind model outputs.
  • Auditability: This is the ability to track and verify the decisions made by the model and how they were justified.

A Framework for AI Transparency

A transparent AI system typically involves an explanation module that works alongside the main prediction model. This module generates explanations for specific decisions, making the system's logic visible to a human user.

AI System: The core model that performs a task (e.g., classifying an image, predicting a loan risk).

Explanation Module: A component that analyses the AI system's decision-making process (this can be an algorithm like LIME or SHAP).

Explanations: The output from the module, which can take various forms such as feature importance scores, decision rules, or counterfactual examples.

Human User: The end user who needs to understand the reasoning.

Methods for Achieving Transparency

LIME (Local Interpretable Model-agnostic Explanations): LIME works by creating simple, interpretable approximations of a "black box" model for specific predictions. It identifies the most influential features used by the model.

SHAP (SHapley Additive exPlanations): SHAP assigns each feature a contribution value based on cooperative game theory (Shapley values), averaging a feature's marginal contribution across many feature combinations to produce consistent feature importance scores. For simpler models like decision trees or linear regression, feature importance scores can be derived directly to show which input variables have the greatest impact on the model's output.
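
The intuition behind perturbation-based explainers such as LIME can be shown without any library: nudge one input feature at a time and measure how much the black-box model's output changes. This toy sensitivity check is not the real LIME algorithm (which fits a weighted local surrogate model); the model weights and feature names below are invented for illustration:

```python
# Toy sensitivity analysis: approximate local feature importance for a
# black-box model by perturbing one feature at a time.

def black_box(features):
    """Stand-in 'opaque' model: a linear score the explainer cannot see."""
    w = {"income": 0.8, "age": 0.1, "postcode": 0.0}
    return sum(w[k] * v for k, v in features.items())

def perturbation_importance(model, instance, delta=1.0):
    """Output change when each feature is nudged by `delta`, one at a time."""
    base = model(instance)
    scores = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] += delta
        scores[name] = abs(model(perturbed) - base)
    return scores

instance = {"income": 4.0, "age": 35.0, "postcode": 7.0}
print(perturbation_importance(black_box, instance))
# income dominates; postcode contributes nothing to this prediction
```

A user seeing this explanation learns that the decision hinges almost entirely on income, which is exactly the kind of insight the explanation module in the framework above is meant to surface.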

Transparency is not a single tool but a practice of building accountable and understandable AI. By integrating these methods into the development pipeline, we can create AI systems that are both powerful and trustworthy.

Challenges to transparency include proprietary algorithm secrecy, model complexity, and trade secrets. Despite this, transparency remains crucial to avoid “black box” AI systems whose operations are opaque, leading to mistrust and ethical concerns.

Example: Transparency enabled external audits that exposed bias in an AI criminal risk assessment tool, resulting in corrections to improve fairness.

4. Accountability (Responsibility) (Topic 8)

Accountability in AI asserts that developers, deployers, and users must accept responsibility for AI system outcomes.

Governance structures: Clear roles and governance responsibilities.

© 2025-2026 Notes.Tamim’s.Space