[1] CHEN J, KALLUS N, MAO X, et al. Fairness under unawareness: assessing disparity when protected class is unobserved[C]//Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT). Atlanta, GA, USA: Association for Computing Machinery, 2019: 339-348.
[2] KARKHAH S, JAVADI-PASHAKI N, FARHADI FAROUJI A, et al. Artificial intelligence: challenges & opportunities for the nursing profession[EB/OL]. (2022-07-21)[2022-12-19].
[3] MHASAWADE V, ZHAO Y, CHUNARA R. Machine learning and algorithmic fairness in public and population health[J]. Nature Machine Intelligence, 2021, 3: 659-666. DOI: 10.1038/s42256-021-00373-4.
[4] ZHAN H. Research on algorithmic discrimination in data mining in the era of big data[D]. Changsha: Hunan Normal University, 2020.
[5]
[6] OBERMEYER Z, POWERS B, VOGELI C, et al. Dissecting racial bias in an algorithm used to manage the health of populations[J]. Science, 2019, 366(6464): 447-453. DOI: 10.1126/science.aax2342.
[7] KEUROGHLIAN A S. Electronic health records as an equity tool for LGBTQIA+ people[J]. Nature Medicine, 2021, 27(12): 2071-2073. DOI: 10.1038/s41591-021-01592-3.
[8] KARNIK N S, AFSHAR M, CHURPEK M M, et al. Structural disparities in data science: a prolegomenon for the future of machine learning[J]. American Journal of Bioethics, 2020, 20(11): 35-37. DOI: 10.1080/15265161.2020.1820102.
[9] CHEN I Y, SZOLOVITS P, GHASSEMI M. Can AI help reduce disparities in general medical and mental health care?[J]. AMA J Ethics, 2019, 21(2): 167-179. DOI: 10.1001/amajethics.2019.167.
[10] PARIKH R B, TEEPLE S, NAVATHE A S. Addressing bias in artificial intelligence in health care[J]. JAMA, 2019, 322(24): 2377-2378. DOI: 10.1001/jama.2019.18058.
[11] BORGESE M, JOYCE C, ANDERSON E E, et al. Bias assessment and correction in machine learning algorithms: a use-case in a natural language processing algorithm to identify hospitalized patients with unhealthy alcohol use[J]. AMIA Annu Symp Proc, 2022, 2021: 247-254.
[12]
[13] LEPRI B, OLIVER N, LETOUZÉ E, et al. Fair, transparent, and accountable algorithmic decision-making processes[J]. Philosophy & Technology, 2018, 31: 611-627. DOI: 10.1007/s13347-017-0279-x.
[14] SALEIRO P, KUESTER B, STEVENS A, et al. Aequitas: a bias and fairness audit toolkit[EB/OL]. (2019-04-29)[2022-12-19].
[15] KLEINBERG J M, MULLAINATHAN S, RAGHAVAN M. Inherent trade-offs in the fair determination of risk scores[C]//Proceedings of the 8th Conference on Innovations in Theoretical Computer Science (ITCS 2017). Dagstuhl: Leibniz International Proceedings in Informatics (LIPIcs), 2017: 1-23.
[16] ANDERSON A H, YANG W, HSU C Y, et al. Estimating GFR among participants in the Chronic Renal Insufficiency Cohort (CRIC) Study[J]. Am J Kidney Dis, 2012, 60(2): 250-261. DOI: 10.1053/j.ajkd.2012.04.012.
[17] LEVEY A S, TIGHIOUART H, TITAN S M, et al. Estimation of glomerular filtration rate with vs without including patient race[J]. JAMA Internal Medicine, 2020, 180(5): 793-795. DOI: 10.1001/jamainternmed.2020.0045.
[18] MEHRABI N, MORSTATTER F, SAXENA N, et al. A survey on bias and fairness in machine learning[J]. ACM Computing Surveys, 2021, 54(6): 1-35. DOI: 10.1145/3457607.
[19] GIANFRANCESCO M A, TAMANG S, YAZDANY J, et al. Potential biases in machine learning algorithms using electronic health record data[J]. JAMA Internal Medicine, 2018, 178(11): 1544-1547. DOI: 10.1001/jamainternmed.2018.3763.
[20] CARUANA R, LOU Y, GEHRKE J, et al. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission[C]//Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). Sydney: ACM, 2015: 1721-1730.
[21] PARASURAMAN R, MANZEY D H. Complacency and bias in human use of automation: an attentional integration[J]. Hum Factors, 2010, 52(3): 381-410. DOI: 10.1177/0018720810376055.
[22] CHU C H, NYRUP R, LESLIE K, et al. Digital ageism: challenges and opportunities in artificial intelligence for older adults[J]. Gerontologist, 2022, 62(7): 947-955. DOI: 10.1093/geront/gnab167.
[23] MOSS K O, HAPP M B, BRODY A. Nurses' role in reducing inequities for the seriously ill[J]. J Gerontol Nurs, 2022, 48(8): 3-5. DOI: 10.3928/00989134-20220629-01.
[24] KAIROUZ P, LIAO J, HUANG C, et al. Generating fair universal representations using adversarial models[J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 1970-1985. DOI: 10.48550/arXiv.1910.00411.
[25] KOH P W, LIANG P. Understanding black-box predictions via influence functions[C]//Proceedings of the 34th International Conference on Machine Learning (ICML). Sydney: PMLR, 2017: 1885-1894.
[26]
[27] GAO Y, CUI Y. Deep transfer learning for reducing health care disparities arising from biomedical data inequality[J]. Nat Commun, 2020, 11(1): 5131. DOI: 10.1038/s41467-020-18918-3.
[28] ZHANG Y, BELLAMY R, VARSHNEY K R. Joint optimization of AI fairness and utility: a human-centered approach[C]//Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES). New York: Association for Computing Machinery, 2020: 400-406.
[29] ZHOU Y, LI Z, LI Y. Interdisciplinary collaboration between nursing and engineering in health care: a scoping review[J]. Int J Nurs Stud, 2021, 117: 103900. DOI: 10.1016/j.ijnurstu.2021.103900.
[30] ZHANG L, WU Y, WU X. A causal framework for discovering and removing direct and indirect discrimination[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne: AAAI Press, 2017: 3929-3935.
[31] PRYZANT R, YANG Z, XU Y, et al. Automatic rule induction for efficient semi-supervised learning[EB/OL]. (2022-10-14)[2022-12-19].
[32] RICHARDSON S, LAWRENCE K, SCHOENTHALER A M, et al. A framework for digital health equity[J]. NPJ Digital Medicine, 2022, 5: 1-6. DOI: 10.1038/s41746-022-00663-0.
[33] Nature Machine Intelligence. Striving for health equity with machine learning[J]. Nature Machine Intelligence, 2021, 3: 653. DOI: 10.1038/s42256-021-00385-0.