Today’s political environment is increasingly hostile to data management as a profession. If you’ve made your career in machine learning, data mining, predictive modeling, or related fields, these controversies may have you second-guessing your decision to pursue this line of work. The principal flashpoints center on issues of data privacy, algorithmic bias, and AI weaponization.
Prioritizing privacy over data-driven marketing
Data management professionals face growing scrutiny over privacy violations, surveillance, and other intrusive impacts of the applications they’re responsible for building and managing.
One of the most disturbing new themes is the ideological framing of how technology enables “surveillance capitalism.” This term, coined by Harvard Business School professor Shoshana Zuboff, essentially stigmatizes the collection, ownership, processing, and use of consumers’ PII (personally identifiable information). This view regards any business use of customer PII, such as microsegmentation, contextual offer targeting, and cross-channel ad optimization, as a form of monitoring and control.
Concerns over data-driven CRM aren’t limited to academics. It’s clear from recent congressional hearings that this perspective aligns with popular opinion about the practices of Google, Facebook, Amazon, and other 21st-century digital businesses. The public is increasingly uneasy about privacy encroachment, as big brands compete to see who can acquire the most comprehensive range of intrusive data about every aspect of our lives, including our inner thoughts, sentiments, and predilections.
Consequently, many data professionals are facing a crossroads in their careers. On the one hand, their employers have built lucrative businesses fueled by predictive targeting, one-to-one personalization, and multichannel engagement. On the other hand, data professionals are feeling more squeamish about the depth to which they use customer PII to fuel AI-driven CRM programs, such as predictive persona profiling, next-best-action targeting, or real-time behavioral pricing.
None of this is particularly unusual, sinister, or shameful. These practices are central to how business is done these days. If you have ideological misgivings about these or other data-driven CRM methodologies, you’ll probably never work in enterprise data management or modern marketing. If you refuse to work on a program simply because it implements these modern customer-engagement methodologies, you won’t get much sympathy from your employers and, in fact, they’re likely to show you the door.
But that doesn’t mean you should stand idly by while your employer runs amok with customer data. You can become your organization’s foremost data-privacy advocate, for example. If nothing else, you can make sure your firm complies rigorously with the European Union’s General Data Protection Regulation and similar privacy laws elsewhere.
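One concrete form that advocacy can take is ensuring raw PII never reaches analytics stores in the first place. The sketch below illustrates keyed pseudonymization of a customer identifier, a technique GDPR explicitly recognizes; the function name and key-handling convention here are illustrative assumptions, not a prescribed mechanism.

```python
import hashlib
import hmac

# Hypothetical helper: pseudonymize a customer identifier before it
# enters an analytics pipeline, so models train on stable tokens
# rather than raw PII. Keyed hashing (HMAC) resists trivial
# rainbow-table reversal; under GDPR, the key must be kept
# separately from the pseudonymized analytics data.
def pseudonymize(identifier: str, secret_key: bytes) -> str:
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"keep-this-key-out-of-the-analytics-store"
token = pseudonymize("jane.doe@example.com", key)

# The same input and key always yield the same token, so joins and
# segmentation still work downstream without exposing the email.
assert token == pseudonymize("jane.doe@example.com", key)
assert token != pseudonymize("jane.doe@example.com", b"other-key")
```

Because the mapping is deterministic per key, analysts can still link records across channels, while re-identification requires access to a key they are never given.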
Ridding our lives of data-driven algorithmic biases
Data has been on the front lines in recent culture wars due to accusations of racial, gender, and other forms of socioeconomic bias perpetrated in whole or in part through algorithms.
Algorithmic biases have become a hot-button issue in global society, a trend that has spurred many jurisdictions and organizations to institute a greater degree of algorithmic accountability in AI practices. Data scientists who’ve long been trained to eliminate biases from their work now find their practices under growing scrutiny from government, legal, regulatory, and other circles.
Eliminating bias in the data and algorithms that drive AI requires constant vigilance on the part of not only data scientists but up and down the corporate ranks. As Black Lives Matter and similar protests have pointed out, data-driven algorithms can embed serious biases that harm demographic groups (racial, gender, age, religious, ethnic, or national origin) in various real-world contexts.
Much of the recent controversy surrounding algorithmic biases has focused on AI-driven facial recognition software. Biases in facial recognition applications are especially worrisome when the software is used to direct predictive policing programs, or when it is open to abuse by law enforcement in urban areas with many disadvantaged minority groups.
Many AI solution vendors have seen an extensive grassroots effort among their own employees to take a strong stand against police abuses of facial recognition. In June, as the Black Lives Matter protests heated up, employees at Amazon Web Services called on the firm to sever its police contracts. More than 250 Microsoft employees published an open letter demanding that the company end its work with police departments.
This is an AI application domain in which practitioners will have to take their lumps as biases continue to surface, and associated legal and regulatory penalties follow closely behind. So far there is little consensus on viable frameworks for regulating uses, deployments, and management of facial recognition programs.
Nevertheless, AI practitioners know that the opportunities in facial recognition are too numerous and lucrative to forgo indefinitely. Embedding facial recognition into iPhones and other devices will ensure that this technology is a key tool in everybody’s personal tech portfolio. More businesses are incorporating facial recognition into internal and customer-facing applications for biometric authentication, image/video autotagging, query by image, and other valuable uses. Social distancing has made many people more receptive to facial recognition as a contactless option for strong authentication to many device-level and online services.
Bias isn’t an issue that can be fixed once and for all. Where decision-making algorithms are concerned, organizations must always make biases as transparent as possible and attempt to eliminate any that perpetuate unfair societal outcomes. In addition, ongoing auditing of AI biases—not just in facial recognition but in all other socially impactful application domains—must become a standard task in AI devops workflows.
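An ongoing bias audit of the kind described above can start very simply, by comparing a model’s decision rates across demographic groups on every release. The sketch below is a minimal example of such a check; the function names and the 0.8 threshold (the common “four-fifths” rule of thumb from U.S. employment-selection guidelines) are illustrative assumptions, not a standard API.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A value near 1.0 means the model treats groups similarly;
    values below ~0.8 are a conventional red flag for review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: one batch of binary model decisions per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% positive rate
}

ratio = disparate_impact(decisions)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Audit flag: disparate impact ratio {ratio:.2f} is below 0.8")
```

Wiring a check like this into a CI/CD pipeline, so that every retrained model is scored before deployment, is one way the “standard task in AI devops workflows” could look in practice.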
All of this raises the possibility that more data scientists will have to decide whether to participate in such projects and to what extent. Many may take the limited, near-term option of opting out of contributing to law enforcement applications of facial recognition. As a longer-term sustainable approach, data scientists who aren’t ideologically opposed to facial recognition will need to redouble efforts to eliminate biases that are baked into these models and the facial-image data sets that train them.
Opting out of AI-driven weapons development
Data is central to modern warfare. Data-driven algorithms, especially deep learning, give weapons systems the capability of seeing, hearing, sensing, and adjusting real-time strategies far better and faster than most humans.
AI is the future of warfare and of defenses against algorithmic weapon systems. Future battles will almost certainly have casualty counts that are staggering and lopsided, especially when one side’s arsenal is almost entirely composed of autonomous weapons systems equipped with phalanxes of 3D cameras, millimeter-wave radar, biochemical detectors, and other ambient sensors.
If you’re a career-minded data scientist, you’ll be tempted to lend your talents to military projects, which tend to be the most exciting, cutting-edge, R&D-driven initiatives in AI. The shortage of highly qualified AI professionals practically ensures that if you have what it takes, you’ll fetch a high salary with numerous perks. Also, the amount of VC money flowing into startups in this sector ensures that many data scientists who’ve cut their teeth on AI-centric weapon programs will become quite wealthy and powerful.
AI professionals are highly ambivalent about participating in military projects. In one recent survey, U.S. AI specialists reported more favorable than unfavorable attitudes about working with the Department of Defense, though a large plurality were neutral on the topic. Respondents reported being more favorably inclined to accepting DoD grants for basic research than applied research. In the survey, AI professionals’ most cited reason for taking DoD grants was to work on “interesting problems.” “Discomfort with how DoD will use the work” was the most frequently cited downside. Approximately three-quarters of those surveyed had negative attitudes about battlefield applications of AI.
AI professionals may think that hitching their career to a commercial cloud provider such as Amazon Web Services, Microsoft, Google, or IBM will help them avoid pressure to work on military projects. That’s just not so. Many tech firms have been applying their innovations to DoD projects for several years. In fact, as recent research has documented, Big Tech continues to cultivate close ties with the U.S. military.
If you think you can limit your contributions to defensive and back-office AI applications in the military, per the “ground rules” that former Google CEO Eric Schmidt proposed a few years ago, you’re in for a rude awakening. No such rules for commercial AI vendors’ military engagement can realistically stop the underlying approaches from being used in weapons systems for offensive purposes. That outcome is all the more likely because many military projects’ underlying AI technologies, including open-source modeling software and unclassified image data, are freely available.
In the unlikely scenario that all AI companies walk away from projects with the United States’ and other nations’ military establishments, that would still leave an opportunity for universities and nonprofit research centers to pick up the work. Considering how much money the military is likely to funnel into such contracts, this could easily reverse the brain drain that’s causing the best and brightest AI researchers to leave academia and seek their fortunes in the private sector.
None of this means you can’t opt out of participating in the cyber-industrial complex that’s sprung up around militarized AI. If you’re morally opposed to this sort of work, you can spend your entire data career without ever having to compromise your principles. AI specialists can find plenty of humanitarian or other unobjectionable uses for their talents.
Alternately, you might consider working on technological countermeasures designed to neutralize an adversary’s AI-powered weaponry. One of the most promising new professional opportunities is in building AI-driven counterdrone defenses. In recent years, militaries all over the world have deployed drones successfully as a fast-strike, low-cost, AI-driven alternative to conventional weapons platforms such as armored vehicles and fighter jets. For example, Azerbaijan used drones successfully in a recent war against Armenia, deploying the miniature unmanned aerial vehicles to destroy tanks and other armored fighting vehicles.
Counterdrone defenses are a hot focus of R&D and startup activity. Many such projects use AI to automate detecting drones, pinpointing locations, and predicting likely flight paths of drones. Many use AI to automate classification of approaching drones by model, operator, and threat profile, taking care to minimize false positives and false negatives. They also rely on machine learning to automatically trigger security alerts and activate the mechanisms to physically destroy, disable, distract, or otherwise neutralize weaponized drones.
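To make the flight-path prediction step above concrete, here is a toy sketch that extrapolates a drone’s near-term position from its recent track. Real counterdrone systems fuse radar, RF, and camera data and use learned motion models; this constant-velocity estimate and its function name are illustrative assumptions only.

```python
# Toy constant-velocity predictor: estimate where a tracked drone
# will be dt seconds from its last observation, given at least two
# timestamped (t, x, y) position fixes in meters.
def predict_position(track, dt):
    """track: list of (t, x, y) observations, time-ordered; dt: seconds ahead."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)  # estimated eastward velocity, m/s
    vy = (y1 - y0) / (t1 - t0)  # estimated northward velocity, m/s
    return (x1 + vx * dt, y1 + vy * dt)

# A drone moving 10 m/s east and 5 m/s north over one second.
track = [(0.0, 0.0, 0.0), (1.0, 10.0, 5.0)]
print(predict_position(track, 2.0))  # -> (30.0, 15.0)
```

In a full pipeline, a prediction like this would feed the alerting and interception stages the paragraph describes, with a Kalman filter or learned trajectory model replacing the naive two-point extrapolation.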
Finding a middle road
In a politically polarized cultural landscape, data professionals may find it difficult to keep their leanings under wraps. They may also have pangs of conscience that deter them from engaging in projects whose objectives contravene their deep convictions.
Before you opt out of some otherwise objectionable datacentric project, consider whether you can contribute to implementing effective controls such as privacy protection mechanisms, debiasing processes, and automated countermeasures that mitigate the more objectionable aspects. That could allow you, on some level, to reconcile your political convictions with your professional ambitions.
In the process, you would be doing your part to make the world a better place.
Copyright © 2020 IDG Communications, Inc.