The AI Strategy course at UC Berkeley has come to an abrupt end, but my formal learning in AI will continue. I want to thank UC Berkeley professors Alberto Todeschini, Ph.D., and Stuart Russell, who developed a comprehensive course on implementing AI in business. After three certificates, Alberto has encouraged me to pursue a master’s in Artificial Intelligence; my wife and I are contemplating this next step.
Part two of our AI Strategy Canvas examines external factors and fairness, particularly the governing principles and human requirements that can impede or accelerate an AI strategy. Companies that want to stand a chance against their competition must adopt some form of AI, if only to survive; not adopting AI will eventually make your company obsolete.
Governance is one of the essential AI requirements for any business that plans on prospering in a world of AI. US corporations that consumers deem trustworthy with their data have far exceeded the human and cost efficiencies expected from AI, compared with corporations that have not been transparent about how their AI is utilized or managed.
Governance should be based on a set of core principles or ideals. Aligned with the company’s mission, this set of principles is the foundational cornerstone of any AI implementation. The rapid acceleration of AI in every industry, from email to healthcare, must be checked by codifying codes of conduct and beliefs before introducing a single AI concept or implementation. When we introduce a training dataset to our partner solutions, a data model to our ecosystem, or an AI digital tool to our site visitors, a governing body should decide whether the project progresses, is throttled, or is dismantled based on that set of core principles.
Below is an example of the more detailed core principles that could be adopted before AI is deployed in your company; they should mirror your types of use cases and their societal and academic implications, along with private feedback used to fine-tune your policies. We now have a massive set of unique datasets available to us from cloud providers, private companies, academic organizations, and many governmental institutions. These datasets, if or when introduced into your organization, should be precisely aligned with your principles to form a moral and virtuous baseline. Common examples of core principles include fairness, transparency, accountability, privacy, and safety.
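As a minimal sketch of what such a governance gate could look like in practice, the snippet below expresses core principles as a checklist that every proposed AI project must attest to before it progresses. The principle names and review thresholds here are hypothetical illustrations, not a specific framework; adapt them to your own company’s mission.

```python
# A minimal sketch of a governance gate. The principles and review
# logic are hypothetical examples, not a specific standard.

CORE_PRINCIPLES = [
    "fairness",        # outcomes do not disadvantage protected groups
    "transparency",    # data sources and model behavior are explainable
    "accountability",  # a named owner answers for the model's decisions
    "privacy",         # data handling follows consent and regulation
    "safety",          # failure modes are identified and bounded
]

def review_project(name: str, attestations: dict) -> str:
    """Return a verdict for a proposed AI project: progress, throttle, or dismantle."""
    failed = [p for p in CORE_PRINCIPLES if not attestations.get(p, False)]
    if not failed:
        return f"{name}: progress"
    # Partial compliance: slow the project down pending remediation.
    if len(failed) < len(CORE_PRINCIPLES) / 2:
        return f"{name}: throttle (remediate: {', '.join(failed)})"
    return f"{name}: dismantle (failed: {', '.join(failed)})"

print(review_project("site-visitor-chatbot",
                     {"fairness": True, "transparency": True,
                      "accountability": True, "privacy": False,
                      "safety": True}))
# -> site-visitor-chatbot: throttle (remediate: privacy)
```

The point of the sketch is that the verdict is mechanical once the principles are written down: the hard governance work is agreeing on the checklist, not running it.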
An entity will also need the right human elements to begin implementing AI.
Remember that 80% of the time invested in implementing a strategy and running a successful model will go toward cleaning and organizing your data. Nothing good can happen with data that is not cleaned, is inappropriately labeled, or has missing data points in its feature set. Depending on the amount of training data, two data scientists should suffice for a medium-sized business. The company will also need access to the right tools in the cloud. For example, if your organization uses Google Cloud, you will need access to modeling tools such as BigQuery, XGBoost, DNNs, and AutoML, among many others, to take an idea from conception to production. As I noted in AI Strategy Part 1, start small and double down after every successful implementation. Your team will have to define what success looks like in an AI implementation.
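To make that 80% concrete, here is a minimal data-cleaning sketch using pandas. The file and column names are hypothetical, and the imputation choices shown are only one reasonable default among many.

```python
import pandas as pd

# Hypothetical customer dataset; file and column names are illustrative only.
df = pd.read_csv("customers.csv")

# 1. Surface the problem: how many values are missing per feature?
print(df.isna().sum())

# 2. Drop rows missing the label -- unlabeled rows cannot train a model.
df = df.dropna(subset=["churned"])

# 3. Impute missing numeric features with the median, categoricals with a flag.
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())
df["region"] = df["region"].fillna("unknown")

# 4. Normalize inconsistent labels before training.
df["churned"] = df["churned"].astype(str).str.strip().str.lower().map(
    {"yes": 1, "true": 1, "1": 1, "no": 0, "false": 0, "0": 0}
)
```

Even a toy pipeline like this shows why the effort is front-loaded: every downstream modeling decision depends on choices made in these few lines.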
Human bias is connected to governance. Humans are at the center of both building AI models and consuming their outputs, so human bias enters at every step of an AI implementation: collecting, organizing, labeling, and training the datasets; filtering, ranking, and aggregating; and deciding whether to introduce third-party datasets. Once an output is formed, user-behavior bias informs future data collection, creating a vicious cycle of human bias throughout the implementation. Once a successful implementation is completed, scaling other AI strategies across your business will introduce additional human bias. These tendencies are dangerous to your company and, more importantly, to your partner solutions and customers. To help mitigate bias, a diverse set of people across ethnic backgrounds, geographies, and aptitudes toward AI is paramount for deploying fairness across your AI initiatives.
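One concrete way to surface this kind of bias is to compare model outcomes across groups. The sketch below computes a simple demographic-parity gap; the group and prediction fields are hypothetical, and this is only one of many fairness metrics, not a complete audit.

```python
import pandas as pd

# Hypothetical scored dataset: one row per person, with a model prediction
# (1 = favorable outcome) and a group attribute such as ethnicity or geography.
scored = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0,   0],
})

# Favorable-outcome rate per group.
rates = scored.groupby("group")["prediction"].mean()
print(rates)  # A: 0.67, B: 0.25

# Demographic-parity gap: difference between best- and worst-treated groups.
gap = rates.max() - rates.min()
print(f"parity gap: {gap:.2f}")  # a large gap flags the model for review
```

A check like this does not fix bias, but it turns a vague worry into a number your governance body can set a threshold against.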
Building AI models to be accountable is the set of baby steps needed for success. When you define why you want to implement AI, ask yourself: What problem will the model solve? Who is the intended user? When collecting and preparing data, ask: How was the training data collected, tagged, or labeled? Is it representative of the real world? How was the model trained, and what was the demographic of the people who trained it? Was the model tested, and on which test datasets? Is the model behaving as expected? Why did the model fail? Is the model trustworthy?
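These questions map naturally onto a model-card-style record kept alongside each deployment. The fields below are a hypothetical sketch of how the answers could be captured; they are not a specific standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical accountability record answering the questions above."""
    problem_solved: str
    intended_users: str
    data_collection: str        # how training data was collected and labeled
    representativeness: str     # does the data mirror the real world?
    training_notes: str         # how, and by whom, the model was trained
    test_datasets: list = field(default_factory=list)
    known_failures: list = field(default_factory=list)

card = ModelCard(
    problem_solved="Predict customer churn for retention outreach",
    intended_users="Marketing analysts",
    data_collection="CRM exports, labeled by support staff",
    representativeness="Skews toward long-tenure customers; under review",
    training_notes="Gradient-boosted trees, trained by the data team",
    test_datasets=["holdout-2020-Q3"],
    known_failures=["Underpredicts churn for accounts under 90 days old"],
)
```

Writing the record forces the team to answer the accountability questions before deployment, when the answers are still cheap to change.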
In the future, we will look to identify what real fairness means in the world of AI; unfortunately, defining it is a reality AI practitioners still struggle with. Remember that technology itself is neutral; it is the humans behind it who introduce bias.
An October 2020 release from Stanford on fairness in healthcare AI, specifically in medical imaging, put it this way: “Bias arises when we build algorithms using datasets that do not mirror the population.” Such nonrepresentative data can confound research findings when generalized to more massive swaths of the population.
The vast majority of the data used to build AI algorithms comes from only 15-20% of the contributing datasets, so balance is required, not only geographically but throughout the feature set. The Stanford release mentioned earlier found little or no representation from the other 47 states; roughly 90% of the imaging data came from California, Massachusetts, and New York. Policymakers, regulators, industry, and academia need to work together to ensure medical AI data reflect America’s diversity across geography and many other essential features and attributes. To that end, nationwide data-sharing initiatives should be a top priority.
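Imbalances like this are easy to surface before training. Here is a quick sketch, assuming the dataset carries a state attribute per record; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical medical-imaging metadata, one row per study.
meta = pd.read_csv("imaging_metadata.csv")

# Share of studies contributed by each state, largest first.
by_state = meta["state"].value_counts(normalize=True)
print(by_state.head(10))

# Flag the concentration problem: how much data do the top 3 states hold?
top3 = by_state.head(3).sum()
print(f"top 3 states contribute {top3:.0%} of the data")
```

Running a check like this against the Stanford finding would immediately flag the three-state concentration, long before any model is trained on it.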
Finally, it is talented humans, shaped by their societal and educational backgrounds, who form these biases. The best way to be equipped to recognize and reduce bias is to build a diverse workforce or team. Responsible AI, or ethical AI, tends to intersect with human rights, human evolution, gender classification, voice recognition, and the augmentation of all of the above.