Your AI Strategy Canvas: Part 2

The AI Strategy course at UC Berkeley has come to an abrupt end, but my formal education in AI will continue. I want to thank UC Berkeley professors Alberto Todeschini, Ph.D., and Stuart Russell, who developed a comprehensive course on implementing AI in business. After three certificates, Alberto has encouraged me to pursue a master's in Artificial Intelligence. My wife and I are contemplating this next step.

Part two of our AI Strategy Canvas examines external factors and fairness, particularly the governing principles and human requirements that can impede or accelerate an AI strategy. Companies that want to stand a chance against their competition must adopt some form of AI, if only to survive; not adopting AI risks making your company obsolete.

Governance is one of the essential AI requirements for any business that plans to prosper in a world of AI. US corporations that consumers trust with their data have far exceeded the human and cost efficiencies expected from AI, compared with corporations that have not been transparent about how their AI is used or managed.

Governance should be based on a set of core principles or ideals. Aligned with the company's mission, this set of principles is the foundational cornerstone of any AI implementation. The rapid acceleration of AI in every industry, from email to healthcare, must be tempered by codifying codes of conduct and beliefs before introducing a single AI concept or implementation. When we introduce a training dataset to our partner solutions, a data model to our ecosystem, or even an AI digital tool to our site visitors, there should be a governing body that allows a project to progress, or throttles or dismantles it, based on that set of core principles.

Below is an example set of more detailed core principles that could be adopted before AI is deployed in your company; they should reflect your types of use cases and their societal and academic implications, along with private feedback used to fine-tune your policies. We now have a massive set of unique datasets available to us from cloud providers, private companies, academic organizations, and many governmental institutions. These datasets, if and when introduced into your organization, should be precisely aligned with your principles to form a moral and virtuous baseline. Here are some examples of core principles:

  • AI should not impede daily life for a consumer.
  • AI should avoid creating unfair bias (good luck with that :-).
  • AI should be built and tested for safety.
  • AI should be responsible and incorporate privacy.
  • AI should maintain scientific excellence standards.
  • AI should scale and benefit from diverse backgrounds.

Human Elements

Below are the human elements an entity will need to begin implementing AI:

  • In most cases, a company will require a full-cycle data scientist who knows statistical modeling.
  • The entity will need a data engineer, preferably one fluent in a programming language such as Python. A steward of the project can also come in handy: a CTO, perhaps, but someone with enough knowledge to see the AI implementation vision and ask the tough questions needed to extract knowledge from the data.

Remember that 80% of the time invested in implementing a strategy and running a successful model will be spent cleaning and organizing your data. Nothing good can happen with data that is not cleaned, is inappropriately labeled, or has missing data points in its feature set. Depending on the amount of training data, two data scientists should suffice for a medium-sized business. The company will also need access to the right tools in the cloud. For example, if your organization uses Google Cloud, you will need access to modeling tools such as BigQuery, XGBoost, DNNs, and AutoML, among many others, to take the idea from conception to production. As I noted in AI Strategy Part 1, start small and double down after every successful implementation. Your team will have to assess what success looks like in an AI implementation.
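
Since so much of the effort lives in that cleaning step, here is a minimal sketch of what a first pass might look like in pandas. The file name (customers.csv), the label column, and the 70% threshold are assumptions for illustration, not part of any specific toolchain.

    import pandas as pd

    # Load a hypothetical raw dataset; the file and columns are illustrative.
    df = pd.read_csv("customers.csv")

    # Drop columns that are mostly empty -- they add noise, not signal.
    df = df.dropna(axis=1, thresh=int(0.7 * len(df)))

    # Fill remaining numeric gaps with the column median, a conservative default.
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

    # Normalize label casing so "Churned" and "churned" are one category.
    df["label"] = df["label"].str.strip().str.lower()

    # Drop exact duplicate rows before any training split.
    df = df.drop_duplicates()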

Fairness

Human bias is connected to governance. Humans have been at the center of both consuming AI implementations and building the models, and there is human bias at every step of the implementation: collecting, organizing, labeling, and training the datasets; filtering, ranking, and aggregating; and deciding whether to introduce third-party datasets. Once an output is formed, user behavior bias informs future data collection. It is a vicious cycle of human bias throughout the implementation, and once a successful implementation is completed, scaling other AI strategies across your business will introduce additional human bias. These tendencies are dangerous to your company and, more importantly, to your partner solutions and customers. To help mitigate the bias, a diverse set of people across multiple ethnic backgrounds, geographies, and aptitudes toward AI is paramount to deploying fairness across your AI initiatives.
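
One lightweight way to surface the output side of that cycle is to compare how a model treats different groups. The sketch below computes per-group selection rates with pandas; the data, the "group" and "approved" column names, and the 0.2 gap threshold are all invented for illustration.

    import pandas as pd

    # Hypothetical model outputs: one row per applicant, with a protected
    # attribute and the model's binary decision. All values are illustrative.
    results = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b", "b", "a"],
        "approved": [1,   1,   0,   0,   0,   1,   0,   1],
    })

    # Selection rate per group: a first-pass check for demographic parity.
    rates = results.groupby("group")["approved"].mean()
    print(rates)

    # Flag the model for review if the gap between groups is large.
    if rates.max() - rates.min() > 0.2:  # 0.2 is an assumed threshold
        print("Warning: approval rates diverge across groups; audit the model.")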

Building AI models to be accountable is the first baby step toward success. When you define why you want to implement AI, ask yourself: What issues will the model solve? Who is the intended user? When collecting and preparing data, ask how the training data was collected, tagged, or labeled. Is the training data representative of the real world? How was the model trained, and what was the demographic of the people who trained it? Was the model tested, and on which test datasets? Is the model behaving as expected? Why did the model fail? Is the model trustworthy?
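
One way to make those questions stick is to record the answers as structured metadata that travels with the model, in the spirit of a model card. The sketch below is a hypothetical template; the field names and file name are assumptions, not a standard.

    import json

    # A lightweight "model card" capturing the questions above as structured
    # metadata that ships with the model. Field values here are placeholders.
    model_card = {
        "purpose": "What issue does the model solve, and for whom?",
        "intended_user": "e.g., internal support agents, not end consumers",
        "training_data": {
            "collection": "How was it collected, tagged, or labeled?",
            "representative": "Does it mirror the real-world population?",
        },
        "training": {
            "procedure": "How was the model trained, and by whom?",
            "team_demographics": "Recorded to surface potential blind spots",
        },
        "evaluation": {
            "test_sets": "Which held-out datasets were used?",
            "behavior": "Is the model behaving as expected? Known failures?",
        },
    }

    # Persist the card alongside the model artifacts for auditability.
    with open("model_card.json", "w") as f:
        json.dump(model_card, f, indent=2)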

Going forward, we will look to identify what real fairness means in the world of AI. Unfortunately, this is the question AI practitioners currently struggle with. Remember that technology itself is neutral.

An October 2020 release from Stanford on fairness in healthcare AI, specifically in medical imaging, notes: "Bias arises when we build algorithms using datasets that do not mirror the population." When we generalize to larger swathes of the population, such nonrepresentative data can confound research findings.

The vast majority of the data used to build AI algorithms comes from only 15-20% of the contributing datasets, so balance is required, not only geographically but throughout the feature set. The Stanford release on fairness mentioned earlier found little or no representation from 47 of the 50 states: 90% of the imaging data came from California, Massachusetts, and New York. Policymakers, regulators, industry, and academia need to work together to ensure medical AI data reflects America's diversity across geography and many other essential features and attributes. To that end, nationwide data-sharing initiatives should be a top priority.
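
To make that kind of imbalance concrete, a quick check can compare each state's share of the data against its share of the population. The record counts and population shares below are invented for illustration; real figures would come from your own dataset and census data.

    import pandas as pd

    # Hypothetical counts of imaging records per state (illustrative numbers).
    records = pd.Series({"CA": 4500, "MA": 2800, "NY": 1700, "TX": 50, "FL": 30})

    # Approximate share of the US population per state (assumed values).
    population_share = pd.Series({"CA": 0.12, "MA": 0.02, "NY": 0.06,
                                  "TX": 0.09, "FL": 0.07})

    data_share = records / records.sum()

    # Representation ratio: 1.0 means the state contributes data in
    # proportion to its population; far below 1.0 means underrepresented.
    ratio = (data_share / population_share).round(2)
    print(ratio.sort_values())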

Finally, it is talented humans, shaped by their societal and educational backgrounds, who form these biases. The best way to understand what being less biased means is to build a diverse workforce or team that can mitigate human bias. Responsible AI, or ethical AI, tends to intersect with human rights, human evolution, gender classification, voice recognition, and the augmentation of each, as mentioned above.

By Fred Tabsharani, Founder and CEO at Loxz Digital Group

Fred Tabsharani is Founder and CEO of Loxz Digital Group, a machine learning collective with an 18-member team. He has spent the last 15 years as a globally recognized digital growth leader. He holds an MBA from John F. Kennedy University and has added five AI/ML certifications, two from UC Berkeley (SOI) Google and two from IBM. Fred is a 10-year veteran of M3AAWG and an Armenian General Benevolent Union (AGBU) Olympic basketball champion.
