Artificial Intelligence Business: How you can profit from AI
Przemek Chojecki
Cons of using AI
Using Artificial Intelligence solutions carries three main risks.
Firstly, the machines may have hidden biases due to the data provided for training. For instance, if a system learns which job applicants to accept for an interview from a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate their racial, gender, ethnic, or other biases. These biases are hard to detect because they do not appear explicitly; instead they are embedded in the model's behaviour alongside the other factors it considers.
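As an illustration, a simple first check for this kind of bias is to compare the model's selection rates across demographic groups. The sketch below is only a minimal starting point, not a full fairness audit, and it assumes a hypothetical pandas DataFrame with a sensitive-attribute column and a trained classifier with a scikit-learn style interface.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, model, feature_cols, group_col="group"):
    """Compare the share of positive predictions per demographic group.

    Assumes `model` exposes a scikit-learn style `predict` method and that
    `df` contains the feature columns plus a sensitive-attribute column.
    """
    preds = model.predict(df[feature_cols])
    rates = df.assign(prediction=preds).groupby(group_col)["prediction"].mean()
    return rates  # large gaps between groups hint at a learned bias

# Hypothetical usage (data and model names are placeholders):
# print(selection_rates(applicants, recruiter_model, ["years_exp", "degree"], "gender"))
```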
The second risk is that, unlike traditional software systems built on explicit logic rules, neural network systems deal with statistical truths rather than literal truths. It is therefore much harder, and sometimes impossible, to prove that the system will work in all cases, especially in situations that weren't represented in the training data. This lack of verifiability is a serious concern in critical applications, such as controlling a nuclear power plant, or when life-or-death decisions are involved (healthcare, autonomous vehicles).
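A toy example of this "statistical truth" problem: a classifier trained on a narrow range of inputs will still produce confident-looking probabilities far outside that range, even though nothing in the training data justifies them. The sketch below uses scikit-learn's logistic regression purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train on one-dimensional inputs between 0 and 1 only.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(200, 1))
y_train = (X_train[:, 0] > 0.5).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# Query a point far outside the training distribution.
print(clf.predict_proba([[100.0]]))
# The model reports near-certain probabilities, yet the training data
# says nothing about inputs this far out; there is no guarantee here.
```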
The third risk is explainability. When a machine learning system makes errors, as it almost inevitably will, diagnosing and correcting precisely what went wrong can be difficult: the underlying model is convoluted and depends on many interacting factors that are hard to unwind.
Having discussed the risks, let us now turn to the limitations of AI. Like every other technology, AI has its shortcomings. Starting with data requirements and going beyond them, we can list four challenges:
The first challenge appears when we want to apply supervised learning and need labeled data for training. Labeling is now often done by hired annotators, and it can take considerable time to prepare an adequate dataset. New algorithms are emerging that require less data or generate labels on their own, as in the semi-supervised sketch below.
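One family of such techniques is semi-supervised self-training, where a model fitted on a small labeled set assigns pseudo-labels to the unlabeled examples it is most confident about and retrains on them. A minimal sketch with scikit-learn, on synthetic data for illustration (unlabeled points are marked with -1):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy data: pretend only about 10% of the labels were annotated by hand.
X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) > 0.1
y_partial[unlabeled] = -1  # scikit-learn's convention for "no label"

# Iteratively pseudo-label the unlabeled points the model is confident about.
self_training = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
self_training.fit(X, y_partial)

print("Accuracy on all points:", self_training.score(X, y))
```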
The second difficulty lies in obtaining data sets that are sufficiently large and comprehensive for training. How much data is enough depends on the algorithm, but for many business use cases creating or obtaining such massive data sets can be difficult: think of the limited clinical-trial data available for predicting healthcare treatment outcomes more accurately.
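A practical way to check whether more data would actually help is to plot a learning curve: model performance as a function of training-set size. A minimal sketch with scikit-learn, again on synthetic data standing in for a real business dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_informative=10, random_state=0)

# Evaluate the same model with growing fractions of the training data.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training samples -> validation accuracy {score:.3f}")
# If the curve has flattened, collecting more data of the same kind is
# unlikely to help; if it is still rising, more data should pay off.
```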
The third challenge is explaining the results of complex machine learning models: why is this particular decision suggested? Explainability is vital in certified systems, such as in healthcare or finance, where regulations play a significant role.
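There are tools that give at least a partial answer. One simple, model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model's score drops. A sketch using scikit-learn, with a standard dataset standing in for, say, a clinical prediction task:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} importance {result.importances_mean[i]:.3f}")
```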
The fourth limitation is generalizability. Transferring knowledge from one set of circumstances to another is still one of the hardest machine learning tasks - it is studied under the name of transfer learning. Lack of transferability means that companies need to retrain their models and commit additional resources whenever they step outside of known cases.
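For a sense of what the standard transfer-learning recipe looks like in practice: take a network pre-trained on a large generic dataset, freeze its learned features, and retrain only a small task-specific head. A minimal sketch with PyTorch and a recent torchvision; the number of target classes is a placeholder:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: number of classes in the new task

# Start from a network pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace only the final classification layer, which will be
# trained on the new, much smaller dataset.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# From here, train as usual, optimising only model.fc.parameters().
```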
Summing up, AI is a great tool for any organisation that can harness its power. However, it is not a magical box that solves every type of problem, and doing it properly often requires rich resources, both in talent and in infrastructure.