ACTS Blog Selection
AI and Collaboration: A Human Angle
I wonder if we’re overlooking an important implication of AI and generative AI for the future of the enterprise. If, as seems to be the case, many employees will use generative AI applications to assist them and interactively support their work, then a new style of work is emerging. Success for an employee will mean making the most of the AI tools with which they collaborate. We will want to hire employees who are especially good at working interactively with AI (remember how the ability to use word processors and spreadsheets was a job qualification in the old days?). We will design employee roles to maximize the benefit of collaboration with AI tools. Remote work and the gig economy have already dramatically changed the nature of work; generative AI is another potential disruption on the horizon.
I have a few reasons for thinking this is an important new direction. The implications are clearest in using generative AI to support the software development process. As software developers write code, AI can make suggestions, flag errors and security vulnerabilities, research programming language and API usage questions, and handle mundane, repetitive code-writing tasks. It can also help programmers write automated tests and convert code from one programming language to another.
Many programmers have worked in pair-programming environments where another programmer has sat beside them and contributed ideas. But working well with an AI peer is likely to be a new skill. Without breaking their train of thought, the programmer can draw on a pretrained model’s vast knowledge and delegate the boring aspects of the work to it. The AI companion will not complain any more than my printer complains at having to print an early draft (groan) of my blog post.
Programmers who become proficient at using these AI companions will write code faster. Their code will be more secure, resilient, and compliant with standards, making them more desirable employees. There will need to be a corresponding cultural and attitudinal change—many coders are justifiably proud of their ability to write good code and may need to learn to accept ideas from their AI companions.
The idea extends to non-IT employees. Imagine generative AI acting as an especially astute search engine, retrieving and presenting information relevant to the task at hand. An employee skilled at using that information will perform better and more efficiently. And it is a specialized skill: it requires shifting attention between the “conversation” with the AI and the normal flow of work, and making good decisions about which of the AI’s ideas to incorporate and which to ignore. It might involve asking the AI clarifying or follow-up questions. Again, there is a cultural change involved. I imagine that skilled researchers took a little time to become comfortable with the idea that search engines and even Wikipedia can contribute to research even though they can’t be relied upon entirely.
When we draw a value stream map for a business process (i.e., the steps that must be taken to deliver a finished result), it can bias our thinking: it leads us to ask whether AI can substitute for any of the process steps. But the value stream map doesn’t show the tools humans use to accomplish each step. An important aspect of AI, especially generative AI, may be how it empowers humans in each step to do their jobs better.
That’s not to say that machines won’t replace some roles; that’s happening already. But generative AI has proven very effective in interactive chat uses; it’s a small step from using AI as a chatbot to seeing it as an employee companion or assistant.
AI applications assisting employees may also accelerate their progress and training, thereby advancing their career paths. With AI assistance, interns may quickly become skilled employees. In the case of software development, an AI assistant that immediately points out a security vulnerability that a programmer has introduced teaches them about the vulnerability and how to avoid it in the future. In this sense, AI contributes to the education of the employee and the development of their skills.
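To make the teaching moment concrete, here is a minimal, hypothetical sketch of the kind of flaw an AI coding assistant might flag. The scenario is invented for illustration: a programmer builds a SQL query by string interpolation (inviting SQL injection), and the assistant's suggested fix is a parameterized query. The function names and schema are assumptions, not from any specific tool.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # An assistant would flag this line: untrusted input is spliced
    # directly into the SQL text, so a crafted username can rewrite
    # the query (classic SQL injection).
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # The suggested fix: a parameterized query lets the driver treat
    # the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A malicious "username" that turns the unsafe query into
# "... WHERE name = '' OR '1'='1'", matching every row.
payload = "' OR '1'='1"
leaked = find_user_unsafe(conn, payload)   # returns the whole table
blocked = find_user_safe(conn, payload)    # returns no rows
```

Seeing the warning and the fix side by side at the moment the bug is written is exactly the kind of in-context lesson the paragraph above describes.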
The assistive use of AI is likely to be a short-term focus for risk-averse enterprises, which may fear the possibility of AI model biases, mistakes (e.g., “hallucinations”), and potential security issues. Some companies may fear their AI applications expressing an unacceptable social bias while communicating with customers. They may fear the chatbot “hallucinating” and offering a nonexistent deal to customers. And the model may be vulnerable to “poisoning” by malicious users who try to insert false beliefs into its conceptual structure. These fears may or may not reflect real risks, but knowing large enterprises, I believe they will lead to real worries.
Of course, any employee interacting with the outside world presents a similar risk—they may offend customers, express opinions that the company doesn’t agree with, or make consequential mistakes. We accept those risks from people, but in some cases, we (probably rightly) judge the risk too high with today’s AI models. Over time, we’d expect this risk to come down, as innovators devise new ways to address it. But for the moment, risk-averse companies may decide to do their early AI experimentation on internal use cases.
For internal use cases, employees can choose to dismiss bad ideas, filter communications, and do further research if the application suggests something implausible. The comparison with search engines is useful. Many employees have gotten used to using search engines while doing their jobs. Along the way, they learned to disregard search results that were irrelevant or led to unreliable sources. They use their critical thinking skills when searching—only a poor employee reports search results without skepticism, synthesis, further analysis, and critical thinking.
It would be a different story if search engines always returned exactly what we were looking for. But we accept that they don’t. Similarly, a lawyer who uses an AI model to write a brief is responsible for ensuring the cases cited exist. Like a search engine, a brief-writing application can be useful without being perfect. Internal-facing generative AI applications represent the proverbial toe in the water for a risk-averse company wanting to take advantage of the new technology.
The implications for people within an enterprise are notable. An employee working in an AI-assisted role will need to develop skills to get the most out of the AI while also treating it critically. They will need to develop a smooth working process and become comfortable accepting the AI’s criticisms and creative ideas. In turn, AI might help develop their skills and accelerate their career progress.
As with much of digital transformation, a company’s ability to get the business results it seeks depends on how it manages the change the transformation brings to its people.