From Buzz to Bust: Investigating IT's Overhyped Technologies

CIOs are not immune to infatuation with the promise of emerging tech. Here, IT leaders and analysts share which technologies they believe are primed to underdeliver, offering advice on right-sizing expectations for each one.

Most CIOs and IT staffers remain, at heart, technologists, with many proclaiming their interest in shiny new tech toys. They may publicly preach “No technology for technology’s sake,” but they still frequently share their fascination with the latest tech gadgets. They’re not the only ones enthralled by tech.

With technology and tech news now both pervasive and mainstream, many outside of IT — from veteran board members to college-age interns — are equally enthusiastic about bleeding-edge technologies. But all that interest can quickly blow past buzz and hit hype — that is, the point where the technology gets seen more as a panacea for whatever plagues us than as the helpful tool it is. It’s then that the hopes for the technology get way ahead of what it can actually deliver today.

“Nearly every new technology is naturally accompanied by hype and/or fear, but at the same time there is almost always a core of merit and business value to that new tech. The challenge is moving from the initial vision/promise stage, to broad commercial and consumer adoption and pervasiveness,” says George Corbin, board director at Edgewell Personal Care; former chief digital officer at Marriott and Mars Inc.; a faculty member at the National Association of Corporate Directors; and an active member of the MIT Sloan CIO Symposium community.

With that in mind, we asked tech leaders in various roles and industries to list what technologies they think are overhyped and to put a more realistic spin on each one’s potential. Here’s what they say on the topic.

1. Generative AI

Perhaps not surprisingly, generative AI tops the list of today’s overhyped tech. No one denies its transformative potential, but digital leaders say a majority of people seem to think generative AI, which Gartner recently placed at the peak of inflated expectations in its 2023 hype cycle, has more capabilities than it does — at least at this time.

Consider some recent survey findings. A July 2023 report from professional services firm KPMG found that 97% of the 200 senior US business leaders it polled anticipate that generative AI will have a huge impact on their organizations in the short term, 93% believe it will provide value to their business, and 80% believe it will disrupt their industry.

Yet most execs also admit they’re not ready to fully harness that potential. Another July report, the IDC Executive Preview sponsored by Teradata and titled “The Possibilities and Realities of Generative AI,” found that 86% of the 900 execs it polled believe more governance is needed to ensure the quality and integrity of gen AI insights, with 66% expressing concerns about gen AI’s potential for bias and disinformation. Additionally, only 30% say they’re extremely prepared or even ready to leverage generative AI today, and just 42% fully believe they’ll have the skills in place to implement the technology in the next 6 to 12 months, among other challenges their gen AI strategies face.

At the same time, today’s hype may be distracting enterprise leaders from fully understanding how generative AI (also known as GAI) will evolve and how they can use that power in the future. “The anticipation and fear of the impact of generative AI in particular, and its relationship to artificial general intelligence (AGI), makes it overhyped,” says Daryl Cromer, vice president and CTO for the PCs and smart devices division at Lenovo.

This overhyped state, he adds, makes it “easy to be overly optimistic about what will happen this year and simultaneously understate what will happen in three to five years.” He says generative AI’s “potential is great; it will transform many industries. But it should be noted that digital transformation is complex and time consuming; it’s not like a firm can just take a GAI ‘black box’ and plug it into their business and achieve increased efficiency right away. There’s more likely to be a J-curve to ROI as a firm incurs expenses acquiring the technology and spends on cloud services to support it. Firms could even encounter pushback from affected stakeholders, like they are now with the case of film and television writers and actors.”

2. Quantum computing

Tech giants, startups, research institutions, and even governments are all working on or investing in quantum computing. There’s good reason for all that interest: Quantum computing uses the principles of quantum mechanics to perform calculations and, for certain classes of problems, promises speedups far beyond what today’s classical computers can deliver.

Yet it’s anyone’s guess when, exactly, this new type of computing will become operational. There’s even more uncertainty about when, and whether, quantum computing will become available to anyone outside the small circle of players already in the space today.

“People may think it’s going to replace [our classical computing] computers but it’s not,” at least in the foreseeable future, says Brian Hopkins, vice president for the emerging tech portfolio at research firm Forrester. Hopkins adds: “You see these big announcements from IBM or Google about quantum computing and people think, ‘Quantum is close.’ Those make great headlines, but the truth about quantum computing’s future is far more nuanced and [business leaders] need to understand that.”

Yet that isn’t holding back expectations. A 2022 survey of 501 UK executives by professional services firm EY found that 97% expect quantum computing to disrupt their sectors to a high or moderate extent, with 48% believing “that quantum computing will reach sufficient maturity to play a significant role in the activities of most companies in their respective sectors by 2025.” The EY survey also reveals how unprepared organizations are to meet what they believe is ahead: Only 33% said their organizations have started to plan to prepare for the technology’s commercialization and only 24% have set up or plan to set up pilot teams to explore its potential.

“People are aware quantum computing is coming, but I think there is an underestimation of what it will take [to leverage its power],” adds Seth Robinson, vice president for industry research at trade association CompTIA. “I think people think it’s just going to be a much more powerful way of running what we already have, but in reality what we have is going to have to be rewritten to work with quantum. You won’t be able to just swap out the engine. And it’s not going to turn into a product for the mass market.”
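
Robinson’s point about rewriting is easier to see in code. The minimal sketch below assumes the open-source Qiskit library and its bundled classical simulator; it shows how a quantum program is expressed: as a circuit of gates acting on qubits, not as a faster drop-in engine for existing software.

# A minimal, illustrative quantum program (assumes the open-source Qiskit library).
# Quantum work is expressed as circuits of gates on qubits, which is why existing
# classical code cannot simply be "swapped onto" a quantum engine.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)      # two qubits
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0 (a Bell state)
qc.measure_all()

sim = AerSimulator()        # classical simulation of the circuit
counts = sim.run(transpile(qc, sim), shots=1000).result().get_counts()
print(counts)               # roughly half '00' and half '11': correlated outcomes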

3. The metaverse — and extended reality in general

Although some of the excitement about the coming metaverse has died down, some say this concept remains overplayed. They’re skeptical of any claims that the metaverse will have us all living in a new digital realm, and they question whether the metaverse will have any big impact on daily life and everyday business anytime soon. Same goes for extended reality (XR) — that fusion of augmented reality, virtual reality and mixed reality.

“Virtual spaces provide a completely different experience, popularly known as an immersive experience for customers. However, in my opinion, the actual market potential may probably not be as big as it is being projected now,” says Richard August, managing partner for CIO Advisory Services at Tata Consultancy Services. “The number of use cases and utility values are limited, impacting the potential. Devices to support the ubiquity of these technologies such as VR sets are not available at a scalable, affordable price. Additionally, there have been several instances of negative health effects — such as fatigue, impact on vision and hearing — being reported by using the devices that support these technologies, which limits large-scale adoption.”

Forrester’s Hopkins voices similar caution on the technology’s uptake in the near term. “The form factors today aren’t enticing enough for people to adopt this new technology, so [adoption] is going to take longer than people may think,” he says. Hopkins says researchers do, indeed, see areas where the technology has taken off. Extended reality is useful in HR for training employees, and it provides value in industrial use cases where a digital overlay can guide workers through complex scenarios. “But that’s a pretty small slice of the overall opportunity,” he adds.

4. Web3: Blockchain, NFTs, and cryptocurrencies

Similar to their feelings about the immersive web, tech leaders say Web3 and its components — blockchain, NFTs, and cryptocurrencies — haven’t quite delivered on all their promises. “They just need to see more maturity before we invest in those things,” says Rebecca Fox, group CIO for NCC Group, a UK-headquartered IT security company.

Others have made similar observations. Corbin, for one, says blockchain has “huge business potential in smart contracts — supply chain transparency, healthcare, finance, currency, artwork, media, fraud prevention, IP protection, deep fake mitigation — but slow uptake on implementing.” He points out that it’s not as impenetrable as first promoted, and it’s hard to scale. Meanwhile, its decentralized nature coupled with a lack of regulation means that blockchain contracts are not legally recognized in most countries yet, he adds. Digital experts cite issues with other Web3 technologies, too, noting that most companies can’t figure out what to do with cryptocurrencies, for example, as they struggle with how to account for them and how to report them out to the street.

Furthermore, many people remain skeptical about cryptocurrencies and NFTs — especially after the past year’s headlines about crypto exchange problems and NFT devaluations. Advisers say CIOs should, thus, be mindful of the hype but nonetheless keep a watchful eye on the development of these technologies. “Though it’s in its early stages, we’re seeing lots of momentum behind the shift from Web2 to Web3 — and now Web4 — which will undoubtedly transform the way businesses operate, and how we own and transact property. It holds a lot of promise for the philosophical sense of property, ownership, and self-control of your identity inside the broader digital world at large,” says Jeff Wong, EY’s global chief innovation officer. He adds: “At this stage, Web3/4 is an idea that creates more questions than answers, but we think the questions are worth considering.”

Date: August 22, 2023

Author: Mary K. Pratt

Source: CIO

Why You Should Invest in Healthcare Cybersecurity

It’s hard to imagine anything more cynical than holding a hospital to ransom, but that is exactly what’s happening with growing frequency. The healthcare sector is a popular target for cybercriminals. Unscrupulous attackers want data they can sell or use for blackmail, but their actions are putting lives at risk. A cyberattack on healthcare is more than an attack on computers. It is an attack on vulnerable people and the people who are involved in their care; this is well illustrated by the breadth of healthcare organizations, from hospitals to mental health facilities to pharmaceutical companies and diagnostic centres, targeted between June 2020 and September 2021.

Cyberattacks on healthcare have continued to plague the sector since the start of the COVID-19 pandemic. At the CyberPeace Institute, we have analyzed data on over 235 cyberattacks (excluding data breaches) against the healthcare sector across 33 countries. While this is a mere fraction of the full scale of such attacks, it provides an important indicator of the rising negative trend and its implications for access to critical care.

Over 10 million records of every type have been stolen, including social security numbers, patient medical records, financial data, HIV test results, and the private details of medical donors. On average, 155,000 records are breached during an attack on the sector, and the number can be far higher, with some incidents reporting breaches of over 3 million records.

Poor bill of health

Ransomware attacks on the sector, where threat actors lock IT systems and demand payment to unlock them, have a direct impact on people. Patient care services are particularly vulnerable; their high dependence on technology combined with the critical nature of their daily operations means that ransomware attacks endanger lives. Imagine being in an ambulance that is diverted because a cyberattack has caused chaos at your local emergency department. This is not a hypothetical situation. We found that 15% of ransomware attacks led to patients being redirected to other facilities, 20% caused appointment cancellations, and some services were disrupted for nearly four months.

Ransomware attacks on the sector occurred at a rate of four incidents per week in the first half of 2021, and we know this is just the tip of the iceberg, as there is a significant absence of public reporting and available data in many regions. Threat actors are becoming more ruthless, often copying the data, and threatening to release it online unless they receive further payment.

Health records are low-risk, high-reward targets for cybercriminals – each record can fetch a high value on the underground market, and there is little chance of those responsible being caught. Criminal groups operate across a wide range of jurisdictions and regularly update their methods, yet we continue to see that attackers act with impunity.

Securing the right to healthcare

We can, and should, be doing better. The first step lies with cybersecurity itself. Healthcare cybersecurity suffers from a general lack of human resources; more people need to be trained and deployed.

Software and security tools need to be secure by design. This means putting security considerations at the centre of the product, from the very beginning. Too often security options are added as a final step, which means they paper over inherent weaknesses and loopholes.

Healthcare organizations should also do more, particularly by increasing their investment in cybersecurity to secure infrastructure, patch vulnerabilities, and update systems, as well as by building and maintaining the required level of cybersecurity awareness and training among staff. Healthcare organizations also need to commit to due diligence and to standard rules of incident handling.

But these matters are ultimately too big for individual organizations to solve alone. Governments must take proactive steps to protect the healthcare sector. They must raise the capacity of their national law enforcement agencies and judiciary to act in the event of extraterritorial cases so that threat actors are held to account. This requires the political will and international cooperation of governments, including for investigation and prosecution of threat actors.

One point of real concern from our analysis is that information about cyberattacks, such as ransomware incidents, is inadequate due to under-reporting and a lack of documentation, making it impossible to have a global view of the extent of cyberattacks against the healthcare sector. To build even a partial picture of such attacks, we had to access and aggregate the data that ransomware operators – the criminals themselves – publish or leak online.

It is not acceptable that these criminals are the most significant source of information about cyber incidents and the threats posed to the sector. We want to shift away from data published or leaked by malicious actors and encourage stronger reporting of, and transparency about, cyberattacks by the healthcare sector itself, to improve both the understanding of the threat and the ability to take appropriate action to reduce it.

Our analysis shows that 69% of countries for which we have recorded attacks have classified health as critical infrastructure. Healthcare must be recognized as critical infrastructure globally. Designation as critical infrastructure would ensure that the sector is part of national policies and plans to strengthen and maintain its functioning as critical to public health and safety.

Governments must enforce existing laws and norms of behaviour to crack down on threat actors. They should cooperate with each other to ensure that these laws are put into operation in order to tackle criminals that operate without borders. More should be done to technically attribute cyberattacks to identify which actors have carried out and/or enabled the attack.

Health is a fundamental human right. It is the responsibility of governments to lead the way in protecting healthcare. People need access to reliable, safe healthcare, and they should be able to access it without worrying about their privacy, safety and security.

Date: August 15, 2023

Author: World Economic Forum and CyberPeace Institute

The Disconnect between AI Hype and ML Reality: Implications for Business Operations

You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If only. Even before the latest splashes — most notably OpenAI’s ChatGPT and other generative AI tools — the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword “AI” goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.

Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics. This means real value, so long as you eschew false hype that it is “highly accurate,” like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It’s practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML.
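
To make that concrete, the short Python sketch below uses scikit-learn to score customers for churn risk and trigger a retention offer above a chosen threshold. The dataset, column names, and 0.7 cutoff are hypothetical, chosen purely to illustrate how a prediction becomes an operational decision.

# Minimal sketch: a churn prediction driving an operational decision.
# The CSV file, feature columns, and 0.7 threshold are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("customers.csv")                      # hypothetical dataset
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]                                      # 1 = cancelled, 0 = stayed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The output is a probability, not a certainty (no digital crystal ball).
churn_risk = model.predict_proba(X_test)[:, 1]

# The operational decision: offer an incentive to customers above a risk threshold.
for customer_id, risk in zip(X_test.index, churn_risk):
    if risk > 0.7:
        print(f"Customer {customer_id}: churn risk {risk:.2f} -> send retention offer")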

Here’s the problem: Most people conceive of ML as “AI.” This is a reasonable misunderstanding. But “AI” suffers from an unrelenting, incurable case of vagueness — it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you couldn’t overpromise more than you do when you call something “AI.” The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.

This exacerbates a significant problem with ML projects: They often lack a keen focus on their value — exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.

What Does AI Actually Mean?

“‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’”

–Devin Coldewey, TechCrunch

AI cannot get away from AGI for two reasons. First, the term “AI” is generally thrown around without clarifying whether we’re talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.

Second, there’s no satisfactory way to define AI besides AGI. Defining “AI” as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn’t mean AGI, it doesn’t mean anything — other suggested definitions either fail to qualify as “intelligent” in the ambitious spirit implied by “AI” or fail to establish an objective goal. We face this conundrum whether trying to pinpoint 1) a definition for “AI,” 2) the criteria by which a computer would qualify as “intelligent,” or 3) a performance benchmark that would certify true AI. These three are one and the same.

The problem is with the word “intelligence” itself. When used to describe a machine, it’s relentlessly nebulous. That’s bad news if AI is meant to be a legitimate field. Engineering can’t pursue an imprecise goal. If you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is — how well it performs and how close you are to the goal — so that you know you’re making progress and so that you ultimately know when you’ve succeeded in developing it.

In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No, it’s intelligence demonstrated by machines (even more circular, if that’s possible). Rather, it’s a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn’t automatically qualify a system as intelligent).

But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn’t distinguish it from a human, say, by interrogating it in a chatroom — the famous Turing Test. But the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once — fool us twice, shame on humanity. Another reason passing the Turing Test misses the mark is that there’s limited value or utility in doing so. If AI could exist, certainly it’s supposed to be useful.

What if we define AI by what it’s capable of? For example, we could define AI as software that can perform a task so difficult that it traditionally requires a human, such as driving a car, mastering chess, or recognizing human faces. It turns out that this definition doesn’t work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well-understood and well-specified. Once surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn’t seem “intelligent” after all, at least not to the whole-hearted extent intended by the term “AI.” Once computers mastered chess, there was little sentiment that we’d “solved” AI.

This paradox, known as The AI Effect, tells us that, if it’s possible, it’s not intelligent. Suffering from an ever-elusive objective, AI inadvertently equates to “getting computers to do things too difficult for computers to do” — artificial impossibility. No destination will satisfy once you arrive; AI categorically defies definition. With due irony, the computer science pioneer Larry Tesler famously suggested that we might as well define AI as “whatever machines haven’t done yet.”

Ironically, it was ML’s measurable success that hyped up AI in the first place. After all, improving measurable performance is supervised machine learning in a nutshell. The feedback from evaluating the system against a benchmark — such as a sample of labeled data — guides its next improvement. By doing so, ML delivers unprecedented value in countless ways. It has earned its title as “the most important general-purpose technology of our era,” as Harvard Business Review put it. More than anything else, ML’s proven leaps and bounds have fueled AI hype.
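
As a rough illustration of that feedback loop, the sketch below trains a few candidate models, measures each against held-out labeled data, and keeps the best performer. It uses scikit-learn on synthetic data; the candidate settings compared are assumptions chosen only for illustration.

# Minimal sketch of supervised ML's benchmark-driven improvement loop:
# train candidate models, measure each against held-out labeled data,
# and keep whichever performs best. Data and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic labeled data standing in for a real benchmark sample.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

best_score, best_depth = 0.0, None
for depth in (2, 5, 10):                                     # candidate settings
    model = RandomForestClassifier(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))    # measurable performance
    print(f"max_depth={depth}: held-out accuracy={score:.3f}")
    if score > best_score:
        best_score, best_depth = score, depth

print(f"Selected max_depth={best_depth} (accuracy {best_score:.3f})")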

All in with Artificial General Intelligence

“I predict we will see the third AI Winter within the next five years… When I graduated with my Ph.D. in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.”

–Usama Fayyad, June 23, 2022, speaking at Machine Learning Week

There is one way to overcome this definition dilemma: Go all in and define AI as AGI, software capable of any intellectual task humans can do. If this science fiction-sounding goal were achieved, I submit that there would be a strong argument that it qualified as “intelligent.” And it’s a measurable goal, at least in principle if not in practicality. For example, its developers could benchmark the system against a set of 1,000,000 tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee you’d just as well issue to a robot, and even brief, one-paragraph overviews for how the machine should, in the role of CEO, run a Fortune 500 company to profitability.

AGI may set a clear-cut objective, but it’s out of this world — as unwieldy an ambition as there can be. Nobody knows if and when it could be achieved.

Therein lies the problem for typical ML projects. By calling them “AI,” we convey that they sit on the same spectrum as AGI, that they’re built on technology that is actively inching along in that direction. “AI” haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.

It’s understandable that so many would want to claim a piece of the AI pie, if it’s made of the same ingredients as AGI. The wish fulfillment AGI promises — a kind of ultimate power — is so seductive that it’s nearly irresistible.

But there’s a better way forward, one that’s realistic and that I would argue is already exciting enough: running major operations — the main things we do as organizations — more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we’ve got to come down to earth. If your aim is to deliver operational value, don’t buy “AI” and don’t sell “AI.” Say what you mean and mean what you say. If a technology consists of ML, let’s call it that.

Reports of the human mind’s looming obsolescence have been greatly exaggerated, which means another era of AI disillusionment is nigh. And, in the long run, we will continue to experience AI winters so long as we continue to hyperbolically apply the term “AI.” But if we tone down the “AI” rhetoric — or otherwise differentiate ML from AI — we will properly insulate ML as an industry from the next AI Winter. This includes resisting the temptation to ride hype waves and refraining from passively affirming starry-eyed decision makers who appear to be bowing at the altar of an all-capable AI. Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML’s true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.

Date: July 10, 2023

Author: Eric Siegel 

Source: Harvard Business Review
