2 items tagged "Responsibility"

  • Ethical Intelligence: can businesses take the responsibility?

    Ethical Intelligence: can businesses take the responsibility?

    Adding property rights to inherent human data could provide a significant opportunity and differentiator for companies seeking to get ahead of the data ethics crisis and adopt good business ethics around consumer data.

    The ability of a business to operate based on some amount of intelligence is not new. Even before business owners used manual techniques such as writing customer orders in a book or using calculators to help forecast how many pounds of potatoes might be needed for next week's sales, there were forms of "insight searching." Enterprises have always looked for operational efficiencies, and today they are gathering exponentially more intelligence.

    A significant part of business intelligence is understanding customers. The more data a company has about its current or prospective customers' wants, likes, dislikes, behaviors, activities, and lifestyle, the more intelligence that business can generate. In principle, more data suggests the possibility of more intelligence.

    The question is: are most businesses and their employees prepared to be highly intelligent? If a company were to reach a state where it has significant intelligence about its customers, could it resist the urge to manipulate them?

    Suppose a social media site uses data about past activities to conclude that a 14-year-old boy is attracted to other teenage boys. Before he discovers where he might be on the gay/straight spectrum, could the social media executives, employees, and/or algorithms resist the urge to target him with content tagged for members of the LGBTQ community? If they knowingly or unknowingly target him with LGBTQ-relevant content before the child discovers who he might be, is that behavior considered ethical?

    Looking for best practices

    Are businesses prepared to be responsible with significant intelligence, and are there best practices that would give a highly intelligent business an ethical compass?

    The answer is maybe, leaning toward no.

    Business ethics is not something new either. Much like business intelligence, it evolved over time. What is new, though, is that ethics no longer needs to be embedded only in the humans who make business decisions; it must also be embedded in the automated systems that make business decisions. The former, although imperfect, is conceivable. You might be able to hire ethical people or build a culture of ethics in people. The latter is more difficult. Building ethics into systems is neither art nor science. It is a confluence of raw materials, many of which we humans still don't fully understand.

    Business ethics has two components. One is the aforementioned ethics in systems (sometimes called AI ethics), which is primarily focused on the design of algorithms. The other component is data ethics, which is focused on the raw material that goes into those algorithms -- the data itself.

    AI ethics is complex, but it is being studied. At the core of the complexity are human programmers who are usually biased and can have varying ethical frameworks and customs. They may create potentially biased or unethical algorithms.

    Data ethics is not as complex but is not widely studied. It covers areas such as consent for the possession of data, authorization for the use of data, the terms under which an enterprise is permitted to possess and use data, whether the value created from data should be shared with the data's source (such as a human), and how permission is secured to share insights derived from data.

    Another area of data ethics is whether the entire data set is representative of society. For example, is an algorithm that determines how to spot good resumes being trained with 80 percent of its resumes from men and just 20 percent from women?
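    A representativeness audit of this kind can start with something as simple as counting each group's share of the training set. This sketch uses invented numbers that mirror the hypothetical 80/20 resume split above:

```python
from collections import Counter

def group_shares(labels):
    """Return each group's share of the dataset as a fraction of the total."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical resume training set: 80 resumes from men, 20 from women
genders = ["M"] * 80 + ["F"] * 20
shares = group_shares(genders)          # {'M': 0.8, 'F': 0.2}

# Flag groups far below their share of the wider population (assumed ~50/50)
underrepresented = {g: s for g, s in shares.items() if s < 0.4}
```

    A check like this will not catch subtler biases, but it surfaces the most obvious skew before any training run begins.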

    These are large social, economic, and historical constructs to sort out. As companies become exponentially more intelligent, the need for business ethics will increase in step. As a starting point, corporations and executives should consider consent for and authorization of the data used in business intelligence. Was the data collected with proper consent? That is, does the user really know that their data is being monetized, or was that buried in a long terms-and-conditions agreement? What were those terms and conditions? Was the data donated, was it leased, or was it "sort of lifted" from the user?

    Many questions, limited answers.

    The property rights model

    Silicon Valley is currently burning in a data ethics crisis. At the core is a growing social divide about data ownership between consumers, communities, corporations, and countries. We tend to anticipate that new problems need new solutions. In reality, sometimes the best solution is to take something we already know and understand and retrofit it into something new.

    One emerging construct takes a familiar legal and commercial framework -- property -- and retrofits it to help consumers and corporations find agreement on the many unanswered questions of data ownership. Treating data under the law of property provides a set of agreements to bridge the growing divide between consumers and corporations on the issues of data ownership, use, and consideration for the value derived from data.

    If consumer data is treated as personal property, consumers and enterprises can reach agreement using well-understood and accepted practices such as a title of ownership for one's data, track and trace of data as property, leasing of the data as property, protection from theft, taxation of income created from said data, tax write-offs for donating the data, and the ability to include data property as part of one's estate.
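    The agreements above are legal rather than technical constructs, but a rough sketch can show what a machine-readable "data property" record might contain. Every name and field here is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataTitle:
    """Hypothetical title of ownership for a personal data asset."""
    owner: str            # the person the data is inherent to
    asset_id: str         # identifier for the data set covered by the title
    issued: date
    transfers: list = field(default_factory=list)  # track-and-trace history

@dataclass
class DataLease:
    """Hypothetical lease granting an enterprise limited use of the data."""
    title: DataTitle
    lessee: str           # the enterprise leasing the data
    purpose: str          # the authorized use
    expires: date
    royalty_pct: float    # share of derived value returned to the owner

# Example: Alice leases her fitness data to a fictional analytics firm
title = DataTitle(owner="Alice", asset_id="fitness-2023", issued=date(2023, 1, 1))
lease = DataLease(title=title, lessee="Acme Analytics",
                  purpose="aggregate health research", expires=date(2024, 1, 1),
                  royalty_pct=5.0)
```

    Title, lease, track-and-trace, and consideration for value each map onto a field here, which is exactly the appeal of the property framing: the record types already have well-understood legal analogues.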

    For corporations and executives, with increasing business intelligence comes increasing business ethics responsibilities.

    What is your strategy?

    Author: Richie Etwaru

    Source: TDWI

  • The Future of AI: Setting the Priorities

    The Future of AI: Setting the Priorities

    Leaders should focus on these three priorities to ensure their AI initiatives provide business value and do so ethically.

    An artificial intelligence (AI) algorithm designed to scan electronic medical records for potential clinical trial participants can perform at high accuracy in some cases. However, depending on the pool of patients, where they’re located, and what the trial is for, there are inherent biases in the selection process. Just because the algorithm performs a given task correctly doesn’t mean it does so in a responsible, ethical way.

    One well-known example is Amazon’s sexist AI recruiting algorithm that prioritized hiring men over women. The algorithm learned from the company’s existing team -- not inaccurate information -- and was as flawed as the history used to train it. AI has great potential for good, but it is only as effective as the humans and data powering it. These biases may not mean much when it comes to verticals such as retail or to the ads you’re being served, but they can be a life-or-death matter in the healthcare industry.
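    One common fairness check, sketched here with invented numbers (this is the standard selection-rate comparison behind demographic parity, not Amazon’s actual audit), is to compare how often a model’s positive decision falls on each group:

```python
def selection_rates(decisions, groups):
    """Fraction of positive decisions each group receives."""
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if decision else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical screening outcomes: 1 = resume advanced to interview
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["M"] * 5 + ["F"] * 5
rates = selection_rates(decisions, groups)       # {'M': 0.8, 'F': 0.2}
gap = max(rates.values()) - min(rates.values())  # demographic parity gap
```

    A large gap does not by itself prove the model is unfair, but it is a cheap signal that the training history deserves the kind of scrutiny Amazon’s case made famous.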

    Fortunately, as AI technology and tools mature, so, too, do best practices and regulatory frameworks around ethics. Just as it did with GDPR for data protection, the EU has proposed a legal framework to ensure AI tools are safer and more trustworthy for users, but we can’t wait until government-mandated laws and best practices for AI are passed. For now, it’s on us -- the people who build these products and services -- to ensure AI-powered products and services are doing more good than harm.

    Here are three priorities leaders should focus on to ensure their AI initiatives provide business value and do so ethically.


    Accuracy

    It's one thing for AI to understand the English language, but it's another to understand the nuances of language in domains such as law or clinical practice. Accurately trained models -- and ones that learn quickly -- are key to staying ahead. Achieving this is no easy undertaking: it requires ongoing monitoring, retraining, and tuning. In essence, the job is never quite done. Accuracy is also not just one feature you optimize for; there are separate metrics for stability, coverage, bias, and online performance, among other factors.

    Think of an AI model going into production as if it were a new car driving off the lot. As soon as it’s out in the wild, the model begins to degrade. Different environments and inputs take a toll on what once worked perfectly in a controlled research setting. To ensure models remain accurate over time, you need to dedicate the appropriate resources -- tech, talent, and software -- to keep them that way. In other words, constant monitoring and tuning is part of the job, in the same way that application and data monitoring, DevOps, and SecOps are ongoing efforts.
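    As a minimal sketch of that monitoring, assuming a baseline feature or score sample is logged at release time, the widely used Population Stability Index (PSI) compares the live distribution against the baseline. The 0.2 alert threshold is a common rule of thumb, not a universal standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data.

    Rule of thumb (an assumption of this sketch, not a standard):
    PSI > 0.2 suggests significant drift and a need to retrain or retune.
    """
    lo, hi = min(expected), max(expected)

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            # clamp out-of-range live values into the edge bins
            i = min(max(int((x - lo) / (hi - lo) * bins), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Baseline scores logged at release vs. this week's production scores
baseline = [x / 100 for x in range(100)]
live = [x / 100 + 0.5 for x in range(100)]  # the distribution has shifted
needs_retraining = psi(baseline, live) > 0.2
```

    Wiring a check like this into the same dashboards that watch application and data health is one concrete way to make model upkeep a routine operational task rather than an afterthought.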

    Responsible AI Practices

    With the maturity, growth, and democratization of AI comes a responsibility to prioritize responsible practices. Think of industries such as the media: the spread of fake news, toxic content, and even deepfakes has become a serious problem in recent years. This data -- accurate or not -- is the source feeding your AI algorithms. For example, a UC Berkeley study published in Science showed that risk prediction tools used in healthcare exhibited significant racial bias. There are countless other studies that reflect similar problems with AI and patient care.

    Healthcare is leading the way for responsible AI and data practices, although it’s still an early work in progress. Bound by stringent regulations and an oath to do no harm, healthcare offers lessons -- imperfect as they are -- that many other industries can borrow. All companies using AI should have a system of checks and balances, or an ethics committee, to ensure appropriate measures are in place. These steps should be implemented before AI is in use to ensure it’s built with good intent. Encourage dialogue around ethical practices, and do it from the top down. A culture of “see something, say something” will help you remain accountable.

    No-Code and Low-Code Experiences

    Remember when building a website was a major software engineering project? Building an e-commerce website was an eight-figure, multiyear investment in the mid-1990s. Fast forward to today and anyone can start selling in a few hours for $29/month (with a much broader feature set). AI is gradually going through the same change, with no-code tools getting into the hands of doctors, teachers, lawyers, marketers, and other domain experts. Leveling the playing field for AI is a crucial step for humanity, but also for the sake of accuracy and ethics: to truly be representative, systems must be built by people from all demographics, geographies, and backgrounds.

    Although the terms are often used interchangeably, no-code and low-code play different roles: no-code offerings are key to getting AI into the hands of the masses, while low-code offerings help with simpler coding tasks, freeing up data scientists to focus on more complex projects. This democratization of AI at all levels has become a growing area of interest and will help move the needle for accurate and ethical AI. Beyond increasing the diversity of practitioners, no-code tools embody best practices and processes, making them easier to adopt and scale.

    Final Thoughts

    The rapid evolution of AI brings with it a trove of new ethical questions and concerns. Whether an algorithm delivers on its intended promise is one thing; whether its downstream effects are positive is another. At best an algorithm is merely inaccurate or unstable; at worst it is harmful and costs lives. Getting AI into the hands of more people through low- and no-code functionality is one step toward mitigating some of these risks. By prioritizing accuracy, responsible practices, and usability, you can make your AI initiative part of the solution, not the problem.

    Author: David Talby

    Source: TDWI
