2 items tagged "image recognition"

  • Google buys French image recognition startup Moodstocks

    Two weeks after Twitter acquired Magic Pony to advance its machine learning smarts for improving users’ experience of photos and videos on its platform, Google is following suit. Today, the Android maker and search giant announced that it has acquired Moodstocks, a Paris-based startup that develops machine-learning-based image recognition technology for smartphones, whose developer APIs have been described as “Shazam for images.”

    Moodstocks’ API and SDK will be discontinued “soon”, according to an announcement on the company’s homepage. “Our focus will be to build great image recognition tools within Google, but rest assured that current paying Moodstocks customers will be able to use it until the end of their subscription,” the company noted.

    Terms of the deal were not disclosed and it’s not clear how much Moodstocks had raised: CrunchBase doesn’t note any VC money, although when we first wrote about the company back in 2010 we noted that it had raised $500,000 in seed funding from European investors. As a point of reference, Twitter paid a whopping $150 million in cash for its UK acquisition of Magic Pony the other week.

    While Magic Pony was young and acquired while still largely under the radar, Moodstocks has been around since 2008, all the while working around the basic premise of improving image recognition via mobile devices. “Our dream has been to give eyes to machines by turning cameras into smart sensors able to make sense of their surroundings,” the company writes in its acquisition/farewell/hello note.

    It looks like Moodstocks originally tried its hand at creating its own consumer apps, one of which was a social networking app of sorts: it let people snap pictures of media like books, and then add their own annotations about that media that would link up with other people’s annotations, by way of special image recognition behind the scenes that would match up the “fingerprint” in different people’s snaps.

    An interesting idea, but it didn’t take off, and so the company pivoted to offering its tech to other developers. At least one of its apps, Moodstocks Scanner, turned into a tool for developers to test the SDK before implementing it in their own apps.

    Google doesn’t specify whether it will be launching its own SDK for developers to incorporate more imaging services into apps, or whether it will be incorporating the tech solely into its own consumer-facing services. What it does say is that it will be bringing Moodstocks’ team — the startup was co-founded by Denis Brule and Cedric Deltheil — and the company’s tech into its R&D operation based in France.

    In a short statement, Vincent Simonet, who heads up that center, says Google sees Moodstocks’ work contributing to better image searches, a service that is of course already offered in Google but is now going to be improved. “We have made great strides in terms of visual recognition,” he writes (in French), “but there is still much to do in this area.”

    It’s not clear if Moodstocks’ work will remain something intended for smartphones or if it will be applied elsewhere. There are already areas where Moodstocks’ machine learning algorithms could be applied, for example in Google’s searches, to “learn” more about how to find images that are similar and/or related to verbal search terms. Google also could potentially use the tech in an existing app like Photos.

    Or it could make an appearance in a future product that has yet to be launched, although the more obvious use case, for smartphones, is already here: on a small handset with a touchscreen, users are generally less inclined to enter text, and they may be using their own (poor-quality) images to find similar ones. In both scenarios, a stronger visual recognition tool (say, snapping a picture of something and then using it as a search ‘term’) could come in handy.

    Google has made other acquisitions in France, including FlexyCore (for improving smartphone performance). It has also made a number of acquisitions to improve its tech in imaging, such as JetPac and PittPatt for facial recognition. And other large tech companies are also buying up technology and talent in this area. Earlier this year, it emerged that Amazon had quietly acquired Orbeus, a startup that also develops photo recognition tech, with its service tapping AI and neural networks.

    Source: Techcrunch.com


  • How to use AI image recognition responsibly?
    The use of artificial intelligence (AI) for image recognition offers great potential for business transformation and problem-solving. But numerous responsibilities are interwoven with that potential. Predominant among them is the need to understand how the underlying technologies work, and the safety and ethical considerations required to guide their use.

    Are regulations coming for image, face, and voice recognition?

    Today, governance regulations have sprung up worldwide that dictate how an individual’s personal information is held and used, and who owns it. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are examples of regulations designed to address data and security challenges faced by consumers and the businesses that possess their associated data. If laws now apply to personal data, can regulations governing image and facial recognition (technology that can identify a person’s face and voice, the most personal ‘information’ we possess) be far behind? Further regulations are likely coming, but organizations shouldn’t wait to plan and direct their utilization. Businesses need to follow how this technology is being both used and misused, and then proactively apply guidelines that govern how to use it effectively, safely, and ethically.

    The use and misuse of technology

    Many organizations use recognition capabilities in helpful and transformative ways. Medical imaging is a prime example: through machine learning, predictive algorithms have come to recognize tumors faster and more accurately than human doctors can. Autonomous vehicles use image recognition to detect road signs, traffic signals, other traffic, and pedestrians. For industrial manufacturers and utilities, machines have learned to recognize defects in things like power lines, wind turbines, and offshore oil rigs through the use of drones. This ability removes humans from what can sometimes be dangerous environments, improving safety, enabling preventive maintenance, and increasing the frequency and thoroughness of inspections. In the insurance field, machine learning helps process claims for auto and property damage after catastrophic events, which improves accuracy and limits the need for humans to put themselves in potentially unsafe conditions.

    Just as most technologies can be used for good, there are always those who seek to use them intentionally for ignoble or even criminal reasons. The most obvious example of the misuse of image recognition is deepfake video or audio. Deepfake video and audio use AI to create misleading content or alter existing content to try to pass off something as genuine that never occurred. An example is inserting a celebrity’s face onto another person’s body to create a pornographic video. Another example is using a politician’s voice to create a fake audio recording that seems to have the politician saying something they never actually said.

    In between intentional beneficial use and intentional harmful use lie gray areas and unintended consequences. If an autonomous vehicle company used only one country’s road signs as the data to teach the vehicle what to look for, the results might be disastrous if the technology were used in another country where the signs are different. Also, governments use cameras to capture on-street activity. Ostensibly, the goal is to improve citizen safety by building a database of people and identities. What are the implications for a free society that now seems to be under public surveillance? How does that change expectations of privacy? What happens if that data is hacked?

    Why take proactive measures?

    Governments and corporate governance bodies will likely create guidelines and laws that apply to these types of tools. There are a number of reasons why businesses should proactively plan for how they create and use these tools now, before those laws come into effect.

    Physical safety is a prime concern. If an organization creates or uses these tools in an unsafe way, people could be harmed. Setting up safety standards and guidelines protects people and also protects the business from legal action that may result from carelessness.

    Customers demand accountability from companies that use these technologies. They expect their personal data to be protected, and that expectation will extend to their image and voice information as well. Transparency helps create trust, and that trust will be necessary for any business to succeed in the field of image recognition.

    Putting safety and ethics guidelines in place now, including establishing best practices such as model audits and model interpretability, may also give a business a competitive advantage by the time laws governing these tools are passed. Other organizations will be playing catch-up while those who have planned ahead gain market share over their competitors.
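    One of the audit practices mentioned above can be as simple as checking that a model’s accuracy holds up across subgroups of its inputs rather than only in aggregate. A minimal sketch in Python (the log data, group names, and labels here are hypothetical, purely for illustration):

```python
# A minimal sketch of one "model audit" practice: checking that a
# classifier's accuracy is consistent across subgroups of the data.
# All data below is illustrative, not from any real system.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: accuracy} so disparities between groups stand out."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth == pred:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit log: predictions recorded for two subgroups.
log = [
    ("group_a", "cat", "cat"), ("group_a", "dog", "dog"),
    ("group_a", "cat", "cat"), ("group_a", "dog", "cat"),
    ("group_b", "cat", "dog"), ("group_b", "dog", "dog"),
    ("group_b", "cat", "dog"), ("group_b", "dog", "dog"),
]

print(accuracy_by_group(log))  # → {'group_a': 0.75, 'group_b': 0.5}
```

    A gap like the one above (75% vs. 50%) is the kind of disparity a periodic audit is meant to surface before regulators or customers do.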

    Author: Bethann Noble

    Source: Cloudera
