Saturday, July 2, 2022

Predictim babysitter-screening app blocked by Facebook and Twitter


Image copyright: Getty Images

Image caption: The app analyses social media accounts for posts that may concern parents

An app that rates babysitters by scanning their social media accounts has been blocked by Facebook and Twitter.

Predictim, based in California, offers a service that scans a prospective babysitter's social media activity and produces a score out of five indicating whether they are safe to hire.

It looks for posts about drugs and other concerning content. Critics say algorithms cannot be trusted to advise on a person's employability.

Earlier this month, after discovering the activity, Facebook revoked most of Predictim's access to its users, saying it believed the company was violating its rules on the use of personal data.

Facebook is now investigating whether to block Predictim entirely, as the firm has continued to scrape public Facebook data to feed its algorithms.

"Everyone looks people up on social media; they look people up on Google," said Sal Parsa, the firm's chief executive and co-founder.

"We're just automating that process."

Facebook did not see it that way.

"Scraping people's information on Facebook is against our terms of service," a spokeswoman said.

"We will be investigating Predictim for violations of our terms, including scraping."

Meanwhile, Twitter told the BBC it had "recently" decided to revoke Predictim's access.

"We strictly prohibit the use of Twitter data and APIs for surveillance purposes, including performing background checks," a spokeswoman said by email. "When we became aware of Predictim's services, we conducted an investigation and revoked their access to Twitter's public APIs."

An API (application programming interface) allows different pieces of software to interact with one another. In this case, Predictim would have used the Twitter API to analyse users' tweets.
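To illustrate the general idea, a minimal sketch of such a pipeline is below. This is a hypothetical toy, not Predictim's or Twitter's actual code: the `fetch_timeline` stub, the field names and the keyword list are all invented for illustration, and the real service would make authenticated HTTP requests to the platform's API.

```python
# Hypothetical sketch of a social-media screening pipeline.
# The fetch is stubbed with canned data; a real client would make an
# authenticated request to the platform's API instead.

RISK_KEYWORDS = {"drugs", "bullying", "harassment"}  # illustrative list


def fetch_timeline(username):
    """Stand-in for an API call returning a user's public posts."""
    return [
        {"user": username, "text": "Lovely afternoon at the park"},
        {"user": username, "text": "That party was wild, so many drugs"},
    ]


def flag_posts(posts):
    """Return the posts containing any risk keyword (case-insensitive)."""
    return [
        post for post in posts
        if set(post["text"].lower().split()) & RISK_KEYWORDS
    ]


flagged = flag_posts(fetch_timeline("example_sitter"))
print(len(flagged))  # 1: only the post mentioning "drugs" is flagged
```

Crude keyword matching of this kind is exactly what critics worry about: it has no way to recognise jokes, quotation or sarcasm, the concern raised in the article by the Electronic Frontier Foundation.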

Legal question

Predictim, funded by a scheme run by the University of California, attracted a great deal of attention over the weekend thanks to a front-page story in the Washington Post. In it, experts warned of the risk of algorithms misinterpreting the intent behind a post.

Jamie Williams, of the Electronic Frontier Foundation, told the newspaper: "Kids have inside jokes. They're notoriously sarcastic. Something that could sound like a 'bad attitude' to the algorithm could sound to someone else like a political statement or valid criticism."

Predictim said its system includes human review, meaning flagged messages are checked by hand to guard against unwarranted negative scores. On what it flags, Predictim says it surfaces cases that speak to an individual's respectfulness, esteem and honest behaviour.

The company showed the BBC a sample report, in which users can see the specific posts on social networking sites that lie behind its verdicts.

The service charges $25 to scan one social network profile, with discounts for multiple scans. The company said it was in discussions with "sharing economy" firms about vetting ride-share drivers or home-rental hosts.

"It's not some magic black box," said Mr Parsa. "If the AI flags an individual as abusive, there's proof of why that person is abusive."

The company says its tool is not meant to make hiring decisions on its own, and that the score is only a guide. However, the dashboard on its site uses wording such as: "this person is very likely to display the undesired behaviour (high likelihood of being a bad hire)". In one simulated profile, a candidate was labelled "very risky".

Mr Parsa noted that the company's terms state: "we do not make any warranty as to the accuracy of the report, or whether the subject of the report is suited to your needs."

The legality of companies scraping public data from social networks without permission is currently being tested in court.

Professional networking website LinkedIn is locked in a battle at the US Court of Appeals with HiQ, a firm that scrapes LinkedIn's public data to build its own database. A lower court in California previously ruled in HiQ's favour, allowing it to keep using the data.


Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through the encrypted messaging app Signal: +1 (628) 400-7370
