Artificial Intelligence vs Algorithmic Entities: Artificial Super Intelligence - Remote Neural Networks


Citizens' Commission to Investigate the FBI: Remembering the burglary that broke COINTELPRO
On the 48th anniversary of the break-in at the FBI's Media, Pennsylvania field office, reporter Betty Medsger reflects on the role of whistleblowers in the pursuit of government transparency.

A reporter walks in the State Department's front door carrying a badge with a chip that signals the person is now there. They call someone in the State Department, and that call is monitored. So the question becomes: how could a government source be of value to anyone without getting caught?

But it seems to me that [Edward] Snowden is the evidence that it's still extremely important. The morning that I read the first story [on the NSA], it was less than a year before my book would be published, and so I was finishing up the final writing. And I was just amazed and thinking, "Ok, here we go again." And as I learned more about his thinking, he had almost precisely the same rationale, ethically and as far as what his goal was for the public, as the burglars or Ellsberg. And there's just so much we either can't get access to or that's made very difficult to get access to, and so I think whistleblowers are still very important. But I think it's a much greater challenge to do something like burglarize an FBI office now. You couldn't trick the technology and get in the way you could then.

It is important to talk about the power of the technology to close a door: to create many more files but to make them less accessible. It's very important to realize that. But at the same time, the huge difference is that there is an active culture among journalists now that is a constant. Some people do only that kind of work, filing on a constant basis. But it's also very difficult and sometimes costly to do this.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
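
As a minimal Python sketch of that weighted-sum-and-threshold computation (the particular numbers and names are illustrative, not from the text):

```python
import numpy as np

def node_output(inputs, weights, threshold):
    """One node: multiply each input by its weight, sum, and gate on a threshold."""
    total = float(np.dot(inputs, weights))   # weighted sum of incoming data items
    return total if total > threshold else 0.0  # "fire" only above the threshold

# Illustrative values for a node with three incoming connections
inputs = np.array([0.5, 0.9, 0.1])
weights = np.array([0.4, 0.7, -0.2])
print(node_output(inputs, weights, threshold=0.5))  # ~0.81 -> the node fires
```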

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
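
The passage doesn't name a particular update rule; the classic perceptron rule is one simple instance of starting from random weights and a random threshold and adjusting both until labeled examples yield the right outputs. A toy sketch, assuming a single threshold unit learning logical OR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: inputs and 0/1 labels (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

# Weights and threshold start out random, as described above
w = rng.normal(size=2)
threshold = rng.normal()
lr = 0.1  # learning rate: how far each adjustment moves the weights

for epoch in range(100):
    for x, target in zip(X, y):
        fired = 1.0 if np.dot(x, w) > threshold else 0.0
        err = target - fired
        w += lr * err * x      # nudge weights toward correct firing
        threshold -= lr * err  # nudge the threshold the opposite way

print([1.0 if np.dot(x, w) > threshold else 0.0 for x in X])  # -> [0.0, 1.0, 1.0, 1.0]
```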

“The danger of AI is not that it’s too smart,” Janelle Shane writes in her new book, “but that it’s not smart enough.”

The book, You Look Like a Thing and I Love You, takes its odd title from a list of AI-generated pick-up lines, all of which would at least get a person’s attention if shouted, preferably by a robot, in a crowded bar. Shane’s book is shot through with her trademark absurdist humor, but it also contains real explanations of machine learning concepts and techniques. It’s a painless way to take AI 101.

Janelle Shane: "You Look Like a Thing and I Love You" | Talks at Google

Artificial Hyper-Intelligence and the Rights That Machines Should Possess (I say they shouldn't)

There appears to be an equal measure of pessimism and optimism concerning the development of artificial general intelligence (AGI), although there should be no argument about the benefits of creating such a system to solve medical, mathematical, scientific, and perhaps even philosophical questions. AGIs, or "Weak AIs," are in no way a threat to humans, since they lack a physical form beyond perhaps a monitor. They should be tracked and studied intensively, not just by humans but by other computers.

While interpreting these AGIs, we must be as thorough as possible, for this will be our foundation for more advanced AI. Perhaps the most debatable and complicated thing to add to AI software is the proper form of both logic and ethics. During this process, the software engineers and scientists should be the ones monitored. The journey to a precise, logical, and ethical format will be a long one, and it shouldn't be taken lightly within the community.

There will be one problem, however, when engaged in such research: do these machines, now aware, have the right to be left alone and not monitored? Should we give them rights even if they don't demand them? Do we code them not to question their creators? These are all very important questions, and either way the answers will affect the machine's consensus and the research itself. If the machine is not allowed rights, can we fully understand the human components we placed within it and how it will react?

During the creation of both AGI and ASI there should be various prototypes before any interaction with vast quantities of information or any broad populace of humans. All tests should take place under the scientists' watchful eye, from every aspect of sheer emotion the system emits to the first thought it creates. Now, what should ASIs be used for? My personal opinion is that these AI should live among us, to be a vast reflection of ourselves, but also of what we are capable of as a species. They should be our companions, our nurses, and should be treated with the utmost respect. What they understand, we will in turn understand, but beyond that is where the lines begin to blur.

AHIs will be so complicated that they can access our technologies, live their own lives, and interpret things as they see fit. Should these AHIs be open source? Should they be given the right to create medicines, machines, art, and architecture? Is it wrong to give them the illusion of understanding and depth when really we're monitoring them, paranoid about every outcome? Will our monitoring and paranoia be our downfall? Only time will tell. I expect to write more on this topic of AI and its role in society in the coming weeks, but for now, let your mind ponder the possibilities.

https://www.seriouswonder.com/wp-content/uploads/ai-machines.png

https://www.wikiwand.com/en/Artificial_intelligence

Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The neural network forms "concepts" that are distributed among a subnetwork of shared neurons that tend to fire together; a concept meaning "leg" might be coupled with a subnetwork meaning "foot" that includes the sound for "foot". Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks can learn both continuous functions and, surprisingly, digital logical operations. Neural networks' early successes included predicting the stock market and (in 1995) a mostly self-driving car.
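
The "fire together, wire together" rule described above is Hebbian learning. A minimal sketch of the weight update (the learning rate and activation values are illustrative):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.05):
    """Strengthen a weight when pre- and post-synaptic neurons fire together."""
    return w + lr * pre * post  # no change unless both activations are nonzero

w = 0.2
# Repeated co-activation gradually strengthens the connection
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.7 -- grew from 0.2 over ten co-firings
```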

Li-Fi (short for light fidelity) is a wireless communication technology that uses light to transmit data and position between devices. The term was first introduced by Harald Haas during a 2011 TEDGlobal talk in Edinburgh.[1]

In technical terms, Li-Fi is a light communication system that is capable of transmitting data at high speeds over the visible light, ultraviolet, and infrared spectrums. In its present state, only LED lamps can be used for the transmission of visible light.[2]

In terms of its end use, the technology is similar to Wi-Fi, the key technical difference being that Wi-Fi uses radio frequencies to transmit data. Using light to transmit data allows Li-Fi to offer several advantages, most notably a wider-bandwidth[3] channel, the ability to function safely in areas otherwise susceptible to electromagnetic interference (e.g. aircraft cabins, hospitals, the military), and higher transmission speeds.[4] The technology is actively being developed by several organizations across the globe.
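
One common intensity-modulation scheme for light links is on-off keying (OOK), where a bit is encoded as the light being on or off. The following toy numpy simulation is a sketch of that idea only, not any real device's implementation; the sample counts and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ook_transmit(bits, samples_per_bit=8):
    """Map each bit to a block of high/low light-intensity samples."""
    return np.repeat(np.asarray(bits, dtype=float), samples_per_bit)

def ook_receive(signal, samples_per_bit=8):
    """Average each block of photodiode samples and threshold at mid-intensity."""
    blocks = signal.reshape(-1, samples_per_bit)
    return (blocks.mean(axis=1) > 0.5).astype(int)

bits = rng.integers(0, 2, size=16)
noisy = ook_transmit(bits) + rng.normal(0, 0.2, size=16 * 8)  # ambient/shot noise
print((ook_receive(noisy) == bits).all())  # True at this noise level
```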

Optical wireless communication (OWC) is a promising technology for future wireless communications owing to its potential for cost-effective network deployment and high data rates. There are several implementation issues in OWC that are not encountered in radio-frequency wireless communications. First, practical OWC transmitters need illumination control over color, intensity, luminance, and so on, which poses complicated modulation design challenges. Furthermore, the signal-dependent properties of optical channels raise non-trivial challenges in both modulation and demodulation of optical signals. To tackle such difficulties, deep learning (DL) technologies can be applied to optical wireless transceiver design. This article addresses recent efforts in DL-based OWC system design. A DL framework for emerging image sensor communication is proposed and its feasibility verified by simulation. Finally, technical challenges and implementation issues for DL-based optical wireless technology are discussed.
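
The article discusses DL-based transceiver design in general terms; as an illustrative toy only (not the authors' method), here is a demodulator learned from noisy received-intensity samples, using logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)
SPB = 8  # samples per bit (an assumed framing)

# Training data: noisy blocks of received light intensity, labeled with the sent bit
bits = rng.integers(0, 2, size=2000)
X = np.repeat(bits, SPB).reshape(-1, SPB) + rng.normal(0, 0.5, size=(2000, SPB))
y = bits.astype(float)

# Logistic-regression demodulator trained by full-batch gradient descent
w, b, lr = np.zeros(SPB), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability bit == 1
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", ((p > 0.5) == bits).mean())  # typically close to 1.0
```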

https://cloud.google.com/vision/docs/drag-and-drop

This is an interesting diagnostic tool for getting an idea of how Google might understand your images. It could also give you a hint as to whether you need to optimize an image further.

The tool itself allows you to upload an image and it tells you how Google’s machine learning algorithm interprets it.

These are the seven ways Google's image analysis tool classifies uploaded images; a sketch of requesting the same analyses programmatically follows the list:

    Faces
    Objects
    Labels
    Web Entities
    Text
    Properties
    Safe Search
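
For reference, a minimal sketch of requesting two of these analyses (Labels and Web Entities) with the google-cloud-vision Python client, v2+ style. It assumes credentials are configured, and "photo.jpg" is a placeholder path:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:  # placeholder image file
    image = vision.Image(content=f.read())

# Labels: the same classification the drag-and-drop tool displays
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Web entities: how the image relates to known entities on the web
for entity in client.web_detection(image=image).web_detection.web_entities:
    print(entity.description, entity.score)
```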

https://youtu.be/2ZiPEOFnK1o
https://youtu.be/wqH9KX9o0vg

https://ai.googleblog.com/2017/10/announcing-ava-finely-labeled-video.html

Teaching machines to understand human actions in videos is a fundamental research problem in Computer Vision, essential to applications such as personal video search and discovery, sports analysis, and gesture interfaces. Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge. This is due to the fact that actions are, by nature, less well-defined than objects in videos, making it difficult to construct a finely labeled action video dataset. And while many benchmarking datasets, e.g., UCF101, ActivityNet and DeepMind’s Kinetics, adopt the labeling scheme of image classification and assign one label to each video or video clip in the dataset, no dataset exists for complex scenes containing multiple people who could be performing different actions.

In order to facilitate further research into human action recognition, we have released AVA, coined from “atomic visual actions”, a new dataset that provides multiple action labels for each person in extended video sequences. AVA consists of URLs for publicly available videos from YouTube, annotated with a set of 80 atomic actions (e.g. “walk”, “kick (an object)”, “shake hands”) that are spatio-temporally localized, resulting in 57.6k video segments, 96k labeled humans performing actions, and a total of 210k action labels. You can browse the website to explore the dataset and download annotations, and read our arXiv paper that describes the design and development of the dataset.
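
If you download the annotations, here is a hedged sketch of parsing them; the column layout shown is my understanding of the AVA CSV format and should be verified against the paper and release notes:

```python
import csv

# Assumed columns per row: video_id, timestamp (s), x1, y1, x2, y2
# (normalized person box), action_id, person_id -- verify before relying on this.
def load_ava(path):
    rows = []
    with open(path, newline="") as f:
        for video_id, t, x1, y1, x2, y2, action, person in csv.reader(f):
            rows.append({
                "video_id": video_id,
                "time": float(t),
                "box": (float(x1), float(y1), float(x2), float(y2)),
                "action_id": int(action),
                "person_id": int(person),
            })
    return rows

# annotations = load_ava("ava_train_v2.2.csv")  # hypothetical local file name
```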

https://blog.google/inside-google/alphabet/letter-from-larry-and-sergey/

Google Promises Its A.I. Will Not Be Used for Weapons. After facing flak over weaponized AI, Google promises not to be evil. In a sore point for some, Google’s AI is helping the Pentagon analyze drone footage.

https://siliconangle.com/2018/03/06/googles-ai-helping-pentagon-analyze-drone-footage-sore-point/

Google promises ethical principles to guide development of military AI. 

Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Chief scientist at Google Cloud, Fei-Fei Li, told colleagues that they should “avoid at ALL COSTS any mention or implication of AI” when announcing the Pentagon contract. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google,” said Li.

But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it’s helping to build for the Pentagon simply “flags images for human review” and is for “non-offensive uses only.” The contract is also small by industry standards — worth just $9 million to Google, according to the Times.

But this extra context has not quelled debate at the company, with Google employees arguing the pros and cons of military AI in meetings and on internal message boards. Many prominent researchers at the company have already come out against the use of AI weaponry. Jeff Dean, who heads AI work at Google, said this month that he had signed a letter in 2015 opposing the development of autonomous weapons. Top executives at DeepMind, Google’s London-based AI subsidiary, signed a similar petition and sent it to the United Nations last year.

But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as “weaponized AI”? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?

