# The Montante method

The Montante method, named after its discoverer, René Mario Montante Pardo, is a linear-algebra algorithm for solving systems of linear equations and for finding matrix inverses, adjugate matrices, and determinants.

The method was devised in 1973 by René Mario Montante Pardo, a graduate of the Facultad de Ingeniería Mecánica y Eléctrica. Its defining feature is that it works entirely with integers, which makes the result exact even when computed on a machine, since it avoids rounding of intermediate values.

• Method

New element = ((current pivot) × (current element) − (element in the pivot's column) × (element in the pivot's row)) / (previous pivot)

Steps of the Montante method

♦ Example

4x + 3y − 2z = 5

1x − 1y + 1z = 3

2x + y + 2z = −2

Step #1

Identify the first pivot number

3     -2        …
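The update rule and the worked example above can be sketched as a small Python routine. This is a minimal sketch of the Montante (Bareiss) elimination on an integer augmented matrix; `montante_solve` and its argument layout are my own illustrative names, not from the original post:

```python
from fractions import Fraction

def montante_solve(aug):
    """Solve a linear system via the Montante (Bareiss) method.

    `aug` is the integer augmented matrix [A | b]. Every intermediate
    value stays an integer, so no rounding error is introduced.
    """
    m = [row[:] for row in aug]   # work on a copy
    n = len(m)                    # number of equations
    prev_pivot = 1                # the "previous pivot" starts at 1
    for k in range(n):
        pivot = m[k][k]
        if pivot == 0:
            raise ZeroDivisionError("zero pivot; a row swap would be needed")
        for i in range(n):
            if i == k:
                continue          # the pivot row is left unchanged
            for j in range(n + 1):
                if j == k:
                    continue
                # core Montante update; this division is always exact
                m[i][j] = (pivot * m[i][j] - m[i][k] * m[k][j]) // prev_pivot
            m[i][k] = 0           # the pivot column becomes zero
        prev_pivot = pivot
    det = m[n - 1][n - 1]         # the final pivot is det(A)
    sols = [Fraction(m[i][n], det) for i in range(n)]
    return det, sols
```

Running it on the example system gives determinant −18 and the exact solution x = 41/18, y = −8/3, z = −35/18.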


# Intelligence and consciousness. Artificial intelligence and conscious robots? Soul and immortality.

Much recent attention has been given to studies of the brain and questions of artificial intelligence and consciousness. Here I discuss some relevant aspects from a philosophical point of view.

The famous physicist and proponent of string theory, Brian Greene, recently (March/April 2016) participated in a discussion on ABC (Australian) television dealing with important recent developments in science. He spoke about exciting developments in string theory, the theory of multiverses, and artificial intelligence. Defending the value of pure science and the importance of supporting scientific work even when it has no foreseeable practical (i.e. economic) applications, he asked the audience whether they would not be interested in knowing if there are duplicates of themselves in other universes, and whether they would not like to become immortal by having their consciousness imprinted in silicon molecules, thus preserving it for eternity (my phrase). He (Greene) certainly would!

Stephen Wolfram, the famous…


# Artificial intelligence and dangerous robots: barking up the wrong tree

Some famous people, among them the eminent physicist Stephen Hawking, have warned of the danger posed by artificial intelligence, which may soon surpass that of humans and take over the world, making us superfluous and dispensable. But is this indeed so?

My answer: intelligent machines (robots, computers) can indeed turn out to be extremely dangerous, but largely because of the dangerous uses to which they are put by humans. For example, look at drones as they exist at this moment: fairly simple, and certainly not highly intelligent relative to what they could be in the not-too-distant future, yet extremely dangerous if used for the wrong purposes in warfare or even in civil life. Drones have several times come very close to civilian airliners, and disaster was only narrowly avoided. However, nobody would dream of attributing to them an intelligence approaching that of humans, or feelings like an urge to take…


# Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems

#### Introduction

There has been much recent discussion about AI risk, meaning specifically the potential pitfalls (both short-term and long-term) that AI with improved capabilities could create for society. Discussants include AI researchers such as Stuart Russell, Eric Horvitz, and Tom Dietterich, entrepreneurs such as Elon Musk and Bill Gates, and research institutes such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI); the director of the latter institute, Nick Bostrom, has even written a bestselling book on this topic. Finally, ten million dollars in funding have been earmarked for research on ensuring that AI will be safe and beneficial. Given this, I think it would be useful for AI researchers to discuss the nature and extent of risks that might be posed by increasingly capable AI systems, both short-term and long-term. As a PhD student in machine learning and artificial intelligence, this essay…


# The Infinite Hotel!


When I was studying Calculus I in the Facultad de Ciencias with Jefferson (long live Jeff!), he opened the course with a story I have always remembered as "the tale of the infinite hotels." Thanks to it I came to understand a great deal about infinities, and it is part of the foundation every mathematician should have.

I have been preparing some posts about the Cantor set and other odd little things I come across, but since I need to research and organize everything properly before publishing, they will take a bit more time. So here is the tale of the infinite hotels that I have enjoyed so much, and I hope you enjoy it too.

The extraordinary hotel, or the thousand-and-first journey of Ion the Quiet, by Stanislaw Lem

I got home quite late; the meeting at the "Andromeda Nebula" club had dragged on past midnight. Terrible nightmares haunted me when…


# “Hacking” Wargames

### Who doesn’t love games?

What if you could learn Linux while playing a type of game? No, there aren't pretty graphics, but just like any other game, you get instant gratification and a great feeling of accomplishment. Depending on the site/layout you might even get points, leaderboards, achievements, avatars… some of the high-end and popular ones even have cash prizes! Sound fun? Let's take a look at what a wargame is first, then where to find them, and a few recommendations.

### What are “war games”?

If you're thinking of "Toy Soldier" or "Call of Duty", then you're nowhere near what I'm talking about. "WarGames" are Linux hacking exercises that are laid out like a game. I know of several types of these wargames, and they vary widely in how you go about executing them. Let's go over a few types.


# Why You Should Use Cross-Entropy Error Instead Of Classification Error Or Mean Squared Error For Neural Network Classifier Training

When using a neural network to perform classification and prediction, it is usually better to use cross-entropy error than classification error, and somewhat better to use cross-entropy error than mean squared error to evaluate the quality of the neural network. Let me explain. The basic idea is simple but there are a lot of related issues that greatly confuse the main idea. First, let me make it clear that we are dealing only with a neural network that is used to classify data, such as predicting a person’s political party affiliation (democrat, republican, other) from independent data such as age, sex, annual income, and so on. We are not dealing with a neural network that does regression, where the value to be predicted is numeric, or a time series neural network, or any other kind of neural network.

Now suppose you have just three training data items. Your neural network…
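The distinction between the three error measures can be sketched numerically. The probabilities below are invented for illustration (they are not the article's data): two hypothetical networks classify the same two of three items correctly, so their classification error is identical, yet network B is far more confident in its correct answers, and cross-entropy (and MSE) reflect that while classification error cannot:

```python
import math

# One-hot targets for three training items over 3 classes, plus the
# predicted probabilities from two hypothetical networks A and B.
targets = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

preds_a = [[0.4, 0.3, 0.3],    # barely right
           [0.3, 0.4, 0.3],    # barely right
           [0.4, 0.3, 0.3]]    # wrong

preds_b = [[0.9, 0.05, 0.05],  # confidently right
           [0.05, 0.9, 0.05],  # confidently right
           [0.4, 0.3, 0.3]]    # wrong

def classification_error(targets, preds):
    # fraction of items whose predicted class (argmax) is wrong
    wrong = sum(t.index(max(t)) != p.index(max(p))
                for t, p in zip(targets, preds))
    return wrong / len(targets)

def mean_squared_error(targets, preds):
    # sum of squared differences across all outputs, averaged per item
    return sum((ti - pi) ** 2
               for t, p in zip(targets, preds)
               for ti, pi in zip(t, p)) / len(targets)

def cross_entropy_error(targets, preds):
    # mean of -sum(t * ln p); only the true class's probability contributes
    return -sum(ti * math.log(pi)
                for t, p in zip(targets, preds)
                for ti, pi in zip(t, p)) / len(targets)
```

Both networks score a classification error of 1/3, but network B's cross-entropy (about 0.47) is far lower than network A's (about 1.01), so cross-entropy distinguishes the better network where classification error sees no difference.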
