“Hacking” Wargames

Learn some basic Linux with Me

Who doesn’t love games?

What if you could learn Linux while playing a type of game? No, there aren't pretty graphics, but just like any other game you get instant gratification and a great feeling of accomplishment. Depending on the site/layout you might even get points, leaderboards, achievements, avatars… some of the high-end and popular ones even have cash prizes! Sound fun? Let's take a look at what a wargame is first, then where to find them, and a few recommendations.

What are “war games”?

If you're thinking about “Toy Soldier” or “Call of Duty”, then you're nowhere near what I'm talking about. Wargames are Linux hacking exercises that are laid out like a game. I know of several types of these wargames, and they vary widely in how you go about them. Let's go over a few types.

  1. VPN/SSH
  2. VM – Rooting
  3. Online/Browser-based

View original post 963 more words


Why You Should Use Cross-Entropy Error Instead Of Classification Error Or Mean Squared Error For Neural Network Classifier Training

James D. McCaffrey

When using a neural network to perform classification and prediction, it is usually better to use cross-entropy error than classification error, and somewhat better to use cross-entropy error than mean squared error, to evaluate the quality of the neural network. Let me explain. The basic idea is simple, but a number of related issues can obscure it. First, let me make it clear that we are dealing only with a neural network that is used to classify data, such as predicting a person's political party affiliation (democrat, republican, other) from predictor variables such as age, sex, annual income, and so on. We are not dealing with a neural network that does regression, where the value to be predicted is numeric, or with a time-series neural network, or any other kind of neural network.

Now suppose you have just three training data items. Your neural network…
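To make the comparison concrete, here is a minimal sketch of classification error versus mean cross-entropy error (the three targets and outputs below are invented for illustration, not the article's own numbers):

    import math

    # three hypothetical items, three classes: targets are one-hot,
    # outputs are the network's predicted probabilities
    targets = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    outputs = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]]

    # classification error: fraction of items where the largest output
    # is not in the same position as the 1 in the target
    correct = sum(t.index(1) == o.index(max(o)) for t, o in zip(targets, outputs))
    class_error = 1 - correct / len(targets)

    # mean cross-entropy error: average of -ln(probability given to the true class)
    ce_error = -sum(math.log(o[t.index(1)]) for t, o in zip(targets, outputs)) / len(targets)

    print(class_error, ce_error)   # 0.0 and about 0.595

Note that the third item is only barely classified correctly; classification error ignores that entirely, while cross-entropy penalizes it.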

View original post 953 more words

A gentle introduction to Naïve Bayes classification using R

Eight to Late

Preamble

One of the key problems of predictive analytics is to classify entities or events based on a knowledge of their attributes. An example: one might want to classify customers into two categories, say, ‘High Value’ or ‘Low Value,’ based on a knowledge of their buying patterns. Another example: to figure out the party allegiances of representatives based on their voting records. And yet another: to predict the species of a particular plant or animal specimen based on a list of its characteristics. Incidentally, if you haven’t been there already, it is worth having a look at Kaggle to get an idea of some of the real-world classification problems that people tackle using techniques of predictive analytics.

Given the importance of classification-related problems, it is no surprise that analytics tools offer a range of options. My favourite (free!) tool, R, is no exception: it has a plethora of state-of-the-art packages…
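To make the ‘High Value’/‘Low Value’ example concrete, here is a minimal sketch in Python with scikit-learn (rather than R, purely to show the shape of the problem; the customer data is made up):

    from sklearn.naive_bayes import GaussianNB

    # toy customers: [annual spend, visits per month] -> value category
    X = [[1200, 8], [150, 1], [900, 5], [80, 1], [1100, 7], [200, 2]]
    y = ["High Value", "Low Value", "High Value",
         "Low Value", "High Value", "Low Value"]

    model = GaussianNB().fit(X, y)
    print(model.predict([[1000, 6]]))   # ['High Value']

The “naïve” part is the assumption that attributes are independent of one another given the class, which is what makes the model so simple to fit.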

View original post 3,076 more words

A gentle introduction to decision trees using R

Eight to Late

Introduction

Most techniques of predictive analytics have their origins in probability or statistical theory (see my post on Naïve Bayes, for example). In this post I'll look at one that has a more commonplace origin: the way in which humans make decisions. When making decisions, we typically identify the options available and then evaluate them based on criteria that are important to us. The intuitive appeal of such a procedure is in no small measure due to the fact that it can be easily explained through a visual. Consider the following graphic, for example:

Figure 1: Example of a simple decision tree (Courtesy: Duncan Hull)

(Original image: https://www.flickr.com/photos/dullhunk/7214525854, Credit: Duncan Hull)

The tree structure depicted here provides a neat, easy-to-follow description of the issue under consideration and its resolution. The decision procedure is based on asking a series of questions, each of which serves to further reduce the…
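As a quick sketch of that question-by-question procedure (in Python with scikit-learn rather than R, and with toy specimens I made up):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # hypothetical specimens: [petal length, petal width] -> species
    X = [[1.4, 0.2], [1.3, 0.2], [4.7, 1.4], [4.5, 1.5], [6.0, 2.5], [5.9, 2.1]]
    y = ["setosa", "setosa", "versicolor", "versicolor", "virginica", "virginica"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["petal length", "petal width"]))

export_text prints the learned questions as nested splits, mirroring the visual tree the post describes.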

View original post 2,675 more words

Yet Another Lambda Tutorial

Python Conquers The Universe

There are a lot of tutorials[1] for Python’s lambda out there. A very helpful one is Mike Driscoll’s discussion of lambda on the Mouse vs Python blog. Mike’s discussion is excellent: clear, straight-forward, with useful illustrative examples. It helped me — finally — to grok lambda, and led me to write yet another lambda tutorial.


Lambda is a tool for building functions

Lambda is a tool for building functions, or more precisely, for building function objects. That means that Python has two tools for building functions: def and lambda.

Here’s an example. You can build a function in the normal way, using def, like this:
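For instance (an illustrative stand-in, not necessarily the post's own snippet):

    # an ordinary named function, built with def
    def double(x):
        return x * 2

    print(double(5))   # 10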

or you can use lambda:
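The same function as a lambda (again an illustrative stand-in):

    # an equivalent anonymous function object, bound to a name
    double = lambda x: x * 2

    print(double(5))   # 10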

Here are a few other interesting examples of lambda:
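(a few representative uses of my own choosing, not necessarily the post's):

    # lambdas shine as short, throwaway arguments to other functions
    pairs = [(3, "three"), (1, "one"), (2, "two")]
    pairs.sort(key=lambda p: p[0])                    # sort by first element

    squares = list(map(lambda x: x ** 2, range(5)))   # [0, 1, 4, 9, 16]
    evens = list(filter(lambda x: x % 2 == 0, range(10)))
    print(pairs, squares, evens)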


What is lambda good for? Why do we need lambda?

Actually, we don’t absolutely need lambda; we could get along without it. But there are certain situations…

View original post 1,906 more words

Mercer’s Theorem and SVMs

Patterns of Ideas

In a funny coincidence, this post has the same basic structure as my previous one: proving some technical result, and then looking at an application to machine learning. This time it’s Mercer’s theorem from functional analysis, and the kernel trick for SVMs. The proof of Mercer’s theorem mostly follows Lax’s Functional Analysis.

1. Mercer’s Theorem

Consider a real-valued function $K(s,t)$, and the corresponding integral operator $\mathbf{K}: L^2[0,1] \rightarrow L^2[0,1]$ given by

$$(\mathbf{K} u)(s) = \int_0^1 K(s,t)\, u(t)\, dt.$$

We begin with two facts connecting the properties of $K$ to the properties of $\mathbf{K}$.

Proposition 1. If $K$ is continuous, then $\mathbf{K}$ is compact.

Proof: Consider a bounded sequence $\{f_n\}_{n=1}^{\infty} \subset L^2[0,1]$. We wish to show that the image of this sequence, $\{\mathbf{K} f_n\}_{n=1}^{\infty}$, has a convergent subsequence. We show that $\{\mathbf{K} f_n\}$ is equicontinuous, and Arzelà-Ascoli then gives a…
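A finite-dimensional shadow of this machinery is what makes the kernel trick work in practice: a Mercer kernel's Gram matrix on any finite sample is symmetric positive semidefinite, which is exactly what lets an SVM use $K(s,t)$ in place of inner products. A quick numeric sketch (the RBF kernel and sample grid are my own illustrative choices, not the post's):

    import numpy as np

    # Gaussian (RBF) kernel, a standard Mercer kernel
    def rbf(s, t, gamma=10.0):
        return np.exp(-gamma * (s - t) ** 2)

    pts = np.linspace(0, 1, 50)
    gram = rbf(pts[:, None], pts[None, :])   # Gram matrix K(s_i, s_j)

    eigvals = np.linalg.eigvalsh(gram)       # real eigenvalues of a symmetric matrix
    print(eigvals.min() >= -1e-9)            # True: no meaningfully negative eigenvalues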

View original post 848 more words

Picking a colour scale for scientific graphics

Better Figures

Here are some recommendations for making scientific graphics which help your audience understand your data as easily as possible. Your graphics should be striking, readily understandable, should avoid distorting the data (unless you really mean to), and be safe for those who are colourblind. Remember, there are no really “right” or “wrong” palettes (OK, maybe a few wrong ones), but studying a few simple rules and examples will help you communicate only what you intend.

What kind of palettes for maps?

For maps of quantitative data that has an order, use an ordered palette. If the data is sequential, continually increasing or decreasing, then use a brightness ramp (e.g. light to dark shades of grey, blue, or red) or a hue ramp (e.g. cycling from light yellow to dark blue). In general, people interpret darker colours as representing “more”. These colour palettes can be downloaded from Color…
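A minimal matplotlib sketch of the brightness-ramp advice (the random data and the specific colormap are my own illustrative choices):

    import matplotlib.pyplot as plt
    import numpy as np

    data = np.random.rand(20, 20)          # stand-in for an ordered, quantitative field

    fig, ax = plt.subplots()
    im = ax.imshow(data, cmap="Greys")     # light-to-dark ramp: darker reads as "more"
    fig.colorbar(im, ax=ax, label="value")
    plt.show()

Reversing the ramp (cmap="Greys_r") flips which end of the scale reads as “more”.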

View original post 1,117 more words