Category Archives: Internet

Artificial Intelligence – Depth-First Search (DFS)

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking.

A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux[1] as a strategy for solving mazes.

Algorithmic Thoughts - Artificial Intelligence | Machine Learning | Neuroscience | Computer Vision

Okay! So this is my first blog post!

I will start by talking about the most basic solution to search problems, which are an integral part of artificial intelligence.

What the hell are search problems?

In simple language, search problems consist of a graph, a starting node, and a goal (also a node). Our aim while solving a search problem is to get a path from the starting node to the goal.

Consider the diagram below: we want to get to node G starting from node S.

Which path will we get on solving the search problem? How do we get the path? This is where algorithms come into the picture and answer all our questions! We will look at Depth-First Search, which can be seen as a brute-force method of solving a search problem.
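To make this concrete, here is a minimal DFS sketch in Python. The graph is a stand-in for the diagram (only the names S and G come from the post; the other nodes and edges are invented for illustration), and the function returns the first path it finds, not necessarily the shortest one.

def dfs(graph, start, goal, path=None, visited=None):
    """Return the first path found from start to goal, or None."""
    if path is None:
        path, visited = [start], {start}
    if start == goal:
        return path
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            visited.add(neighbor)
            result = dfs(graph, neighbor, goal, path + [neighbor], visited)
            if result is not None:
                return result
    return None

# Adjacency-list representation of a small, made-up search problem.
graph = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['G'],
    'C': [],
    'D': ['G'],
}

print(dfs(graph, 'S', 'G'))  # -> ['S', 'A', 'D', 'G']

That brute-force flavour is exactly why DFS can return a long, roundabout path even when a shorter one exists.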

Creating the search tree

So how do we simplify this problem? If we…


open data

At the end of the month, world governments will convene at the UN COP21 conference in Paris for the next round of binding emission commitments aimed at restricting global warming to no more than two degrees by the end of the century.

When it comes to agreeing potentially tougher targets, both policymakers and members of the public will now be armed with the COP21 climate change calculator, developed by Climate-KIC, the EU’s main climate innovation research centre, in collaboration with Imperial College London and FT.com.

Using data on the emission reduction pledges made to date and scientific forecasts on future warming, it aims to inform the public and policymakers on the impact a variety of choices by individual countries would have on overall global warming.

The Pause

Let’s be clear: The planet is still getting hotter. The so-called pause, or hiatus, in global warming means the rate of temperature rise has slowed. The average global temperature is still going up, but in the past 10 to 15 years it hasn’t been going up as quickly as it was in the decades before.

Although the ongoing increase is trouble, a slower rate is preferable. The question is: Why did the slowdown occur—and how long will it last?

Separate work by Mann, presented in a Scientific American article he wrote last April, also indicates that the pause will not last long. Mann calculated that if the world continues to burn fossil fuels at the current rate, global warming will reach two degrees Celsius by 2036 (compared with preindustrial levels), crossing a threshold that would harm human civilization. And even if the pause persists for longer than expected, the world would cross that line in 2046. The article includes a monumental graph showing all the details. Mann also published the data sources and formula he used on Scientific American’s Web site, so anyone can replicate his calculations.

Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures

  1. Byron A. Steinman (1,*)
  2. Michael E. Mann (2)
  3. Sonya K. Miller (2)

Author affiliations:

  1. Large Lakes Observatory and Department of Earth and Environmental Sciences, University of Minnesota Duluth, Duluth, MN, USA.
  2. Department of Meteorology and Earth and Environmental Systems Institute, Pennsylvania State University, University Park, PA, USA.
  * Corresponding author. E-mail: bsteinma@d.umn.edu

The recent slowdown in global warming has brought into question the reliability of climate model projections of future temperature change and has led to a vigorous debate over whether this slowdown is the result of naturally occurring, internal variability or forcing external to Earth’s climate system. To address these issues, we applied a semi-empirical approach that combines climate observations and model simulations to estimate Atlantic- and Pacific-based internal multidecadal variability (termed “AMO” and “PMO,” respectively). Using this method, the AMO and PMO are found to explain a large proportion of internal variability in Northern Hemisphere mean temperatures. Competition between a modest positive peak in the AMO and a substantially negative-trending PMO is seen to produce a slowdown or “false pause” in warming of the past decade.
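The abstract describes the approach only at a high level, so the following is nothing more than a toy Python sketch of the residual idea it mentions: subtract a model-estimated forced response from an observed temperature series and smooth what is left to isolate the multidecadal component. The array names, the 40-year window, and the synthetic input series are invented for illustration; they are not the authors' data or code.

import numpy as np

def multidecadal_residual(observed, forced_response, window=40):
    """Smoothed difference between observations and the forced response.
    Both inputs are annual temperature anomalies of the same length."""
    residual = np.asarray(observed) - np.asarray(forced_response)
    kernel = np.ones(window) / window          # simple moving average
    return np.convolve(residual, kernel, mode='same')

# Synthetic stand-ins: a linear "forced" trend plus a 65-year oscillation.
years = np.arange(1850, 2015)
forced = 0.006 * (years - 1850)
observed = forced + 0.2 * np.sin(2 * np.pi * (years - 1850) / 65.0)
internal = multidecadal_residual(observed, forced)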

Climate Oscillations and the Global Warming Faux Pause


No, climate change is not experiencing a hiatus. No, there is not currently a “pause” in global warming.

Despite widespread claims to the contrary in contrarian circles, human-caused warming of the globe proceeds unabated. Indeed, the most recent year (2014) was likely the warmest year on record.

It is true that Earth’s surface warmed a bit less than models predicted it to over the past decade-and-a-half or so. This doesn’t mean that the models are flawed. Instead, it points to a discrepancy that likely arose from a combination of three main factors (see the discussion in my piece last year in Scientific American). First, the actual warming that has occurred was likely underestimated, owing to gaps in the observational data. Second, model simulations have omitted some natural factors (low-level but persistent volcanic eruptions and a small dip in solar output) that had a slight cooling influence on Earth’s climate. Finally, there is the possibility that internal, natural oscillations in temperature may have masked some surface warming in recent decades, much as an outbreak of Arctic air can mask the seasonal warming of spring during a late-season cold snap. One could call it a global warming “speed bump”. In fact, I have.

Some have argued that these oscillations contributed substantially to the warming of the globe in recent decades. In an article my colleagues Byron Steinman, Sonya Miller and I have in the latest issue of Science magazine, we show that internal climate variability instead partially offset global warming.

– See more at: http://www.realclimate.org/index.php/archives/2015/02/climate-oscillations-and-the-global-warming-faux-pause/

Alpha–beta pruning

The Alpha-Beta algorithm (Alpha-Beta Pruning, Alpha-Beta Heuristic [1]) is a significant enhancement to the minimax search algorithm that eliminates the need to search large portions of the game tree by applying a branch-and-bound technique. Remarkably, it does this without any potential of overlooking a better move. If one has already found a quite good move and searches for alternatives, one refutation is enough to avoid it; there is no need to look for even stronger refutations. The algorithm maintains two values, alpha and beta.


Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.

Allen Newell and Herbert A. Simon, who used what John McCarthy calls an “approximation”[2] in 1958, wrote that alpha–beta “appears to have been reinvented a number of times”.[3] Arthur Samuel had an early version, and Richards, Hart, Levine and/or Edwards found alpha–beta independently in the United States.[4] McCarthy proposed similar ideas during the Dartmouth Conference in 1956 and suggested it to a group of his students, including Alan Kotok at MIT in 1961.[5] Alexander Brudno independently discovered the alpha–beta algorithm, publishing his results in 1963.[6] Donald Knuth and Ronald W. Moore refined the algorithm in 1975,[7][8] and Judea Pearl proved its optimality in 1982.[9]

Improvements over naive minimax

An illustration of alpha–beta pruning. The grayed-out subtrees need not be explored (when moves are evaluated from left to right), since we know the group of subtrees as a whole yields the value of an equivalent subtree or worse, and as such cannot influence the final result. The max and min levels represent the turn of the player and the adversary, respectively.

The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be eliminated. This way, the search time can be limited to the ‘more promising’ subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near optimal order (best choice for side on move ordered first at each node).

With an (average or constant) branching factor of b and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b·b·…·b) = O(b^d), the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b·1·b·1·…·b) for odd depth and O(b·1·b·1·…·1) for even depth, or O(b^(d/2)) = O(√(b^d)). In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.[10] The explanation of b·1·b·1·… is that all the first player’s moves must be studied to find the best one, but for each, only the best second player’s move is needed to refute all but the first (and best) first player move; alpha–beta ensures no other second player moves need be considered. When nodes are ordered at random, the average number of nodes evaluated is roughly O(b^(3d/4)).[2]
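As a quick back-of-the-envelope check of those bounds, take a chess-like branching factor of b = 40 and a depth of d = 8 plies (both numbers chosen purely for illustration):

b, d = 40, 8
print(b ** d)                    # pessimal ordering / plain minimax: 6,553,600,000,000 leaves
print(b ** (d // 2))             # perfect move ordering: 2,560,000 leaves
print(round(b ** (3 * d / 4)))   # random ordering: roughly 4,096,000,000 leaves

In other words, good move ordering is the difference between an impossible search and a routine one at the same depth.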

An animated example (not reproduced here) makes this more approachable by substituting initial infinite (or arbitrarily large) values for emptiness and by avoiding the negamax coding simplifications.

Normally during alpha–beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.

The algorithm maintains two values, alpha and beta, which represent the maximum score that the maximizing player is assured of and the minimum score that the minimizing player is assured of, respectively. Initially alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. It can happen that, when choosing a certain branch of a certain node, the minimum score that the minimizing player is assured of becomes less than the maximum score that the maximizing player is assured of (beta ≤ alpha). If this is the case, the parent node should not choose this node, because it would make the score for the parent node worse. Therefore, the other branches of the node do not have to be explored.

Additionally, this algorithm can be trivially modified to return an entire principal variation in addition to the score. Some more aggressive algorithms such as MTD(f) do not easily permit such a modification.

Pseudocode

function alphabeta(node, depth, α, β, maximizingPlayer)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if maximizingPlayer
        v := -∞
        for each child of node
            v := max(v, alphabeta(child, depth - 1, α, β, FALSE))
            α := max(α, v)
            if β ≤ α
                break (* β cut-off *)
        return v
    else
        v := ∞
        for each child of node
            v := min(v, alphabeta(child, depth - 1, α, β, TRUE))
            β := min(β, v)
            if β ≤ α
                break (* α cut-off *)
        return v

(* Initial call *)
alphabeta(origin, depth, -∞, +∞, TRUE)
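To make the pseudocode concrete, here is a runnable Python version. The game tree is a made-up, two-ply example given as nested lists whose leaves are heuristic values; it is only meant to show the cut-offs firing, not any particular game.

import math

def alphabeta(node, depth, alpha, beta, maximizing_player):
    # Leaves (or exhausted depth) return their heuristic value directly.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing_player:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break  # beta cut-off
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cut-off
        return value

# Initial call: the maximizer picks the branch whose worst (minimizer-chosen) leaf is largest.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, -math.inf, math.inf, True))  # -> 6; the leaf 2 is never evaluated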

Privly

A new tool under development by Oregon State computer scientists could radically alter the way that communications work on the web. Privly is a sort of manifesto-in-code, a working argument for a more private, less permanent Internet.

The system we have now gives all the power to the service providers. That seemed to be necessary, but Privly shows that it is not: Users could have a lot more power without giving up social networking. Just pointing that out is a valuable contribution to the ongoing struggle to understand and come up with better ways of sharing and protecting ourselves online.

“Companies like Twitter, Google, and Facebook make you choose between modern technology and privacy. But the Privly developers know this to be a false choice,” lead developer Sean McGregor says in a video about the project. “You can communicate through the site of your choosing without giving the host access to your content.”

Through browser extensions, Privly allows you to post to social networks and send email without letting those services see “into” your text. Instead, your actual words get encrypted and then routed to Privly’s servers (or an eventual peer-to-peer network). What the social media site “sees” is merely a link that Privly expands in your browser into the full content. Of course, this requires that people who want to see your content also have Privly installed on their machines.
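Privly’s actual implementation isn’t shown here, but the link-indirection idea can be sketched in a few lines of Python. Everything in the sketch is hypothetical: the in-memory store stands in for Privly’s servers, the URL format is invented, and putting the key in the URL fragment (which browsers never send to the host) is just one way the “host never sees the content” property could be achieved.

from cryptography.fernet import Fernet

store = {}  # stand-in for a content server (or an eventual peer-to-peer network)

def share(plaintext: str) -> str:
    """Encrypt a message, store only the ciphertext, and return a shareable link."""
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(plaintext.encode())
    content_id = str(len(store))
    store[content_id] = token
    # The key rides in the URL fragment, so the social site sees only an opaque
    # link and the content host sees only an opaque blob.
    return f"https://example-content-host.test/posts/{content_id}#{key.decode()}"

def expand(link: str) -> str:
    """What a reader's extension would do: fetch the ciphertext and decrypt locally."""
    location, key = link.split("#", 1)
    content_id = location.rsplit("/", 1)[-1]
    return Fernet(key.encode()).decrypt(store[content_id]).decode()

link = share("meet at noon")   # this link is all the social network ever sees
print(expand(link))            # -> meet at noon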

Google Privacy Policy

We’re getting rid of over 60 different privacy policies across Google and replacing them with one that’s a lot shorter and easier to read. Our new policy covers multiple products and features, reflecting our desire to create one beautifully simple and intuitive experience across Google.
We believe this stuff matters, so please take a few minutes to read our updated Privacy Policy and Terms of Service at http://www.google.com/policies. These changes will take effect on March 1, 2012. 

Got questions?
We’ve got answers.
Visit our FAQ at http://www.google.com/policies/faq to read more about the changes. (We figured our users might have a question or twenty-two.)

employers asking for Facebook passwords

SEATTLE (AP) — Two U.S. senators are asking Attorney General Eric Holder to investigate whether employers asking for Facebook passwords during job interviews are violating federal law, their offices announced Sunday.

Troubled by reports of the practice, Democratic Sens. Chuck Schumer of New York and Richard Blumenthal of Connecticut said they are calling on the Department of Justice and the U.S. Equal Employment Opportunity Commission to launch investigations. The senators are sending letters to the heads of the agencies.

The Associated Press reported last week that some private and public agencies around the country are asking job seekers for their social media credentials. The practice has alarmed privacy advocates, but the legality of it remains murky.

On Friday, Facebook warned employers not to ask job applicants for their passwords to the site so they can poke around on their profiles. The company threatened legal action against applications that violate its long-standing policy against sharing passwords.

A Facebook executive cautioned that if an employer discovers that a job applicant is a member of a protected group, the employer may be vulnerable to claims of discrimination if it doesn’t hire that person.

Personal information such as gender, race, religion and age is often displayed on a Facebook profile; all of these details are protected by federal employment law.

“We don’t think employers should be asking prospective employees to provide their passwords because we don’t think it’s the right thing to do. While we do not have any immediate plans to take legal action against any specific employers, we look forward to engaging with policy makers and other stakeholders, to help better safeguard the privacy of our users,” Facebook said in a statement.

Not sharing passwords is a basic tenet of online conduct. Aside from the privacy concerns, Facebook considers the practice a security risk.

“In an age where more and more of our personal information — and our private social interactions — are online, it is vital that all individuals be allowed to determine for themselves what personal information they want to make public and protect personal information from their would-be employers. This is especially important during the job-seeking process, when all the power is on one side of the fence,” Schumer said in a statement.

Specifically, the senators want to know if this practice violates the Stored Communications Act or the Computer Fraud and Abuse Act. Those two acts, respectively, prohibit intentional access to electronic information without authorization and intentional access to a computer without authorization to obtain information.

The senators also want to know whether two court cases relating to supervisors asking current employees for social media credentials could be applied to job applicants.

“I think it’s going to take some years for courts to decide whether Americans in the digital age have the same privacy rights” as previous generations, American Civil Liberties Union attorney Catherine Crump said in a previous interview with the AP.

The senators also said they are drafting a bill to fill in any gaps that current laws don’t cover.

Maryland and Illinois are considering bills that would bar public agencies from asking for this information.

In California, Democratic Sen. Leland Yee introduced a bill that would prohibit employers from asking current employees or job applicants for their social media user names or passwords. That state measure also would bar employers from requiring access to employees’ and applicants’ social media content, to prevent employers from requiring logins or printouts of that content for their review.

In Massachusetts, state Democratic Rep. Cheryl Coakley-Rivera filed a similar bill Friday that also extends to personal email. Her measure also bars employers from “friending” a job applicant to view protected Facebook profiles or using similar methods for other protected social media websites.

___

Manuel Valdes can be reached at https://twitter.com/ByManuelValdes.

Social networks

In Mexico, nine out of every 10 internet users over the age of 15 use social networks, averaging 6.8 hours of use per month per person, which places the country among the world leaders in social media use, said Ivan Marchant, comScore’s Country Manager for Mexico.