
CHAOS (chess)

From Wikipedia, the free encyclopedia
Fred Swartz, Vic Berman, Bill Toikka, Ira Ruben and Joe Winograd at the console of a UNIVAC 1108 computer circa 1972.

CHAOS (Chess Heuristics and Other Stuff) is a chess-playing program that was developed by programmers working at the RCA Systems Programming division in the late 1960s. It played competitively in computer chess competitions in the 1970s and 1980s. It differed from other programs of its era in its look-ahead philosophy: it used chess knowledge to evaluate fewer positions and continuations, rather than relying on simple evaluations with deep look-ahead to avoid bad moves.

Introduction


CHAOS was originally developed by Ira Ruben, Fred Swartz, Victor Berman, Joe Winograd and William Toikka while working at RCA in Cinnaminson, NJ.[1] Its name is an acronym for 'Chess Heuristics and Other Stuff.'[1][2] Program development moved to the Computing Center of the University of Michigan when Swartz changed jobs, and Mike Alexander joined the development group. Swartz, Alexander and Berman were continuously group members from that point onward in CHAOS' evolution, as others of the original authors left and new members contributed episodically.[1] Chess Senior Master Jack O'Keefe contributed to CHAOS' development from about 1980 onwards.[3][4]

CHAOS was written in Fortran, except for low-level board representation manipulations written in assembly language or C.[5] Owing to Fortran's portability, it ran on RCA, Univac and IBM-compatible mainframes over its lifetime.

CHAOS hails from the mainframe computing era, when only machines of that capacity could play at a high level. Consequently, development and testing could take place only at off-peak times for the machine's production use. In competition, CHAOS had to run on a dedicated mainframe with a telephone link to the match venue. In its later years, CHAOS ran under MTS on computers on the machine assembly floor of Amdahl Corporation.[2]

Background


Chess and artificial intelligence

Electromechanical maze built by Shannon as an early experiment in machine learning.

Mathematicians Claude Shannon and Alan Turing, working separately, were the first to view playing chess as a challenge for machines. Working for AT&T / Bell Labs, with its access to telephone switching equipment, Shannon built a relay-based machine in 1949 that learned how to work its way through a two-dimensional 5×5-cell maze. Shannon viewed this as an analogue of the way that organisms learn about their natural environment: there is a random element to searching it, a memory element to benefit from the search outcome, and a reward element that reinforces learning when the global outcome is favorable to the organism.[6]

Soon afterward, Shannon wrote a mathematical analysis of the game of chess, published in 1950.[7] As with the maze, he broke game play down into the elements necessary for reinforcement learning. Each board configuration from which a move is to be made is assigned a numerical score. To decide on a move, a player wants to maximize their own position's score after the move and to minimize their opponent's score (a minimax view). Since there are about 32 possible moves at each of the early stages of the game, and about 40 moves and responses in each game, there are about (32 × 32)^40, or roughly 10^120, possible games - an impossibly large set to evaluate completely. Therefore, there must be a way to limit the number of moves to look ahead through in order to find the best one.
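The minimax idea above can be sketched in a few lines. This is an illustrative toy, not code from CHAOS or Shannon: the game tree is a hand-built nested list whose leaves hold scores from the maximizing player's point of view.

```python
# Minimax over a tiny hand-built game tree. Leaves are scored
# positions; interior nodes are lists of the moves available.

def minimax(node, maximizing):
    """Return the best score the player to move can guarantee."""
    if isinstance(node, (int, float)):      # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: each of our three moves allows three replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # → 3: the best guaranteed outcome
```

Even this toy shows the combinatorial problem: the tree grows exponentially with depth, which is why real programs must prune it.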

Reducing the game to these few key elements provided a way to think about human intelligence in general. Shannon became part of a wider group using computing machines to mimic aspects of human intelligence that grew into the general idea of artificial intelligence. (Other members of this group were John McCarthy, Herbert Simon, Allen Newell, Alan Kotok, Alex Bernstein and Richard Greenblatt.)[5] The paradigm that evolved was that there was a quantification of the position on the board into a score, an evaluation method to find favorable outcomes (minimax, later alpha-beta pruning), and a strategy to manage the combinatorial explosion of the look-ahead possibilities.[a]

By the early 1960s, there were computer programs that played chess at a rudimentary level. They used very simple evaluation functions for each position[b] and tried to search as far forward as was practical given the time constraints[c] and available compute power. Naturally, programmers optimized their code to use the available computing resources. This led to a major philosophical divide among chess programs: those that tried to evaluate as many positions as possible, and those that tried to evaluate the most promising move sequences as deeply as possible. CHAOS was firmly in the camp believing that only the most promising moves should be evaluated in depth. Said Swartz, "The 'brute force people' ... look at every (possible move) no matter what garbage it is. Most moves are just terrible, terrible moves, and most computing time is being spent on pure garbage."[2] The program spent more time evaluating each board position in the expectation that it would find the most promising lines of play to explore in depth. In 1983, the then-fastest chess program (Belle) evaluated 110,000 positions per second, and typical programs 1,000–50,000 per second, whereas CHAOS evaluated about 50–100 per second.[5]
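A simple evaluation function of the kind mentioned here, in the spirit of Shannon's 1950 proposal (see note b), might be sketched as follows. The piece lists, pawn-file encoding, and caller-supplied mobility count are illustrative assumptions, not CHAOS's actual board representation; the isolated- and backward-pawn terms of Shannon's scheme are omitted for brevity.

```python
# Shannon-style static evaluation sketch: material values, a
# 1/2-point penalty per doubled pawn, and 0.1 point per legal move.

MATERIAL = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def evaluate(pieces, pawn_files, mobility):
    """pieces: letters of one side's pieces, e.g. ['Q', 'R', 'P'].
    pawn_files: file letter of each pawn, to spot doubled pawns.
    mobility: number of legal moves (computed elsewhere)."""
    score = sum(MATERIAL[p] for p in pieces)
    for f in set(pawn_files):               # doubled-pawn penalty
        score -= 0.5 * (pawn_files.count(f) - 1)
    score += 0.1 * mobility                 # mobility bonus
    return score
```

A side with a queen and doubled a-pawns, and ten legal moves, would score `evaluate(['Q', 'P', 'P'], ['a', 'a'], 10)`, i.e. 11 in material, minus 0.5, plus 1.0 for mobility.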

Checkers

From about 1949 onward, Arthur Samuel began work for IBM on machine learning,[10] culminating in a checkers-playing program in 1952[11] and publications on the topic.[11] Concurrently, in 1951 Christopher Strachey created a program to play the board game of checkers,[12] though it had no capacity to learn from its play. Checkers was chosen by both authors because it was simpler than chess yet contained the basic characteristics of an intellectual activity, and, in Samuel's view, was a test-bed in which heuristic procedures and learning processes could be evaluated quickly. Checkers-playing programs introduced the notion of the game tree and of evaluating play to various depths to choose the best move.[12][11]

The complexity of chess, however, promoted it to the status of an analogue for human intelligence, and it attracted the attention of computer scientists, who referred to work on it as research into artificial intelligence (AI).[5] Like checkers, chess required a numerical assessment of each arrangement of pieces on the board. It also required looking ahead to future moves to decide how to play the present position. Because of the enormous number of possible moves, there had to be a way to confine the look-ahead search to the most promising lines of play. From these factors developed the notion of minimax score evaluation[11] and, later, alpha-beta tree pruning, which abandons lines already shown to be worse than one previously examined.
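Alpha-beta pruning can be sketched as a refinement of plain minimax. Again this is an illustrative toy rather than any historical program's code: two bounds are threaded through the search, and a branch is abandoned as soon as it is provably worse than an alternative already examined.

```python
# Alpha-beta pruning: same answer as full minimax, but branches
# that cannot affect the result are cut off early.

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):      # leaf: a scored position
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent will never allow this line
                break
        return best
    best = float('inf')
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:       # we would never choose this line
            break
    return best

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))   # → 3, agreeing with full minimax
```

On this tree the pruned search skips several leaves (for example, once the second branch yields a reply worth 2, its remaining replies are irrelevant) yet returns the same value as exhaustive minimax.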

Chess search strategies


The AI community viewed artificial intelligence as comprising two parts: a way to symbolically quantify the knowledge in hand (a chess board position), and a set of heuristics to limit look-ahead to the consequences of a move. The early chess-playing programs attempted to look forward as far as possible, perhaps to 3 moves ahead by each player, and to choose the best outcome. This led to the horizon effect, whereby a key move 4 or more moves ahead would go unexamined and therefore be missed.[13] Consequently, the programs were quite weak, and heuristics to manage the search became important in their development. CHAOS used a selective search strategy with iterative widening.[14]
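The flavor of selective search with iterative widening can be sketched as follows. This is a minimal illustration of the general idea, not CHAOS's actual algorithm: at each node only the few most promising children, as ranked by a cheap heuristic, are searched, and the search is retried with a wider candidate set if the narrow one produces an unsatisfactory score.

```python
# Selective search: expand only the `width` best-ranked children
# at each node; iterative widening retries with a larger width.

def selective(node, maximizing, width, rank):
    if isinstance(node, (int, float)):      # leaf: a scored position
        return node
    # cheap heuristic picks which candidate moves to examine at all
    kept = sorted(node, key=rank, reverse=maximizing)[:width]
    scores = [selective(c, not maximizing, width, rank) for c in kept]
    return max(scores) if maximizing else min(scores)

def widening_search(node, rank, threshold, max_width):
    """Widen the candidate set until the score looks acceptable."""
    for width in range(1, max_width + 1):
        score = selective(node, True, width, rank)
        if score >= threshold:
            return score, width
    return score, max_width

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
rank = lambda n: n[0] if isinstance(n, list) else n  # toy heuristic
print(widening_search(tree, rank, threshold=3, max_width=3))  # → (3, 2)
```

At width 1 the toy heuristic steers the search into a line worth only 2, so the search widens to 2 candidates per node and finds the line worth 3, while still examining fewer positions than an exhaustive search.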

As chess programs evolved, they incorporated books of opening lines of play from historic sources.[15] Nowadays, book moves are catalogued in machine-readable form, but originally programmers had to type them in.[16] CHAOS had a book that was extensive for its time, around 10,000 moves,[14] which O'Keefe helped to develop. A problem with play from an opening book is the behavior of the program when play leaves the book: the positional advantage may be so subtle that the evaluation scheme cannot understand it, leading to very wide and shallow searches to establish a line of play. The horizon effect again plagues move selection after leaving the book. CHAOS mitigated these problems by using only book lines that it could understand, and by relying on cached analyses of continuations out of the book made while the opponent's clock was running.
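Mechanically, an opening book behaves like a lookup table keyed by the moves played so far; the handful of lines below are invented for illustration and have nothing to do with CHAOS's actual book.

```python
# A toy opening book: the game history so far maps to a reply.
# Once the position is out of book, the engine must fall back to
# its own search.

BOOK = {
    (): 'e4',
    ('e4', 'e5'): 'Nf3',
    ('e4', 'c5'): 'Nf3',                     # e.g. a Sicilian reply
    ('e4', 'e5', 'Nf3', 'Nc6'): 'Bb5',
}

def book_move(history):
    """Return the book reply for the game so far, or None when the
    position has left the book."""
    return BOOK.get(tuple(history))

print(book_move(['e4', 'e5']))   # → 'Nf3' (still in book)
print(book_move(['d4', 'd5']))   # → None (out of book: search)
```

The awkward transition the text describes happens exactly at the `None` case, when the program inherits a position whose merits its evaluation function may not understand.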

Game play history

[Chess diagram: CHAOS v Chess 4.0]

CHAOS played in twelve ACM computer chess tournaments[17] and four World Computer Chess Championships (WCCC).[1] Its debut was at the 1973 ACM computer chess tournament, where it took 2nd place.[18] In 1974 it again took 2nd place, this time at the WCCC, defeating the tournament favorite Chess 4.0 but losing to Kaissa. CHAOS came close to winning the 1980 WCCC, but lost to Belle in a playoff.[4] The 1985 ACM computer chess tournament was CHAOS' last competition.[1]

One of CHAOS' notable victories was over Chess 4.0 at the 1974 WCCC tournament. Chess 4.0 was unbeaten by any other program up until then. Playing as white, CHAOS made a knight sacrifice (16 Nd4-e6!!) that traded material for open lines of attack and eventually won the game.[13] CHAOS’ authors thought the move was due to an unintended programming side effect.[13]

[Chess diagram: Belle v CHAOS]

Belle v CHAOS was the deciding game for the winner of the 1980 WCCC in Linz, Austria. The game, which Belle won, was characterized by many missed opportunities by both programs. In the position after 8 Qd1-a4+, CHAOS played 8 ... Nb4-c6, whereas 8 ... Nb8-c6 would have been much stronger.[4]

Legacy


CHAOS left the competitive arena around 1985, when it became clear that special-purpose chess hardware was going to overwhelm intelligent play and, eventually, human players.[19] It had a strong debut, with wins over established programs showing that superior chess knowledge could defeat brute-force approaches. In time, Moore's law advanced hardware capabilities faster than chess knowledge could be distilled into algorithms. Decades later, however, the chess-playing program AlphaZero beat the reigning brute-force program Stockfish while evaluating far fewer positions (thousands rather than millions per second), using neural networks that learned from previous play,[20] showing that selective search remains an effective strategy.

Notes

  1. ^ Turing was working along similar lines contemporaneously, and by 1952 designed a chess-playing program that, though it could not yet be run on a computer, hand-simulated it while playing a game against a human.[8]
  2. ^ Shannon's evaluation function awarded: (material) 1 point for a pawn, 3 for a knight or bishop, 5 for a rook, 9 for a queen; (position) 1/2 point subtracted for a doubled, backward or isolated pawn; (mobility) 0.1 point for each legal move available.[7]
  3. ^ Competitive chess matches played by humans typically have time limits for a move or for game completion.[9]

References

  1. ^ a b c d e "CHAOS". Chess Programming Wiki. Retrieved 2025-12-17.
  2. ^ a b c "Chess players are experiencing CHAOS". Lakeland Ledger. 1979-11-29. pp. 3, 9.
  3. ^ "Jack O'Keefe". Chess Programming Wiki. Retrieved 2025-12-29.
  4. ^ a b c Robert Byrne (1981-01-25). "The State of the Art (Chess section)". New York Times. Retrieved 2025-12-29.
  5. ^ a b c d Jonathan Schaeffer (2024). "CHESS: Chess History, Experiments, and Search Symposium" (PDF). Retrieved 2025-12-24.
  6. ^ Daniel Klein (2018). "Mighty Mouse". MIT Technology Review. Retrieved 2025-12-23.
  7. ^ a b Claude Shannon (1950). "Programming a Computer for Playing Chess". Philosophical Magazine. Retrieved 2025-12-24.
  8. ^ Liat Clark; Ian Steadman (2012). "Turing's achievements: Codebreaking, AI and the birth of computer science". Wired.
  9. ^ "Chess Terms: Time Controls". chess.com. Retrieved 2025-12-24.
  10. ^ Don Knuth (1990). "Arthur Lee Samuel, 1901-1990 (obituary)" (PDF). TUGboat. 22 (4). Retrieved 2025-12-24.
  11. ^ a b c d Arthur L. Samuel (1959). "Some studies in machine learning using the game of Checkers". IBM Journal of Research and Development. 3 (3): 210–229. doi:10.1147/rd.33.0210.
  12. ^ a b C. S. Strachey (1952). Logical or Non-Mathematical Programmes. Proc. ACM Meeting Toronto, 8-10 Sept., 1952. pp. 46–49.
  13. ^ a b c Alex Bell (1978). "MASTER at IFIPS". Retrieved 2025-12-15.
  14. ^ a b Monty Newborn; Ben Mittman (1980). List of Participants (PDF). The Eleventh ACM's North American Computer Chess Championship, 26–28 October 1980. p. 7. Retrieved 2025-12-25.
  15. ^ John Nunn; Graham Burgess; John Emms; Joe Gallagher (1999). Nunn's Chess Openings. London: Gambit/Everyman Chess. 544 pp. ISBN 978-1-85744-221-2. Retrieved 2025-12-29.
  16. ^ "Opening Book". Chess programming wiki. Retrieved 2025-12-25.
  17. ^ Bill Wall. "ACM Computer Chess". Retrieved 2025-12-28.
  18. ^ "ACM North American Computer Chess Championship". Chess Programming Wiki. Retrieved 2025-12-17.
  19. ^ Hansen Hsu (2020). "AI and Play, Part 1: How games have driven two schools of AI research". Retrieved 2025-12-17.
  20. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].