What Is KARMA?
KARMA Technology |
KARMA® (Kay Algorithmic Realtime Music Architecture) is a unique algorithmic musical-effect generation technology that generates MIDI data (the language that computers and electronic musical instruments use to communicate). In development since 1994, with over 20 years of concentrated research and development behind it, the system has been awarded eleven U.S. patents on various portions of its design. The technology generates unique musical effects in real time, allowing effortless creation of spectacular cascades of complex interweaving notes, techno arpeggios and effects, dense rhythmic and melodic textures, natural-sounding glissandos for acoustic instrument programs, guitar strumming and finger-picking simulations, random effects, auto-accompaniment effects, gliding and swooping portamento and pitch-bend effects, and new sound design possibilities. It is designed to be not only a valuable tool for proficient musicians, but also a means of interactively controlling music generation for people at any level of musical skill, including those with no musical knowledge whatsoever.

KARMA can be difficult to describe, because it is fundamentally a different approach to generating musical effects. You could say that KARMA is an advanced algorithmic pattern and effect generator that creates musical effects based on input source material. A typical arpeggiator can be thought of as a very limited pattern generator; in this sense, what it can do is a tiny subset of the types of pattern and effect generation that KARMA can accomplish, with its many different algorithms seamlessly integrated into a single powerful music generation engine. The KARMA architecture allows the various algorithms to be re-configured in real time, even while the effect is running; various parts of the algorithms may be modified, disabled, and so on.

A performance of a musical phrase can be thought of as having many different attributes that determine the overall effect of the resulting music.
For example, you might say a musical phrase has a rhythm attribute, which is the rhythm with which the notes are played; a cluster attribute, which is the number of notes played at the same time at various places in the phrase (chords); a velocity attribute, which is the volume or accent with which the notes are played; a pan attribute, which is the spatial location of the notes in a stereo or three-dimensional field; and so on. Typically, music that has been recorded or sequenced has all of these attributes pre-determined and fixed in relation to each other: a specific note is played with a specific rhythmic value for a specific period of time, at a specific volume level, at a specific location in the stereo field, with the sound of a specific musical instrument, and these relationships remain fixed. In most if not all auto-accompaniment instruments, for example, achieving a variation in the accompaniment pattern means the instrument essentially switches to a different pre-recorded sequence of musical events (again with relationships that are fixed in the data).

In KARMA, the various aspects of a musical phrase have been divided into separately controllable attributes. Each attribute is controlled by a separate group of parameters, which the user can vary individually or change in groups in real time as the music is being generated. A grouping of all the parameters related to generating a single musical effect is referred to as a Generated Effect, or GE. A single GE comprises over 400 separate parameters, which essentially specify a configuration of the musical algorithms in the core engine. Another way of looking at it is that a GE is a blueprint for a type of musical effect, such as a guitar riff, a sax riff, or a bass line. It specifies the rhythm with which the notes will be generated, the general direction of movement, the types of intervals to be played sequentially, the range and length of the phrase, and so on.
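The idea of a GE as a bundle of separately controllable attribute groups can be sketched in code. The following Python is purely illustrative, with hypothetical class and field names, and a handful of fields standing in for the hundreds of parameters a real GE contains:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Generated Effect (GE) as a collection of
# independently controllable attribute groups. These names and fields
# are invented for illustration; they are not KARMA's internal layout.

@dataclass
class RhythmGroup:
    pattern: list      # rhythmic values in beats, e.g. [0.5, 0.25, 0.25]

@dataclass
class ClusterGroup:
    sizes: list        # how many notes sound simultaneously at each step

@dataclass
class VelocityGroup:
    accents: list      # velocity (0-127) per step

@dataclass
class PanGroup:
    positions: list    # stereo position (0 = left .. 127 = right) per step

@dataclass
class GeneratedEffect:
    rhythm: RhythmGroup
    cluster: ClusterGroup
    velocity: VelocityGroup
    pan: PanGroup

ge = GeneratedEffect(
    rhythm=RhythmGroup([0.5, 0.25, 0.25, 0.5]),
    cluster=ClusterGroup([1, 1, 2, 1]),
    velocity=VelocityGroup([100, 80, 90, 110]),
    pan=PanGroup([64, 40, 88, 64]),
)

# Because each attribute lives in its own group, one aspect of the
# phrase can be changed in real time without touching the others:
ge.velocity.accents = [v + 10 for v in ge.velocity.accents]
```

The point of the sketch is the separation itself: accenting the phrase more heavily altered only the velocity group, while the rhythm, cluster, and pan groups were left untouched.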
There are groups of parameters that control the rhythm of the notes, the movement of their pitches, their velocity, the panning, the number of notes to generate at a time, and the durations of the notes; there are parameters that allow generated notes to be repeated and melodically transposed; there are parameters for controlling automatic pitch-bend effects; there are parameters that control three different envelopes, which can be applied to tempo, velocity, pitch, duration, and any MIDI control change message; and more. Many parameters or groups of parameters may be controlled by uniquely musical randomizations. However, it is important to note that the actual pitches of the notes themselves are not specified (in most cases); they are supplied or determined by the user's real-time input, which may be provided in one of several ways, such as by a musician playing a MIDI instrument in real time, or by a MIDI sequence or guide track played through KARMA. Therefore, a chord voiced one way can produce a completely different result than the same chord voiced a different way.

A KARMA Module contains a single GE along with other settings such as the input note key range, how the module will be triggered (by the keyboard or some other gesture or action), how the GE will be advanced (by internal clock or user-specified triggers), and other settings that are not stored as part of the GE but may be changed depending on the current application. A KARMA Performance is a grouping of one or more modules together with some overall global settings such as tempo and external MIDI controller routings; it can be anything from a groove consisting of drums, bass, and several other parts, to a multiple-module harp glissando in which each module performs one sweep of the harpist's hands. One use of KARMA is to provide individual effects such as various guitar strumming templates, various techno arpeggios, and so on.
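The dependence on the player's input notes described above can be illustrated with a toy generator. In this hypothetical Python sketch (not KARMA's actual algorithm; all names are invented), pitches are drawn only from whatever notes the user supplies, while rhythm and cluster parameters shape everything else:

```python
import random

def generate_phrase(input_notes, rhythm, clusters, seed=0):
    """Toy pattern generator: pitches come only from the notes the
    player supplies; the GE-like parameters shape the rest.
    (Illustrative only -- not KARMA's actual engine.)"""
    rng = random.Random(seed)
    phrase = []
    for beat, size in zip(rhythm, clusters):
        # Pick 'size' pitches per step from the supplied note pool
        notes = rng.sample(input_notes, min(size, len(input_notes)))
        phrase.append((beat, sorted(notes)))
    return phrase

one_hand = [60, 64, 67, 71]            # Cmaj7 voiced in one octave
two_hands = [48, 52, 55, 59, 64, 71]   # Cmaj7 spread over two octaves

# The same effect configuration applied to two different voicings:
rhythm = [0.25, 0.25, 0.5]
clusters = [1, 2, 1]
print(generate_phrase(one_hand, rhythm, clusters))
print(generate_phrase(two_hands, rhythm, clusters))
```

Feeding the same configuration two different voicings of the same chord produces two different phrases, because the generator never stores pitches of its own.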
But KARMA can also be used to generate infinitely variable randomized grooves and accompaniment backings. In this incarnation, it is fundamentally different from anything that has been developed before. Until now, there have been two basic types of algorithmic backing-track generation. The traditional method, used in all auto-accompaniment keyboards, is a system that analyzes notes played on a keyboard (chord recognition) and then plays back patterns stored in memory through transposition tables. The patterns themselves are static (only the tonality is changed), yielding a monotonous, repetitive performance. The second method is the one used by certain software programs such as Jammer and Band-In-The-Box. These systems are truly algorithmic in that they create new patterns each time the algorithm is called. However, they are not implemented in real time: you must first input a chord structure in a proprietary format, and once the algorithm has created a pattern you cannot interact with the music in real time other than to play it back and perhaps play along with it.

KARMA combines the algorithmic diversity of the second method with the real-time control and immediate access of the first to create a new form of interactive groove generation, in which the user is in more direct control, since what is produced is directly related to which notes are pressed. If one note is sent as input, KARMA can use just that one note; if ten notes are sent, KARMA can use those ten notes to create its algorithmic patterns. A Cmaj7 voiced on a keyboard with one hand can create a totally different effect than a Cmaj7 voiced with two hands in different octaves. Furthermore, extensive aspects of the rhythm, velocity, chord size, and other parameters can be randomly varied in real time, allowing the user to control the complexity and density of the resulting backing tracks and musical effects.
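As a rough illustration of this kind of bounded real-time randomization, the following hypothetical Python sketch varies the velocities and density of a fixed groove within user-set limits. The function and parameter names are invented for this example; a real engine would expose far richer controls:

```python
import random

def vary(phrase, rng, velocity_spread=0, drop_probability=0.0):
    """Apply musically bounded randomization to a fixed phrase:
    velocities wander within +/- velocity_spread, and notes may be
    thinned out to control density. A sketch, not KARMA's engine."""
    out = []
    for note, vel in phrase:
        if rng.random() < drop_probability:
            continue  # thin the texture by dropping this note
        v = vel + rng.randint(-velocity_spread, velocity_spread)
        out.append((note, max(1, min(127, v))))  # clamp to MIDI range
    return out

rng = random.Random(42)
groove = [(36, 110), (42, 70), (38, 100), (42, 70)]  # kick/hat/snare/hat

# Each pass over the same stored groove yields a slightly different
# performance instead of an identical repeat:
print(vary(groove, rng, velocity_spread=12, drop_probability=0.2))
print(vary(groove, rng, velocity_spread=12, drop_probability=0.2))
```

With the spread and drop probability set to zero the groove repeats exactly, so the same two knobs take the result anywhere from machine-perfect repetition to a loose, humanized feel.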
KARMA Technology - Background
KARMA has been under development since 1994 by Stephen Kay, a professional musician and programmer well recognized in the music industry for his sound design and factory demo sequence work on a variety of keyboards. While working with the Japanese keyboard company KORG as an independent consultant on the design and programming of the i-Series Interactive Music Workstations, he gained enormous insight into the manipulation of MIDI data by creating a completely functional software prototype of the product, which was used to make crucial decisions regarding the implementation of transposition tables, chord recognition, and other key features. Furthermore, since 1989 he has worked with KORG as an independent consultant, programming many of the factory demo sequences distributed worldwide in KORG products, during which time he learned every trick in the book (and invented some) for achieving realistic instrumental nuances, special effects through MIDI, and the like. Based on those experiences, and also to make it easier for himself to create impressive sequences, Kay began developing software with the following goals:
• to make it easy to achieve some of the effects that are traditionally difficult for keyboard players to emulate, such as guitar strumming and finger-picking, and natural-sounding glissandos;
• to generate drum, bass, and accompaniment grooves (backing tracks) that can be randomly varied to produce more human, less mechanical performances.