Journal of the Brazilian Computer Society, vol. 14, no. 3
http://www.scielo.br

Letter From The Guest Editors
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300001&lng=en&nrm=iso&tlng=en

A machine learning approach to automatic music genre classification
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300002&lng=en&nrm=iso&tlng=en
This paper presents a non-conventional approach to the automatic music genre classification problem. The proposed approach uses multiple feature vectors and an ensemble of pattern recognition classifiers, organized according to space and time decomposition schemes. Although music genre classification is a multi-class problem, the task is accomplished with a set of binary classifiers whose results are merged to produce the final genre label (space decomposition). Music signals are also decomposed into time segments taken from the beginning, middle, and end of the original signal (time decomposition). The final classification is obtained from the set of individual results according to a combination procedure. Classical machine learning algorithms are employed, including Naïve Bayes, decision trees, k-nearest neighbors, support vector machines, and multilayer perceptron neural networks. Experiments were carried out on a novel dataset, the Latin Music Database, which contains 3,160 music pieces categorized into 10 musical genres. Experimental results show that in most cases the proposed ensemble approach outperforms both global and individual segment classifiers. Some experiments on feature selection were also conducted, using the genetic algorithm paradigm.
They show that the most important features for the classification task vary according to their origin in the music signal.

Agent-based guitar performance simulation
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300003&lng=en&nrm=iso&tlng=en
The goal of this paper is to describe a computer-aided performance and composition tool that aims to expand a guitarist's capabilities by providing innovative ways for the user to interact with the system. To achieve this, we adopted an agent-based approach, independently modeling the active elements involved in a guitar performance as autonomous agents named Left-Hand, Right-Hand, and Speaker (the guitar itself). These agents communicate with each other to make musical decisions, especially those related to the choice of chord shapes. The musical elements (harmony and rhythm) are defined independently by the Left-Hand and Right-Hand agents, respectively. The most relevant aspects of this work, however, are the algorithms and strategies used to process both harmonic and rhythmic data. Finally, we evaluate the system and discuss the results of the implemented techniques.

CinBalada: a multiagent rhythm factory
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300004&lng=en&nrm=iso&tlng=en
CinBalada is a system for the automatic creation of polyphonic rhythmic performances that mixes elements from different musical styles. The system is based on agents that act as musicians playing percussion instruments in a drum circle. To produce a collectively consistent rhythmic performance, each agent chooses from a database the rhythm pattern for its instrument that satisfies the "rhythmic role" assigned to it. A rhythmic role is a concept we propose here to represent culture-specific rules for the creation of polyphonic performances.
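The space and time decomposition scheme described in the genre-classification abstract above can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the thirds-based segmentation, and the majority-vote combination rule are hypothetical stand-ins, not the paper's exact feature extraction or combination procedure.

```python
from collections import Counter

def split_into_segments(signal):
    """Time decomposition: beginning, middle, and end thirds of the signal."""
    n = len(signal)
    third = n // 3
    return [signal[:third], signal[third:2 * third], signal[2 * third:]]

def majority_vote(labels):
    """Combine the individual decisions into a final genre label."""
    return Counter(labels).most_common(1)[0][0]

def classify_ensemble(signal, segment_classifiers):
    """Each classifier votes on its own time segment.

    `segment_classifiers` is a list of callables, one per segment, each
    returning a genre label -- a stand-in for the merged set of binary
    classifiers (space decomposition) applied to that segment.
    """
    segments = split_into_segments(signal)
    votes = [clf(seg) for clf, seg in zip(segment_classifiers, segments)]
    return majority_vote(votes)
```

A classifier trained on the beginning of a piece may disagree with one trained on the end; the vote over time segments is what makes the ensemble more robust than any single global or segment classifier.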
Soundscape design through evolutionary engines
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300005&lng=en&nrm=iso&tlng=en
Two implementations of an evolutionary sound synthesis method are presented here, using the interaural time difference (ITD) and psychoacoustic descriptors as a way to develop criteria for fitness evaluation. We also explore the relationship between adaptive sound evolution and three soundscape characteristics: key-sounds, key-signals, and sound-marks. A Sonic Localization Field is defined using pairs (Ii, Li) of a sound attenuation factor and an ITD azimuth angle, respectively. These pairs are used to build Spatial Sound Genotypes (SSG) extracted from a waveform population set. An explanation of how our model was initially written in MATLAB is followed by a recent Pure Data (Pd) implementation. We also describe the development and use of parametric scores, a triplet of psychoacoustic descriptors, and the corresponding graphical user interface.

The computation of pitch with vectors
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300006&lng=en&nrm=iso&tlng=en
A pitch model is proposed that is supported by a vector representation of tones. First, an algorithm capable of performing the vector addition of the spectral components of two-tone harmonic complexes is introduced. It initially converts the amplitude, frequency, and phase (AFP) parameters into coordinates of the quotient, distance in octaves, and loudness (QOL) tone space introduced here. Since QOL is isomorphic to the hue, saturation, and value (HSV) color space, a transformation from QOL to the red, green, and blue (RGB) vector space can be formulated, so that the vector addition of two pure tones is conceived by analogy with color-mixing operations. Since the QOL-to-RGB transformation is invertible, the resulting RGB vector sum can be transformed back to QOL.
Then, by converting QOL coordinates back to AFP parameters, a tone is found whose frequency supposedly corresponds to the pitch evoked by the original two-tone complex. For complexes with more than two components, the algorithm is applied sequentially to pairs of vectors: the first two tone vectors are added together, the result is added to the third tone vector, and so on.

AcMus: an open, integrated platform for room acoustics research
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-65002008000300007&lng=en&nrm=iso&tlng=en
This article describes the design and implementation of, and experiences with, AcMus, an open, integrated software platform for room acoustics research that comprises tools for the measurement, analysis, and simulation of rooms for music listening and production. Using affordable hardware such as laptops, consumer audio interfaces, and microphones, the software evaluates relevant acoustical parameters with stable and consistent results, providing valuable information for diagnosing acoustical problems, as well as the possibility of simulating room modifications through analytical models. The system is open source and based on a flexible, extensible Java plug-in framework, allowing cross-platform portability, accessibility, and experimentation, thus fostering collaboration among users, developers, and researchers in the field of room acoustics.
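One standard room-acoustics parameter of the kind the AcMus abstract refers to is the reverberation time (RT60), commonly estimated from a measured impulse response by Schroeder backward integration. The sketch below is a minimal illustration of that general technique, not AcMus's actual implementation (which is in Java); the function names and the T20 evaluation range are assumptions for the example.

```python
import math

def schroeder_decay_db(impulse_response):
    """Backward-integrate the squared impulse response (Schroeder integration)
    and return the energy decay curve in dB, normalized to 0 dB at t = 0."""
    energy = [h * h for h in impulse_response]
    # Accumulate energy from the tail toward the start.
    tail = 0.0
    curve = []
    for e in reversed(energy):
        tail += e
        curve.append(tail)
    curve.reverse()
    total = curve[0]
    return [10.0 * math.log10(c / total) for c in curve]

def rt60_from_decay(decay_db, sample_rate, lo=-5.0, hi=-25.0):
    """Estimate RT60 by linear extrapolation of the -5 to -25 dB decay range
    (a T20 estimate): the time for a 20 dB drop, scaled to 60 dB."""
    t_lo = next(i for i, d in enumerate(decay_db) if d <= lo) / sample_rate
    t_hi = next(i for i, d in enumerate(decay_db) if d <= hi) / sample_rate
    return (t_hi - t_lo) * 60.0 / (lo - hi)
```

Feeding the first function a synthetic exponentially decaying impulse response and the result to the second recovers the decay time implied by the exponential, which is the kind of stable, repeatable estimate the abstract describes.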