Scielo RSS <![CDATA[Journal of the Brazilian Computer Society]]> vol. 15 num. 3 lang. en <![CDATA[<b>Letter from the Guest Editor</b>]]> <![CDATA[<b>NLOOK</b>: <b>a computational attention model for robot vision</b>]]> Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to robotic vision systems, such as automatic object recognition and landmark detection. However, these applications have requirements different from those originally addressed. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections and scalings, and it should select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known and publicly available visual attention models: NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other attention models, and it can also select the scale of each fixation. Thus, the proposed model is a good tool for robot vision systems. <![CDATA[<b>Adaptive complementary filtering algorithm for mobile robot localization</b>]]> As a mobile robot navigates through an indoor environment, the condition of the floor is of low (or no) relevance to its decisions. In an outdoor environment, however, terrain characteristics play a major role in the robot's motion. Without an adequate assessment of terrain conditions and irregularities, the robot will be prone to major failures, since environment conditions may vary greatly. As such, it may assume any orientation about the three axes of its reference frame, which leads to a full six-degrees-of-freedom configuration.
The added three degrees of freedom have a major bearing on position and velocity estimation, due to the higher time complexity of classical techniques such as Kalman filters and particle filters. This article presents an algorithm for mobile robot localization based on the complementary filtering technique, which estimates position and orientation by fusing data from IMU, GPS and compass. Its main advantages are the low implementation complexity and the high quality of the results for navigation in outdoor environments (uneven terrain). The results obtained with this system compare favorably with those obtained using more complex and time-consuming classical techniques. <![CDATA[<b>General detection model in cooperative multirobot localization</b>]]> The cooperative multirobot localization problem consists in localizing each robot of a group within the same environment, where the robots share information in order to improve localization accuracy. Cooperation happens when a robot detects and identifies another one and measures their relative distance; at that moment, both robots can use the detection information to update their own pose beliefs. However, other useful information besides a single detection between a pair of robots can also be used to update pose beliefs: the propagation of a single detection to robots that did not participate in it, the absence of detections, and detections involving more than a pair of robots. A general detection model is proposed to aggregate all of this detection information, addressing the problem of updating pose beliefs in every situation depicted. Experimental results with groups of robots in a simulated environment show that the proposed model improves localization accuracy when compared to conventional single-detection multirobot localization.
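The core complementary-filtering idea used for the sensor fusion above can be sketched in a few lines. This is a minimal, single-axis Python sketch of the generic technique, not the paper's adaptive algorithm: the function name, the 0.98 gain, and the choice of the compass as the absolute reference are illustrative assumptions. The filter blends the integrated gyro rate (reliable over short intervals, but drifting) with an absolute reference (noisy, but unbiased over long intervals).

```python
def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    # High-pass branch: integrate the gyro rate (accurate short-term, drifts).
    # Low-pass branch: pull the estimate toward an absolute reference such as
    # a compass heading (noisy short-term, unbiased long-term).
    # alpha near 1.0 trusts the gyro; (1 - alpha) slowly corrects the drift.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle


# Example: a stationary robot (zero gyro rate) whose heading estimate starts
# at 0 rad is slowly pulled toward the compass heading of 1.0 rad.
estimate = 0.0
for _ in range(200):
    estimate = complementary_filter(estimate, gyro_rate=0.0,
                                    ref_angle=1.0, dt=0.01)
```

The paper's adaptive variant tunes the blending online and fuses full 3D orientation together with GPS position; the fixed-gain scalar version above only shows the low-pass/high-pass split that gives the technique its name.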
<![CDATA[<b>Appearance-based odometry and mapping with feature descriptors for underwater robots</b>]]> The use of Autonomous Underwater Vehicles (AUVs) for underwater tasks is a promising robotic field. These robots can carry visual inspection cameras. Besides serving inspection and mapping activities, the captured images can also be used to aid the navigation and localization of the robots. Visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images; it has been used with a wide variety of robotic locomotion methods. In this context, this paper proposes an approach to visual odometry and mapping for underwater vehicles. Assuming the use of inspection cameras, the proposal is composed of two stages: i) the use of computer vision for visual odometry, extracting landmarks from underwater image sequences, and ii) the development of topological maps for localization and navigation. The integration of these systems allows visual odometry, localization and mapping of the environment. A set of tests with real robots was carried out, addressing online operation and performance issues. The results reveal an accurate and robust approach under several underwater conditions, such as illumination changes and noise, leading to a promising and original visual odometry and mapping technique. <![CDATA[<b>An improved particle filter for sparse environments</b>]]> In this paper, we combine a path planner based on Boundary Value Problems (BVP) with Monte Carlo Localization (MCL) to solve the wake-up robot problem in sparse environments. This problem is difficult because large regions of sparse environments do not provide relevant information for the robot to recover its pose. We propose a novel method that distributes particle poses only over the relevant parts of the environment and leads the robot along these regions using the numeric solution of a BVP.
Several experiments show that the improved method leads to a better initial particle distribution and a better convergence of the localization process. <![CDATA[<b>Compulsory Flow Q-Learning</b>: <b>an RL algorithm for robot navigation based on partial-policy and macro-states</b>]]> Reinforcement learning is carried out on-line, through trial-and-error interactions of the agent with the environment, which can be very time-consuming when robots are considered. In this paper we contribute a new learning algorithm, CFQ-Learning, which uses macro-states (a low-resolution discretisation of the state space) and a partial-policy to get around obstacles, both of them based on the complexity of the environment's structure. The use of macro-states can prevent the learning algorithm from converging, but it accelerates the learning process; on the other hand, the partial-policy can guarantee that the agent fulfils its task even when macro-states are used. Experiments show that CFQ-Learning achieves a good balance between policy quality and learning rate.
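CFQ-Learning couples macro-states with a partial-policy for obstacle avoidance; the partial-policy is beyond a short sketch, but the macro-state idea alone can be illustrated with standard tabular Q-learning. This is a hedged Python sketch under illustrative assumptions (the grid layout, cell size of 4, action set, and learning parameters are not from the paper): fine grid positions are grouped into coarse macro-states, shrinking the value table the learner must fill.

```python
from collections import defaultdict

def macro_state(x, y, cell=4):
    # Low-resolution discretisation: fine grid cells are grouped into
    # coarse macro-states, so many positions share one table entry.
    return (x // cell, y // cell)

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update, applied over macro-states.
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

actions = ("up", "down", "left", "right")
Q = defaultdict(float)

# One transition: the agent moves between fine grid positions, but the
# value table is indexed by the (much smaller) set of macro-states.
s = macro_state(10, 3)       # fine position (10, 3) -> macro-state (2, 0)
s_next = macro_state(13, 3)  # fine position (13, 3) -> macro-state (3, 0)
q_update(Q, s, "right", 1.0, s_next, actions)
```

The trade-off described in the abstract is visible here: coarsening discards state information (so convergence to an optimal fine-grained policy is no longer guaranteed), in exchange for a table that is filled far faster.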