Search results for: explicit algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4073

2033 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy, high-precision motion controller is an important issue in today's industry. Effective solutions are available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is fundamental to robot movement and highly applicable in motion control industries. Alongside the full trajectory profile, a triangular drive is implemented to eliminate error over small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
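
To make the interpolation step concrete, the following is a minimal sketch (not the paper's FPGA implementation, which is realized in hardware logic) of a Bresenham-style 3-axis linear interpolation of the kind a profile generator translates into step pulses; all names and the per-tick pulse format are illustrative.

```python
# Hypothetical sketch of 3-axis linear interpolation (Bresenham-style DDA);
# function and variable names are illustrative, not from the paper.

def interpolate_line_3d(dx, dy, dz):
    """Yield per-tick step pulses (sx, sy, sz) that move a tool head
    from the origin to (dx, dy, dz) along a straight line."""
    steps = max(abs(dx), abs(dy), abs(dz))   # dominant axis drives the clock
    err = [steps // 2] * 3
    delta = [abs(dx), abs(dy), abs(dz)]
    sign = [1 if d > 0 else -1 for d in (dx, dy, dz)]
    for _ in range(steps):
        pulse = [0, 0, 0]
        for axis in range(3):
            err[axis] += delta[axis]
            if err[axis] >= steps:           # accumulator overflow -> emit a pulse
                err[axis] -= steps
                pulse[axis] = sign[axis]
        yield tuple(pulse)

# Example: a short diagonal move emitting 5, 3 and 2 pulses on x, y, z
for p in interpolate_line_3d(5, 3, 2):
    print(p)
```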

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 210
2032 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with high spatial and temporal resolution, which is hard to acquire with current land-observation satellites, is considered a key factor for monitoring environmental change at both global and local scales. Building on the limited high-spatial-resolution observations available, a line of research known as spatiotemporal fusion has developed methods for generating high-spatiotemporal-resolution images by employing auxiliary low-spatial-resolution data with high-frequency observations. However, many spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy, or inefficient performance. Although the fusion methodology based on sparse representation theory has advantages in capturing reflectance changes, stability, and execution efficiency (even more so when overcomplete dictionaries are pre-trained), obtaining a high-accuracy dictionary and characterizing its effect on fusion results are still pending issues. In this paper, we introduce additional image pairs (each pair comprising a Landsat Operational Land Imager acquisition and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only into the coupled dictionary-training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that additional eligible image pairs tend to produce a more accurate overcomplete dictionary, which generally indicates a better image representation and in turn contributes to effective fusion performance, provided the added image pair has seasonal characteristics and spatial structure similar to the original pair. It is therefore reasonable to construct a multi-dictionary training pattern for generating a series of high-spatial-resolution images from limited acquisitions.
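
As an illustration of the dictionary-training step being augmented, here is a compact K-SVD sketch in Python; the patch data is a synthetic stand-in, since the paper trains on co-registered Landsat/MODIS patch pairs, and the atom count and sparsity level are assumed values.

```python
# Minimal K-SVD sketch (numpy + scikit-learn): alternate sparse coding (OMP)
# with rank-1 atom-by-atom dictionary updates.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms=64, sparsity=5, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity) # sparse coding step
        for k in range(n_atoms):                          # dictionary update step
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            # residual without atom k, restricted to the patches that use it
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                             # rank-1 refit of atom k
            X[k, users] = s[0] * Vt[0]
    return D, X

# e.g. Y holds 8x8 patches (64-dim) drawn from co-registered image pairs
Y = np.random.default_rng(1).standard_normal((64, 500))
D, X = ksvd(Y)
```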

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 265
2031 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models that identify those at risk is very helpful in reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, and positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model had the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of patients with a specific disease. Health interventions or lifestyle changes can therefore be based on these models to improve the health of individuals at risk.
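
For readers who want to reproduce the flavor of the comparison, a hedged sketch follows using scikit-learn; the synthetic five-feature data merely stands in for the hospital records, and the hyperparameters are illustrative rather than those of the study.

```python
# Decision tree vs. neural network on tabular risk factors, scored by
# sensitivity/specificity as in the abstract. Data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((350, 5))     # hypertension, DLP, smoking, diabetes, blood group
y = (X[:, 0] + 0.8 * X[:, 2] + rng.standard_normal(350) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(max_depth=4)),
                    ("neural network", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))]:
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2f} specificity={tn / (tn + fp):.2f}")
```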

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 433
2030 Numerical Simulation of Air Pollutant Using Coupled AERMOD-WRF Modeling System over Visakhapatnam: A Case Study

Authors: Amit Kumar

Abstract:

Accurate identification of regions of deteriorated air quality is very helpful in devising better environmental practices and mitigation efforts. In the present study, an attempt has been made to identify the dispersion patterns of air pollutants, especially NOX from vehicular and industrial sources, over the rapidly developing urban city of Visakhapatnam (17°42’ N, 83°20’ E), India, during April 2009. Using the emission factors of different vehicles as well as of industry, a high-resolution 1 km x 1 km gridded emission inventory has been developed for Visakhapatnam city. The dispersion model AERMOD, with explicit representation of planetary boundary layer (PBL) dynamics, is offline-coupled through a purpose-developed coupler mechanism with the high-resolution mesoscale model WRF-ARW to simulate the dispersion patterns of NOX. The meteorological and PBL parameters obtained by employing two PBL schemes of the WRF-ARW model, viz. the non-local Yonsei University (YSU) scheme and the local Mellor-Yamada-Janjic (MYJ) scheme, both of which reasonably represent the boundary layer parameters, are used to drive AERMOD. Significantly different dispersion patterns of NOX have been noticed between summer and winter months. The simulated NOX concentrations are validated against six available monitoring stations of the Central Pollution Control Board, India. Statistical analysis of the model-evaluated concentrations against the observations reveals that WRF-ARW with the YSU scheme coupled to AERMOD shows better performance. The locations of deteriorated air quality are identified over Visakhapatnam based on the validated model simulations of NOX concentrations. The present study advocates the utility of the developed gridded NOX emission inventory, together with the coupled WRF-AERMOD modeling system, for air quality assessment over the study region.
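
A minimal sketch of the offline coupling idea is given below: boundary-layer fields are read from a WRF-ARW output file and written out for AERMOD's meteorological input. PBLH, UST, and HFX are standard WRF output variables, but the file name, grid indices, and the record layout written here are assumptions, not the paper's coupler.

```python
# Pull PBL fields from a WRF-ARW output file to drive AERMOD (illustrative).
from netCDF4 import Dataset

wrf = Dataset("wrfout_d03_2009-04-01.nc")        # hypothetical inner-domain file
pblh = wrf.variables["PBLH"][:, 10, 10]          # PBL height at one grid cell (m)
ust = wrf.variables["UST"][:, 10, 10]            # friction velocity (m/s)
hfx = wrf.variables["HFX"][:, 10, 10]            # sensible heat flux (W/m^2)

with open("aermod_surface.txt", "w") as f:
    for t in range(len(pblh)):
        # one illustrative record per hour for AERMOD's meteorological input
        f.write(f"{t:4d} {hfx[t]:8.1f} {ust[t]:6.3f} {pblh[t]:8.1f}\n")
```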

Keywords: WRF-ARW, AERMOD, planetary boundary layer, air quality

Procedia PDF Downloads 284
2029 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals by a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of the sub-bands as scores, organizes them into a single vector, and uses that vector to train a global SVM classifier. The public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (the resulting FFT matrix has a 68% smaller dimension than the original signal), it maintains the signal information relevant to class discrimination. In addition, the results show an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
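
The pipeline's first two stages can be sketched as follows: one FFT per epoch, slicing of the coefficient matrix into sub-bands, and a CSP computed per sub-band via a generalized eigenproblem. Epoch shapes and band edges are illustrative, and the LDA/Bayesian/SVM stages are omitted for brevity.

```python
# FFT decomposition into sub-bands followed by per-band CSP (illustrative sketch).
import numpy as np
from scipy.linalg import eigh

def fft_subbands(epochs, fs, n_bands=33, fmax=40.0):
    """epochs: (n_trials, n_channels, n_samples) -> list of sub-band coefficient blocks."""
    coeffs = np.fft.rfft(epochs, axis=-1)
    freqs = np.fft.rfftfreq(epochs.shape[-1], 1.0 / fs)
    edges = np.linspace(0.0, fmax, n_bands + 1)
    return [coeffs[..., (freqs >= lo) & (freqs < hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]

def csp_filters(band_a, band_b, n_pairs=2):
    """CSP on complex FFT coefficients: per-class covariances, joint diagonalization."""
    cov = lambda b: np.mean([np.real(x @ x.conj().T) for x in b], axis=0)
    Ca, Cb = cov(band_a), cov(band_b)
    w, V = eigh(Ca, Ca + Cb)                         # generalized eigenproblem
    idx = np.argsort(w)
    return V[:, np.r_[idx[:n_pairs], idx[-n_pairs:]]].T   # extreme-eigenvalue filters

fs = 250
trials_a = np.random.randn(20, 22, 500)              # stand-in for BCI IV-2a epochs
trials_b = np.random.randn(20, 22, 500)
bands_a, bands_b = fft_subbands(trials_a, fs), fft_subbands(trials_b, fs)
W = csp_filters(bands_a[5], bands_b[5])              # CSP for one sub-band
```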

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 132
2028 Decision Making in Medicine and Treatment Strategies

Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi

Abstract:

Three reasons justify the use of decision theory in medicine: 1. Increased medical knowledge and its complexity make it difficult to process treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying opportunities for treatment in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, this variability raises doubts about the relevance of the choices made by physicians. These differences are generally attributed to differences in the estimated probabilities of success of the treatments involved, and to differing assessments of the outcomes of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making. For this, the decision process should be explained and broken down. A decision problem is to select the best option among a set of choices. The problem is what is meant by "best option", that is, what criteria guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand the differences in medical practices and facilitates the search for consensus. In this respect, there are three types of situations: certain situations, risky situations, and uncertain situations. 1. In certain situations, the consequences of each decision are certain. 2. In risky situations, each decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences, and the probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians. Decision theory can make decisions more transparent: first, by systematically clarifying the data considered in the problem, and second, by setting out the few basic principles that should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus to assist the patient and doctor in their choices.
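
A toy worked example of decision under risk, with invented probabilities and utilities, illustrates the expected-utility comparison that the second type of situation leads to.

```python
# Expected-utility choice between two hypothetical treatments (numbers invented).
treatments = {
    "surgery":    [(0.70, 0.9), (0.30, 0.2)],   # (probability, utility of outcome)
    "medication": [(0.95, 0.6), (0.05, 0.3)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in treatments.items():
    print(f"{name}: EU = {expected_utility(outcomes):.3f}")

best = max(treatments, key=lambda t: expected_utility(treatments[t]))
print("choose:", best)  # under true uncertainty (unknown p), a maximin rule could be used instead
```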

Keywords: decision making, medicine, treatment strategies, patient

Procedia PDF Downloads 581
2027 Damage Mesomodel Based Low-Velocity Impact Damage Analysis of Laminated Composite Structures

Authors: Semayat Fanta, P.M. Mohite, C.S. Upadhyay

Abstract:

The damage meso-model for laminates is one of the most widely applied approaches for analyzing the damage induced in laminated fiber-reinforced polymer composites. It has been developed over the last three decades by many researchers through experimental, theoretical, and analytical work carried out at both the micromechanics and meso-mechanics levels. It is fundamentally built on a micromechanical description that aims to predict damage initiation and evolution up to the failure of the structure under various loading conditions. The current damage meso-model for laminates aims to act as a bridge between the micromechanics and macro-mechanics of the laminated composite structure. The model considers two meso-constituents for the analysis of damage imparted by low-velocity impact: the ply and the interface. The damage mechanisms considered in this study are fiber breakage, matrix cracking, and diffuse damage of the lamina, and delamination of the interface. Damage initiation and evolution in the laminae are modeled in terms of the damaged strain energy density, using damage parameters and the thermodynamic irreversible forces. Interface damage is modeled with a new concept of a spherical micro-void in the resin-rich zone of the interface material; its evolution is controlled by the damage parameter (d) and the radius of the micro-void (r), from the point of damage nucleation to saturation. The constitutive material model for the meso-constituents is defined in a user material subroutine (VUMAT) and implemented in the ABAQUS/Explicit finite element tool. The model predicts damage at the meso-constituent level very accurately and is considered an effective technique for modeling low-velocity impact on laminated composite structures.
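
For illustration, one common meso-scale damage evolution law of the Ladevèze type is sketched below in Python (the actual model is implemented as a VUMAT); the thresholds and the square-root form are representative of this model family, not the paper's calibrated law.

```python
# Ladevèze-type ply damage evolution sketch: the damage variable d grows with
# the square root of the thermodynamic force Y until saturation at d = 1.
# Thresholds Y0 (initiation) and Yc (critical) are illustrative values.
import numpy as np

Y0, Yc = 0.05, 2.0

def update_damage(Y_history):
    """Damage driven by the maximum force seen so far (irreversibility)."""
    d = 0.0
    for Y in np.maximum.accumulate(Y_history):
        d = min(1.0, max(d, (np.sqrt(Y) - np.sqrt(Y0)) / np.sqrt(Yc)))
    return d

# monotonic loading: the thermodynamic force rises as the matrix strains
print(update_damage(np.linspace(0.0, 1.5, 50)))   # -> d ~ 0.7
```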

Keywords: mesomodel, laminate, low-energy impact, micromechanics

Procedia PDF Downloads 229
2026 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. Research in this field follows two main approaches, each with its own strengths and weaknesses; the choice between them depends on the specific research question and the available resources. By combining the approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results and thereby advance our understanding of the brain and its functions. The first approach focuses on using machine learning methods to identify the best features among the vast array present in brain signals. The selection of features varies with the research objective, and different techniques have been employed for this purpose: for instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been used in others to identify the most influential ones. Machine learning techniques have also been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier, or on using meta-classifiers to enhance the final results of ensemble learning. In a separate part of the research, a single classifier is used instead of multiple classifiers, with different sets of features employed to improve the results. The article examines each technique in depth, highlighting its advantages and limitations. By integrating these techniques, researchers can enhance classifier performance in brain signal analysis, ultimately leading to improved accuracy and reliability.
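
As a concrete instance of the meta-classifier idea, the following scikit-learn sketch stacks two base classifiers under a logistic-regression meta-classifier; the data is synthetic rather than EEG, and the choice of base learners is illustrative.

```python
# Stacking: base classifiers' outputs are combined by a meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),        # the meta-classifier
)
print(cross_val_score(stack, X, y, cv=5).mean()) # cross-validated accuracy
```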

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 81
2025 The Image of Saddam Hussein and Collective Memory: The Semiotics of Ba'ath Regime's Mural in Iraq (1980-2003)

Authors: Maryam Pirdehghan

Abstract:

During the Ba'ath Party's rule in Iraq, propaganda was used to justify the regime and to promote Saddam Hussein's image in the collective memory as the greatest Arab leader. Consequently, urban walls were routinely covered with images of Saddam. Relying on these images, the regime aimed to evoke meanings in public opinion that would supposedly strengthen Saddam's power and reconstruct facts to legitimize his political ideology. Nonetheless, Saddam was not always portrayed with common and explicit elements; in certain periods of his rule, the paintings depicted him in an unusual context where various historical and contemporary elements were combined in a narrative background. An understanding of the implied socio-political references of these elements is therefore required to fully elucidate the impact of these images on forming the memory and collective unconscious of the Iraqi people. To obtain such understanding, the following questions must be addressed: a) How was Saddam Hussein portrayed in murals during his rule? b) What elements and mythical-historical narratives are found in the paintings? c) Which of Saddam's political views were impressed on the collective memory through murals? Employing visual semiotics, this study reveals that during Saddam Hussein's regime the paintings were initially simple portraits but gradually transformed into narrative images characterized by a complex network of historical, mythical, and religious elements. These elements demonstrate the transformation of a secular-nationalist politician into a Muslim ruler who tried to instill three major policies in domestic and international relations: the Arabization of Iraq together with the propagation of pan-Arabism ideology (first period), the implementation of an anti-Israel policy (second period), and the implementation of an anti-American-British policy (last period).

Keywords: Ba'ath Party, Saddam Hussein, mural, Iraq, propaganda, collective memory

Procedia PDF Downloads 331
2024 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or offer only a limited operational range for sensors like ultrasonic rangefinders. Additionally, absolute positioning systems like GPS or an IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured during the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point's variation as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a purpose-built testbed is used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
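
A minimal sketch of the second (pixel-measurement) filter is given below: the state is relative height and descent rate, and the measurement is the pinhole projection of a tracked ground feature. The camera focal length, feature offset, and noise levels are assumed values, not those of the paper's testbed.

```python
# EKF with a pixel-position measurement of a ground feature (illustrative).
import numpy as np

f_px, X0 = 800.0, 0.5        # focal length [px] and feature lateral offset [m] (assumed)
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity kinematics
Q = np.diag([1e-4, 1e-3])                        # process noise (assumed)
R = np.array([[4.0]])                            # pixel measurement noise (assumed)

x = np.array([10.0, 0.0])                        # initial guess: 10 m altitude, hovering
P = np.diag([25.0, 4.0])

def step(x, P, z):
    x = F @ x                                    # predict
    P = F @ P @ F.T + Q
    h_pred = f_px * X0 / x[0]                    # pinhole projection h(x)
    H = np.array([[-f_px * X0 / x[0] ** 2, 0.0]])  # Jacobian of h
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K * (z - h_pred)).ravel()           # update with pixel residual
    P = (np.eye(2) - K @ H) @ P
    return x, P

true_h = 8.0
z = f_px * X0 / true_h + np.random.randn()       # one noisy pixel measurement
x, P = step(x, P, z)
print(x)                                         # -> estimated [height, descent rate]
```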

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 116
2023 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. The constant growth of complexity and a shorter time-to-market therefore call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the use of Model-Based Systems Engineering (MBSE) from a system industrialization perspective, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on displaying the needs of the customer as closely as possible: firstly, by providing a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities; secondly, by simulating a vast number of possibilities across a wide range of components, making the model a dynamic tool for powerful analysis and optimization. Thus, the model is no longer merely a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 165
2022 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human-Computer Interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English, to help bridge the communication problems encountered by these communities. The system has various processing modules, consisting of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny-pruned Haar cascade detection algorithms. Canny pruning builds on Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
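
The Canny-pruned cascade detection can be sketched with OpenCV's Python bindings (EmguCV is the equivalent C# wrapper over the same API); the cascade file "hand.xml" is a placeholder for a trained hand cascade.

```python
# Haar-cascade detection with Canny pruning enabled (illustrative sketch).
import cv2

cascade = cv2.CascadeClassifier("hand.xml")          # hypothetical trained cascade
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

hands = cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=5,
    flags=cv2.CASCADE_DO_CANNY_PRUNING,              # skip regions with too few Canny edges
    minSize=(40, 40),
)
for (x, y, w, h) in hands:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.png", frame)
```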

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 188
2021 Performance Comparison of Non-Binary RA and QC-LDPC Codes

Authors: Ni Wenli, He Jing

Abstract:

Repeat-Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are built with a linear encoding method, and the non-binary QC-LDPC codes with algebraic construction methods. The BER performance of the RA and QC-LDPC codes over GF(q) is then compared under BP decoding by simulation over Additive White Gaussian Noise (AWGN) channels.
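
To illustrate the fast RA encoder structure, here is a toy non-binary repeat-accumulate encoder over GF(2^4): repeat, interleave, weighted accumulate. The repetition factor, weights, and interleaver are illustrative, not the paper's construction.

```python
# Toy non-binary RA encoder over GF(16); the field uses the primitive
# polynomial x^4 + x + 1, and addition in characteristic 2 is XOR.
import random

def gf16_mul(a, b, poly=0b10011):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:               # reduce modulo the primitive polynomial
            a ^= poly
        b >>= 1
    return r

def ra_encode(info, rep=3, seed=0):
    rng = random.Random(seed)
    repeated = [s for s in info for _ in range(rep)]       # 1. repetition
    rng.shuffle(repeated)                                  # 2. interleaver
    weights = [rng.randrange(1, 16) for _ in repeated]     # nonzero GF weights
    parity, acc = [], 0
    for w, s in zip(weights, repeated):                    # 3. accumulator:
        acc ^= gf16_mul(w, s)                              #    p_i = p_{i-1} + w_i * u_i
        parity.append(acc)
    return info + parity                                   # systematic codeword

print(ra_encode([3, 7, 12, 1]))
```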

Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm

Procedia PDF Downloads 379
2020 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Reynolds-number flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, with the action of the SGS model included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD library OpenFOAM. The role of the flux-limiter schemes, namely Gamma, SuperBee, van Albada, and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results in order to study wall-generated turbulence. The numerical predictions indicate that the Gamma, van Leer, and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the SuperBee scheme reproduces the velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space, to draw conclusions on the performance of the flux-limiter schemes in the OpenFOAM context.
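
For reference, three of the compared limiters can be written as standard flux-limiter functions psi(r) of the successive-gradient ratio r (the Gamma scheme is an NVD-based blend and does not reduce to a simple psi(r)); these are the textbook forms, and OpenFOAM's implementations may differ in detail.

```python
# Textbook flux-limiter functions; SuperBee hugs the upper TVD bound,
# which is consistent with it being the least diffusive of the set.
import numpy as np

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r), np.minimum(2 * r, 1), np.minimum(r, 2)])

def van_leer(r):
    return (r + np.abs(r)) / (1 + np.abs(r))

def van_albada(r):
    return np.maximum(0, (r * r + r) / (1 + r * r))

r = np.linspace(-1, 4, 11)
for f in (superbee, van_leer, van_albada):
    print(f.__name__, np.round(f(r), 3))
```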

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 192
2019 Adaptive Routing in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. E. H. Benyamina, T. Djeradi, P. Boulet

Abstract:

In this paper, we propose adaptive routing that considers the routing of communications in order to optimize overall performance. The technique uses a newly proposed algorithm to route communications between tasks. The proposed routing of communications leads to better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed routing approach provides significant performance improvements compared to static routing.

Keywords: multi-processor systems-on-chip (MPSoCs), network-on-chip (NoC), heterogeneous architectures, adaptive routing

Procedia PDF Downloads 381
2018 Ultracapacitor State-of-Energy Monitoring System with On-Line Parameter Identification

Authors: N. Reichbach, A. Kuperman

Abstract:

The paper describes the design of a monitoring system for supercapacitor packs in propulsion systems, allowing the instantaneous energy capacity to be determined under power loading. The system contains a real-time recursive-least-squares (RLS) identification mechanism, estimating the values of pack capacitance and equivalent series resistance. These values are required for accurate calculation of the state-of-energy.
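
A sketch of the identification loop follows: recursive least squares with forgetting applied to a first-order supercapacitor model that is linear in theta = [1/C, R]. The signal profile and parameter values are simulated stand-ins, not the paper's measurements.

```python
# RLS identification of capacitance C and ESR R from terminal voltage/current.
# Model: v_k = E_k - R*i_k with E_k = E_{k-1} - (Ts/C)*i_k, which gives
#   v_k - v_{k-1} = -Ts*i_k*(1/C) - (i_k - i_{k-1})*R   (linear in [1/C, R])
import numpy as np

Ts, C_true, R_true = 0.01, 100.0, 0.010
rng = np.random.default_rng(0)
i = 5.0 + 2.0 * np.sin(0.05 * np.arange(2000))            # load current profile
E = 2.7 - Ts * np.cumsum(i) / C_true                      # internal voltage
v = E - R_true * i + 1e-4 * rng.standard_normal(i.size)   # measured terminal voltage

theta = np.array([1.0 / 50.0, 0.05])                      # rough initial guess
P = np.eye(2) * 1e3
lam = 0.999                                               # forgetting factor
for k in range(1, i.size):
    phi = np.array([-Ts * i[k], -(i[k] - i[k - 1])])
    y = v[k] - v[k - 1]
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

print("C ~", 1.0 / theta[0], "F;  ESR ~", theta[1], "Ohm")
```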

Keywords: real-time monitoring, RLS identification algorithm, state-of-energy, super capacitor

Procedia PDF Downloads 539
2017 Gamipulation: Exploring Covert Manipulation through Gamification in the Context of Education

Authors: Aguiar-Castillo Lidia, Perez-Jimenez Rafael

Abstract:

The integration of gamification in educational settings aims to enhance student engagement and motivation through game design elements in learning activities. This paper introduces "Gamipulation," the subtle manipulation of students via gamification techniques serving hidden agendas without explicit consent. It highlights the need to distinguish between beneficial and exploitative uses of gamification in education, focusing on its potential to psychologically manipulate students for purposes misaligned with their best interests. Through a literature review and expert interviews, this study presents a conceptual framework outlining gamipulation's features. It examines ethical concerns like gradually introducing desired behaviors, using distraction to divert attention from significant learning objectives, immediacy of rewards fostering short-term engagement over long-term learning, infantilization of students, and exploitation of emotional responses over reflective thinking. Additionally, it discusses ethical issues in collecting and utilizing student data within gamified environments.  Key findings suggest that while gamification can enhance motivation and engagement, there's a fine line between ethical motivation and unethical manipulation. The study emphasizes the importance of transparency, respect for student autonomy, and alignment with educational values in gamified systems. It calls for educators and designers to be aware of gamification's manipulative potential and strive for ethical implementation that benefits students. In conclusion, this paper provides a framework for educators and researchers to understand and address gamipulation's ethical challenges. It encourages developing ethical guidelines and practices to ensure gamification in education remains a tool for positive engagement and learning rather than covert manipulation.

Keywords: gradualness, distraction, immediacy, infantilization, emotion

Procedia PDF Downloads 38
2016 Evaluation of a Method for the Virtual Design of a Software-based Approach for Electronic Fuse Protection in Automotive Applications

Authors: Dominic Huschke, Rudolf Keil

Abstract:

New driving functionalities like highly automated driving have a major impact on the electrics/electronics architecture of future vehicles and inevitably lead to higher safety requirements. Partly due to these increased requirements, the vehicle industry is increasingly looking at semiconductor switches as an alternative to conventional melting fuses. The protective functionality of semiconductor switches can be implemented in hardware as well as in software. A current approach discussed in science and industry is the implementation of a model of the protected low-voltage power cable on a microcontroller to calculate the cable's temperature. Here, the current information is provided by the continuous current measurement of the semiconductor switch, and the microcontroller issues the signal to open the switch when a previously defined temperature limit for the low-voltage power cable is exceeded. A setup for testing this principle of electronic fuse protection of a low-voltage power cable is built and successfully validated with experiments. The evaluation criterion is the deviation of the measured temperature of the low-voltage power cable from the specified limit temperature when the semiconductor switch opens. The analysis is carried out with an assumed ambient temperature as well as with a measured ambient temperature. Subsequently, the experiments are reproduced in a virtual environment, with explicit focus on simulating the behavior of the microcontroller, with its implemented model of the low-voltage power cable, in a real-time environment. The generated results are then compared with those of the experiments, and on this basis the fully virtual design of the described approach is considered valid.
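
The kind of conductor model run on the microcontroller can be sketched as a lumped thermal RC model integrated from the measured current, with the switch opened when the estimated temperature crosses a limit; all parameter values below are assumptions for illustration.

```python
# Lumped thermal model of a protected wire: Joule heating in, convection out,
# explicit Euler integration; trip when the estimated temperature exceeds the limit.
R_el = 0.005      # conductor resistance [Ohm] (assumed)
R_th = 30.0       # thermal resistance to ambient [K/W] (assumed)
C_th = 2.0        # thermal capacitance [J/K] (assumed)
T_limit = 105.0   # insulation temperature limit [degC] (assumed)
dt = 0.01         # update period [s]

def protect(current_samples, t_ambient=25.0):
    temp = t_ambient
    for n, i in enumerate(current_samples):
        p_in = i * i * R_el                      # Joule heating
        p_out = (temp - t_ambient) / R_th        # loss to ambient
        temp += dt * (p_in - p_out) / C_th       # explicit Euler step
        if temp > T_limit:
            return n * dt                        # time at which the switch opens
    return None

print(protect([80.0] * 100000))   # sustained 80 A overload -> trip time ~5 s
```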

Keywords: automotive wire harness, electronic fuse protection, low voltage power cable, semiconductor-based fuses, software-based validation

Procedia PDF Downloads 109
2015 Performance Evaluation of Packet Scheduling with Channel Conditioning Aware Based on Wimax Networks

Authors: Elmabruk Laias, Abdalla M. Hanashi, Mohammed Alnas

Abstract:

Scheduling in Worldwide Interoperability for Microwave Access (WiMAX) networks became one of the most challenging issues, since it is responsible for distributing the available network resources among all users; this led to the demand for constructing and designing highly efficient scheduling algorithms in order to improve network utilization, increase network throughput, and minimize end-to-end delay. In this study, the proposed algorithm focuses on an efficient mechanism to serve non-real-time traffic in congested networks by considering channel status.

Keywords: WiMAX, quality of service (QoS), OPNET, Diff-Serv (DS)

Procedia PDF Downloads 293
2014 Adaptive Beamforming with Steering Error and Mutual Coupling between Antenna Sensors

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Owing to the close antenna spacing within a compact space, part of the data in one antenna sensor leaks into the other antenna sensors when the sensors in an antenna array operate simultaneously. This phenomenon is called the mutual coupling effect (MCE). It has been shown that the performance of antenna array systems can degrade when the antenna sensors are in close proximity; especially in systems equipped with massive numbers of antenna sensors, degradation of beamforming performance due to the MCE is significant and inevitable. Moreover, it has been shown that even a small angle error between the true direction of the desired signal and the steering angle deteriorates the effectiveness of an array beamforming system. However, the true direction vector of the desired signal may not be exactly known in some applications, e.g., in land mobile-cellular wireless systems. It is therefore worth developing robust techniques to deal with the problems due to the MCE and the steering angle error in array beamforming systems. In this paper, we present an efficient technique for performing adaptive beamforming that is robust against both the MCE and the steering angle error. Only the data vector received by the antenna array is required by the proposed technique. Using the received array data vector, a correlation matrix is constructed to replace the original correlation matrix associated with the received array data vector. Then, the mutual coupling matrix due to the MCE on the antenna array is estimated through a recursive algorithm, and an appropriate estimate of the direction angle of the desired signal is obtained during the recursive process. Based on the estimated mutual coupling matrix, the estimated direction angle, and the reconstructed correlation matrix, the proposed technique can effectively cure the performance degradation due to the steering angle error and MCE. The novelty of the proposed technique is that the implementation procedure is very simple while the resulting adaptive beamforming performance is satisfactory. Simulation results show that the proposed technique provides much better beamforming performance without requiring high computational complexity, as compared with existing robust techniques.
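
The core beamforming computation can be sketched as follows: estimate the array covariance from snapshots and form MVDR weights for an (estimated) steering vector, with diagonal loading as a simple robustness device. The paper's recursive coupling-matrix and angle estimation loops are not reproduced; the geometry and angles are illustrative.

```python
# Sample-covariance MVDR beamforming with diagonal loading (illustrative).
import numpy as np

M, N = 8, 200                                    # sensors, snapshots
d = 0.5                                          # element spacing in wavelengths

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(np.deg2rad(10.0)), s) + noise   # true arrival at 10 deg

Rxx = X @ X.conj().T / N                         # sample covariance
Rxx += 1e-2 * np.trace(Rxx).real / M * np.eye(M) # diagonal loading for robustness
a = steering(np.deg2rad(12.0))                   # steering vector with 2 deg error
w = np.linalg.solve(Rxx, a)
w /= a.conj() @ w                                # MVDR: w = R^-1 a / (a^H R^-1 a)
print(np.abs(w.conj() @ X).mean())               # beamformer output magnitude
```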

Keywords: adaptive beamforming, mutual coupling effect, recursive algorithm, steering angle error

Procedia PDF Downloads 326
2013 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retinal and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics, and more and more OCT images are being generated. For these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy, with multiplicative speckle noise, and simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuation, motion artefacts, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume, and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curves are fitted to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter produces an optimal estimate of the current state of the system by updating its previous state with the available measurements, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of a retinal layer boundary does not work well when the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different representation of the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
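
A minimal sketch of the slice-to-slice tracking step follows: the state is the coefficient vector of a quadratic fitted to a layer boundary, propagated with identity dynamics and corrected by the fit from the next B-scan. The noise covariances and the quadratic order are assumptions.

```python
# Kalman tracking of boundary-curve coefficients across B-scans (illustrative).
import numpy as np

n_coef = 3                                      # quadratic boundary: a*x^2 + b*x + c
F = np.eye(n_coef)                              # boundaries change slowly between slices
Q = np.eye(n_coef) * 1e-4                       # process noise (assumed)
R = np.eye(n_coef) * 1e-2                       # fit/measurement noise (assumed)

def kalman_track(coef_per_slice):
    x, P = coef_per_slice[0], np.eye(n_coef)
    smoothed = [x]
    for z in coef_per_slice[1:]:
        P = F @ P @ F.T + Q                     # predict (identity dynamics)
        K = P @ np.linalg.inv(P + R)            # gain (H = I: we measure coefficients)
        x = x + K @ (z - x)                     # update with the new slice's fit
        P = (np.eye(n_coef) - K) @ P
        smoothed.append(x)
    return smoothed

# z_k would come from fitting boundary points detected in slice k
fits = [np.array([1e-4, 0.02, 120.0]) + 1e-3 * np.random.randn(3) for _ in range(64)]
track = kalman_track(fits)
```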

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 485
2012 Signed Language Phonological Awareness: Building Deaf Children's Vocabulary in Signed and Written Language

Authors: Lynn Mcquarrie, Charlotte Enns

Abstract:

The goal of this project was to develop a visually based, signed language phonological awareness training program and to pilot the intervention with signing deaf children (ages 6-10 years / grades 1-4) who were beginning readers, to assess the effects of systematic, explicit American Sign Language (ASL) phonological instruction on both ASL vocabulary and English print vocabulary learning. Growing evidence that signing learners utilize visually based signed language phonological knowledge (homologous to the sound-based phonological level of spoken language processing) when reading underscores the critical need for further research on innovative reading instructional practices for visual language learners. Multiple single-case studies using a multiple-probe design across content (i.e., sign and print targets incorporating specific ASL phonological parameters – handshapes) were implemented to examine whether a functional relationship existed between instruction and acquisition of these skills. The results indicated that, for all cases, representing a variety of language abilities, the visually based phonological teaching approach was exceptionally powerful in helping children build their sign and print vocabularies. Although intervention and teaching studies have been essential in testing hypotheses about the spoken language phonological processes supporting hearing children's reading development, there are no parallel studies exploring hypotheses about the signed language phonological processes supporting deaf children's reading development. This study begins to provide the evidence needed to pursue innovative teaching strategies that build on the strengths of visual learners.

Keywords: American sign language phonological awareness, dual language strategies, vocabulary learning, word reading

Procedia PDF Downloads 337
2011 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. It is therefore desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, used in conjunction with a machine learning (ML) approach, that enables high-accuracy detection of nitrites in water sources. The sensor works by measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites and because low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing narrowband light, and iii) a photodetector with a built-in amplifier and analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows the absorbance of the sample to be measured at the three wavelengths. To train the regression model, the absorbances of nitrite ions, alone and combined with various interfering ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithms to train and evaluate high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, much cheaper than a commercial spectrophotometer. The ML algorithm reduces the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
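
The calibration step can be sketched as a regression from the three-wavelength absorbance vector to concentration; the synthetic gains and backgrounds below merely mimic cross-sensitivities, and ridge regression stands in for whichever regression algorithm performed best in the study.

```python
# Regress nitrite concentration on absorbance at 295/310/357 nm (illustrative).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
conc = rng.uniform(0.1, 100.0, 200)                     # ppm nitrite
eps = np.array([0.021, 0.035, 0.012])                   # per-wavelength gains (assumed)
A = np.outer(conc, eps)                                 # ideal absorbances
A += rng.uniform(0, 0.3, (200, 3)) * [0.3, 0.1, 0.2]    # interfering-ion background
A += 0.005 * rng.standard_normal((200, 3))              # sensor noise

model = Ridge(alpha=1e-3).fit(A, conc)
sample = np.array([[1.05, 1.76, 0.61]])                 # absorbances of an unknown sample
print("predicted nitrite [ppm]:", model.predict(sample)[0])
```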

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 76
2010 The Effect of Traffic Load on the Maximum Response of a Cable-Stayed Bridge under Blast Loads

Authors: S. K. Hashemi, M. A. Bradford, H. R. Valipour

Abstract:

Recent bridge collapses have raised awareness about the safety and robustness of bridges subjected to extreme loading scenarios such as intentional or unintentional blast loads. The air blast generated by the explosion of bombs or fuel tankers leads to high-magnitude, short-duration loading scenarios that can cause severe structural damage and loss of critical structural members. Hence, more attention needs to be paid to bridge structures in order to develop guidelines that increase their resistance against probable blasts. Recent advances in numerical methods have brought about viable and cost-effective facilities for simulating complicated blast scenarios, which subsequently provide useful references for the safeguarding design of critical infrastructure. In previous studies of common bridge responses to blast load, traffic load is sometimes not included in the analysis. Including traffic load increases the axial compression in bridge piers, especially when the axial load is relatively small, and traffic load can also reduce the uplift of girders and deck when the bridge experiences an under-deck explosion. For more complicated structures like cable-stayed or suspension bridges, however, the effect of traffic loads can be completely different: the tension in the cables increases, and progressive collapse is more likely to happen while traffic loads are present. Accordingly, this study attempts to simulate the effect of traffic load cases on the maximum local and global response of an entire cable-stayed bridge subjected to blast loading, using the LS-DYNA explicit finite element code. The blast loads range from small to large explosions placed at different positions above the deck. Furthermore, the variation of the traffic load factor in the load combination and its effect on the dynamic response of the bridge under blast load are investigated.
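
For context, the air-blast overpressure history applied in such analyses is commonly idealized by the Friedlander wave; a short sketch follows, with peak pressure, positive-phase duration, and decay coefficient chosen for illustration only.

```python
# Friedlander air-blast overpressure: p(t) = p_peak*(1 - t/t_d)*exp(-b*t/t_d).
import numpy as np

def friedlander(t, p_peak, t_d, b):
    p = p_peak * (1.0 - t / t_d) * np.exp(-b * t / t_d)
    return np.where((t >= 0) & (t <= t_d), p, 0.0)   # positive phase only

t = np.linspace(0.0, 0.02, 200)                      # 20 ms window
p = friedlander(t, p_peak=500e3, t_d=0.008, b=1.5)   # 500 kPa peak, 8 ms duration
print(f"positive-phase impulse = {np.trapz(p, t):.1f} Pa*s")
```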

Keywords: blast, cable-stayed bridge, LS-DYNA, numerical, traffic load

Procedia PDF Downloads 338
2009 Promises versus Realities: A Critical Assessment of the Integrated Design Process

Authors: Firdous Nizar, Carmela Cucuzzella

Abstract:

This paper explores how the integrated design process (IDP) was adopted for an architectural project. The IDP is a relatively new approach to collaborative design in architectural projects in Canada. It has gained much traction recently as the closest available approach to the successful management of low-energy building projects and has been advocated as a productive method for multi-disciplinary collaboration within complex projects. This study is based on the premise that there are explicit and implicit dimensions of power within the IDP in the green building industry that may or may not lead to irreconcilable differences in a process that demands consensus. To gain insight into the potential gap between the theoretical promises and practical realities of the IDP, a review of the existing IDP literature is compared with a case study of a competition-based architectural project in Canada, the first to incorporate the IDP in its overall design format. This paper aims to address the under-theorized power relations of the IDP in a real project. It presents a critical assessment through the combined lenses of Jürgen Habermas's theory of deliberative democracy and political theorist Chantal Mouffe's agonistic pluralism, two theories intended to more appropriately embrace the conflictual situations that arise in collaborative environments and to shed light on the relationships of power between engineers, city officials, architects, and designers in this conventional consensus-based model. In addition, propositions for a shift in approach that embraces conflictual differences among participants are put forth, based on Markus Miessen's concept of critical spatial practice. As the IDP is a relatively new design process, its structure requires much deliberation, for which the theoretical framework built in this paper is offered, in order to unlock its true potential.

Keywords: agonistic pluralism, critical spatial practice, deliberative democracy, integrated design process

Procedia PDF Downloads 179
2008 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logical relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on discriminative encoder-decoder architectures have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logical semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can effectively capture the implicit logic information in premise-hypothesis pairs. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
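
A compact PyTorch sketch of the idea as described is shown below: a latent z is sampled by reparameterization, the decoder conditions on z, and an auxiliary logic-supervision head forces z to predict the NLI label. The encoders and the decoder are stubbed by small linear layers for brevity; the dimensions and the single-token decoder are simplifying assumptions, not the paper's architecture.

```python
# Sketch of a VAE with logic supervision on the latent variable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalEGSketch(nn.Module):
    def __init__(self, d_in=256, d_z=64, n_labels=3, vocab=10000):
        super().__init__()
        self.mu = nn.Linear(d_in, d_z)
        self.logvar = nn.Linear(d_in, d_z)
        self.logic_head = nn.Linear(d_z, n_labels)     # logic supervision on z
        self.decoder = nn.Linear(d_z + d_in, vocab)    # stand-in for the real decoder

    def forward(self, pair_repr, label, target_tokens):
        mu, logvar = self.mu(pair_repr), self.logvar(pair_repr)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = F.cross_entropy(self.decoder(torch.cat([z, pair_repr], -1)), target_tokens)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        logic = F.cross_entropy(self.logic_head(z), label)        # fights posterior collapse
        return recon + kl + logic

model = VariationalEGSketch()
loss = model(torch.randn(8, 256), torch.randint(3, (8,)), torch.randint(10000, (8,)))
loss.backward()
```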

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 153
2007 Generic Competences, the Great Forgotten: Teamwork in the Undergraduate Degree in Translation and Interpretation

Authors: María-Dolores Olvera-Lobo, Bryan John Robinson, Juncal Gutierrez-Artacho

Abstract:

Graduates are equipped with a wide range of generic competencies that complement solid curricular competencies and facilitate their access to the labour market in diverse fields and careers. However, some generic competencies, such as the instrumental, personal, and systemic competencies related to teamwork and interpersonal communication skills, decision-making, and organization skills, are seldom taught explicitly and even less often assessed. In this context, translator training embraces the broad range of competencies specified in the undergraduate programs currently taught at universities and opens up the learning experience to cover areas often ignored because of the difficulties inherent in both teaching and assessing them. In practice, translator training combines two well-established approaches to teaching and learning: project-based learning and genuinely cooperative, or merely collaborative, learning. Our professional approach to translator training is a model focused on, and adapted to, the teleworking context of professional translation, presented through the medium of blended e-learning. Teamwork-related competencies are extremely relevant, and they require explicit and implicit teaching so that graduates can be confident about their capacity to make their way in professional contexts. In order to highlight the importance of teamwork and intra-team relationships beyond the classroom, we aim to raise awareness of teamwork processes so as to empower translation students to manage their interaction and ensure that they gain valuable pre-professional experience. With these objectives, at the University of Granada (Spain) we have developed a range of classroom activities and assessment tools. The results of their application are summarized in this study.

Keywords: blended learning, collaborative teamwork, cross-curricular competencies, higher education, intra-team relationships, students’ perceptions, translator training

Procedia PDF Downloads 173
2006 An Atomistic Approach to Define Continuum Mechanical Quantities in One Dimensional Nanostructures at Finite Temperature

Authors: Smriti, Ajeet Kumar

Abstract:

We present a variant of the Irving-Kirkwood procedure to obtain microscopic expressions for cross-section averaged continuum fields, such as internal force and moment, in one-dimensional nanostructures in the non-equilibrium setting. In one-dimensional continuum theories for slender bodies, we deal with quantities such as mass, linear momentum, angular momentum, and strain energy densities, all defined per unit length. These quantities are obtained by integrating the corresponding pointwise (per unit volume) quantities over the cross-section of the slender body. However, no well-defined cross-section exists for these nanostructures at finite temperature. We thus define the cross-section of a nanorod to be an infinite plane that remains fixed in space as time progresses, and we define the above continuum quantities by integrating the pointwise microscopic quantities over this infinite plane. The method yields explicit expressions for both the potential and kinetic parts of the above quantities. We further specialize these expressions to helically repeating one-dimensional nanostructures in order to use them in molecular dynamics studies of extension, torsion, and bending of such nanostructures. As the Irving-Kirkwood procedure does not yield expressions for stiffnesses, we resort to a thermodynamic equilibrium approach to obtain expressions for the axial force, twisting moment, bending moment, and the associated stiffnesses by taking the first and second derivatives of the Helmholtz free energy with respect to the conjugate strain measures. The equilibrium approach yields expressions independent of kinetic terms. We then establish the equivalence of the expressions obtained using the two approaches. The derived expressions are used to understand the extension, torsion, and bending of single-walled carbon nanotubes at non-zero temperatures.
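
In schematic form, the equilibrium relations described above can be written as follows, where A denotes the Helmholtz free energy per unit length and the symbols chosen for the conjugate strain measures (axial strain \varepsilon, twist \phi, bending curvature \kappa) are illustrative and may differ from the paper's exact choices:

F = \frac{\partial A}{\partial \varepsilon}, \qquad M_t = \frac{\partial A}{\partial \phi}, \qquad M_b = \frac{\partial A}{\partial \kappa}

k_{\varepsilon\varepsilon} = \frac{\partial^2 A}{\partial \varepsilon^2}, \qquad k_{\phi\phi} = \frac{\partial^2 A}{\partial \phi^2}, \qquad k_{\kappa\kappa} = \frac{\partial^2 A}{\partial \kappa^2}

Here F is the axial force, M_t the twisting moment, and M_b the bending moment; the second derivatives are the associated stiffnesses, and mixed second derivatives would give the coupling stiffnesses.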

Keywords: thermoelasticity, molecular dynamics, one dimensional nanostructures, nanotube buckling

Procedia PDF Downloads 130
2005 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investment became the primary tool for making a profit through speculation in financial markets. A significant number of traders, both private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is presented to build a reliable trend line, which is the basis for the limit conditions and automated investment signals that form the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to use limit conditions to build a mathematical filter for investment opportunities, and how to integrate all of these in automated investment software. The paper also presents trading results obtained with the presented methods for the leading German financial market index, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. As this paper reveals, these results are superior to those obtained with other similar models. The general idea sustained by this paper is that the presented Price Prediction Line model is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
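
The abstract does not disclose the exact Price Prediction Line formula, so the Python fragment below is only a hedged illustration of the general mechanism it describes: fit a trend line over a rolling window, extrapolate it one step ahead, and let limit conditions filter entry and exit signals. The least-squares line, the window length, and the threshold are assumptions made for this sketch, not the paper's method.

import numpy as np

def prediction_line_signal(prices, window=50, limit=2.0):
    """Return +1 (entry), -1 (exit), or 0 (no trade) for the latest price."""
    recent = np.asarray(prices[-window:], dtype=float)
    t = np.arange(window)
    slope, intercept = np.polyfit(t, recent, 1)   # least-squares trend line
    predicted = slope * window + intercept        # extrapolate one step ahead
    residual_std = np.std(recent - (slope * t + intercept))
    deviation = prices[-1] - predicted
    # Limit condition: act only when the price strays beyond `limit`
    # residual standard deviations from the prediction line.
    if deviation < -limit * residual_std:
        return +1   # price far below the trend line: entry signal
    if deviation > limit * residual_std:
        return -1   # price far above the trend line: exit signal
    return 0

In an automated system of the kind described, such a function would run on every new price bar, and its output would be combined with risk management rules before any order is sent to the market.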

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 134
2004 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach

Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat

Abstract:

Large quantities of mine tailings are produced every year as part of the extraction of phosphates, gold, copper, and other materials. Mine tailings have high water content and very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the consolidation behavior of the tailings. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as it enforces the conservation of mass and momentum and treats hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as the settling column test and the seepage consolidation test, are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement versus time curves is therefore explored. In this work, an inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is inspired by the social behavior of animals searching for food sources. A finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. Its effectiveness is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further confirmed through comparison with previously reported hydraulic conductivity parameters.
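
As a concrete illustration of the inverse analysis, the sketch below implements a standard PSO loop in Python. The two-parameter power-law form k = C * e**D for hydraulic conductivity (with e the void ratio) and the forward_model interface are assumptions made for the sketch; in the paper, the forward solver is the finite-difference large-strain consolidation model.

import numpy as np

def pso_inverse(measured, forward_model, bounds, n_particles=30, n_iter=200):
    """Find hydraulic conductivity parameters minimizing the pore pressure misfit."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T         # per-parameter bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))  # particle positions
    v = np.zeros_like(x)                             # particle velocities

    def misfit(p):
        # Sum-of-squares mismatch between simulated and measured dissipation curves.
        return np.sum((forward_model(p) - measured) ** 2)

    pbest = x.copy()
    pbest_f = np.array([misfit(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard PSO update: inertia plus cognitive (pbest) and social (gbest) pulls.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest                                     # best-fit parameters, e.g. (C, D)

For example, with bounds = [(1e-10, 1e-6), (2.0, 6.0)] for (C, D), each particle proposes a conductivity law, the forward model simulates the base excess pore pressure dissipation, and the swarm converges toward the parameter pair that best reproduces the measured curve.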

Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings

Procedia PDF Downloads 140