Search results for: error compensation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2192

932 Representativity Based Wasserstein Active Regression

Authors: Benjamin Bobbia, Matthias Picard

Abstract:

In recent years, active learning methodologies based on the representativity of the data have seemed more promising as a way to limit overfitting. We present a query methodology for regression that uses the Wasserstein distance to measure the representativity of the labelled dataset relative to the global distribution. A crucial use of GroupSort Neural Networks is made in this work, which yields a double advantage: the Wasserstein distance can be expressed exactly in terms of such networks, and explicit bounds on their size and depth, together with rates of convergence, are available. Heterogeneity of the dataset is also taken into account by weighting the Wasserstein distance with the approximation error from the previous step of active learning. Such an approach leads to reduced overfitting and high prediction performance after only a few query steps. After detailing the methodology and algorithm, an empirical study is presented in order to investigate the range of our hyperparameters. The performance of this method, in terms of the number of queries needed, is compared with other classical and recent query methods on several UCI datasets.
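The query rule can be sketched in a few lines. The sketch below is a minimal 1-D illustration only: it uses SciPy's empirical Wasserstein distance rather than the paper's GroupSort-network estimator, takes absolute residuals from the previous model as the heterogeneity weights, and all function and variable names are invented.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def select_query(pool_x, labelled_x, residuals):
    """Pick the pool point whose addition makes the labelled set most
    representative of the pool, in an error-weighted Wasserstein sense.
    `residuals` are |y - y_hat| on the pool from the previous model; they
    weight the pool distribution so poorly fit regions count for more."""
    weights = residuals / residuals.sum()
    best_idx, best_dist = None, np.inf
    for i, x in enumerate(pool_x):
        candidate = np.append(labelled_x, x)
        d = wasserstein_distance(candidate, pool_x, v_weights=weights)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx

rng = np.random.default_rng(1)
pool = rng.normal(size=200)                    # unlabelled covariates
labelled = pool[:5].copy()                     # points labelled so far
resid = np.abs(rng.normal(size=200)) + 1e-9    # previous-step errors
print(select_query(pool, labelled, resid))
```

In the paper the distance is instead evaluated through a GroupSort network, which is what provides the size, depth, and convergence guarantees mentioned above.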

Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression

Procedia PDF Downloads 82
931 Comparative Study of Different Enhancement Techniques for Computed Tomography Images

Authors: C. G. Jinimole, A. Harsha

Abstract:

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications; it helps to visualize and extract details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All of these techniques are compared with each other to find out which provides the better contrast for a CT image. For the comparison, the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest and best-quality image of all the techniques studied and obtained the highest PSNR value. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
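Both comparison metrics, and the logarithmic transformation that won the comparison, are straightforward to reproduce. A minimal sketch, assuming 8-bit grayscale images and using a synthetic low-contrast slice in place of real CT data:

```python
import numpy as np

def log_transform(img, max_val=255.0):
    """Logarithmic transformation: s = c * log(1 + r), scaled to [0, max_val]."""
    c = max_val / np.log(1.0 + img.max())
    return c * np.log1p(img.astype(np.float64))

def mse(original, enhanced):
    return np.mean((original.astype(np.float64) - enhanced) ** 2)

def psnr(original, enhanced, max_val=255.0):
    m = mse(original, enhanced)
    return np.inf if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

# toy usage on a synthetic low-contrast "CT slice"
ct = np.clip(np.random.default_rng(0).normal(60, 10, (64, 64)), 0, 255)
enh = log_transform(ct)
print(f"MSE={mse(ct, enh):.1f}  PSNR={psnr(ct, enh):.1f} dB")
```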

Keywords: computed tomography, enhancement techniques, increasing contrast, PSNR and MSE

Procedia PDF Downloads 315
930 Optimal Configuration for Polarimetric Surface Plasmon Resonance Sensors

Authors: Ibrahim Watad, Ibrahim Abdulhalim

Abstract:

Conventional spectroscopic surface plasmon resonance (SPR) sensors are widely used in fundamental research, environmental monitoring, and healthcare diagnostics. However, they still lack a low limit of detection (LOD), and there is still room for improvement. Conventional SPR sensors are based on detecting a dip in the reflectivity spectrum, which is relatively wide. To improve the performance of these sensors, many techniques and methods have been proposed, either to reduce the width of the dip or to increase the sensitivity. In addition, profiting from the sharp jump in the phase spectrum under SPR, several works have suggested extracting the phase of the reflected wave. However, existing phase-measurement setups are in general more complicated than conventional setups, require more stability, and are very sensitive to external vibrations and noise. In this study, a simple polarimetric technique for phase extraction under SPR is presented, followed by a theoretical error analysis and an experimental verification. The advantages of the proposed technique over existing techniques are elaborated, together with conclusions regarding the best polarimetric function and its corresponding optimal range of metal-layer thicknesses under the conventional Kretschmann-Raether configuration.

Keywords: plasmonics, polarimetry, thin films, optical sensors

Procedia PDF Downloads 405
929 Optimization of Temperature Coefficients for MEMS Based Piezoresistive Pressure Sensor

Authors: Vijay Kumar, Jaspreet Singh, Manoj Wadhwa

Abstract:

Piezoresistive pressure sensors were among the first microelectromechanical systems (MEMS) devices to be developed and still show significant growth, prompted by advances in micromachining techniques and material technology. In MEMS-based piezoresistive pressure sensors, temperature is the main environmental condition affecting system performance, and studying the thermal behavior of these sensors is essential to define the parameters that cause the output characteristics to drift. In this work, the effects of temperature and doping concentration in a boron-implanted piezoresistor for a silicon-based pressure sensor are discussed. We optimized the temperature coefficient of resistance (TCR) and temperature coefficient of sensitivity (TCS) values to determine the effect of temperature drift on sensor performance; to reduce the temperature drift, a high doping concentration is needed. The Wheatstone bridge in a pressure sensor is supplied with either a constant-voltage or a constant-current input. With a constant voltage supply, the thermal drift can be compensated with an external compensation circuit, whereas with a constant current supply the thermal drift can be compensated directly by the bridge itself. It is nevertheless beneficial to also compensate the temperature coefficients of the piezoresistors so as to further reduce the temperature drift. With a current supply, the TCS depends on both the temperature coefficient of the piezoresistive coefficient (TCπ) and the TCR. Since TCπ is a negative quantity and TCR a positive one, it is possible to choose a doping concentration at which the two cancel each other. An exact cancellation of the TCR and TCπ values is not readily attainable, so an adjustable approach is generally used in practical applications. One goal of this work has thus been to better understand the origin of temperature drift in pressure sensor devices so that temperature effects can be minimized or eliminated. This paper describes the optimum doping levels at which the TCS of the pressure transducer is zero due to the cancellation of the TCR and TCπ values. The fabrication and characterization of the pressure sensor are also presented. The optimized TCR value obtained for the fabricated die is 2300 ± 100 ppm/°C, for which the piezoresistors are implanted at a doping concentration of 5E13 ions/cm³, and a TCS value of -2100 ppm/°C is achieved. The desired TCR and TCS values, approximately equal in magnitude, are therefore achieved, and the thermal effects are considerably reduced. Finally, we calculated the effect of temperature and doping concentration on the output characteristics of the sensor. This study allows us to predict the sensor behavior against temperature and to minimize this effect by optimizing the doping concentration.
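The cancellation argument can be made concrete with a first-order arithmetic sketch. Assuming, as an illustration and not as the paper's full analysis, that the constant-current span drift is simply the sum of the two reported coefficients, the measured values leave only a small residual:

```python
# First-order sketch of the compensation argument for a constant-current
# bridge: the span drift is roughly the sum of the (negative) sensitivity
# coefficient and the (positive) resistance coefficient. Values are the
# ones reported for the fabricated die; the additive model is an
# illustrative assumption.

tcr = +2300e-6  # 1/°C, reported TCR (+2300 ppm/°C)
tcs = -2100e-6  # 1/°C, reported TCS (-2100 ppm/°C)

net = tcr + tcs  # residual drift coefficient under constant current
print(f"residual drift ≈ {net * 1e6:+.0f} ppm/°C")   # ≈ +200 ppm/°C

delta_T = 50.0  # °C temperature excursion
print(f"output change over {delta_T:.0f} °C ≈ {net * delta_T * 100:.2f} %")
```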

Keywords: piezo-resistive, pressure sensor, doping concentration, TCR, TCS

Procedia PDF Downloads 184
928 Relative Navigation with Laser-Based Intermittent Measurement for Formation Flying Satellites

Authors: Jongwoo Lee, Dae-Eun Kang, Sang-Young Park

Abstract:

This study presents a precise relative navigation method for satellites flying in formation, using laser-based intermittent measurement data. The measurement data for relative navigation between two satellites consist of a relative distance measured by a laser instrument and relative attitude angles obtained from attitude determination. The relative navigation solutions are estimated by both the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). The solutions estimated by the EKF may become inaccurate or even diverge as the measurement outage time grows, because the EKF relies on a linearization approach. This study shows, however, that the UKF with appropriate scaling parameters provides stable and accurate relative navigation solutions despite long measurement outages and large initial errors, compared to the relative navigation solutions of the EKF. Various navigation results are analyzed by adjusting the scaling parameters of the UKF.
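The scaling parameters being tuned are those of the standard scaled sigma-point construction. A minimal sketch following the common van der Merwe formulation (the study's exact parameterization and state model may differ, and the state vector below is a toy):

```python
import numpy as np

def scaled_sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Van der Merwe scaled sigma points and weights for a UKF.

    alpha sets the spread of the points around the mean, beta encodes
    prior distribution knowledge (2 is optimal for Gaussians), and kappa
    is a secondary spread parameter."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    sigmas = np.vstack([x, x + S.T, x - S.T])    # 2n + 1 points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return sigmas, Wm, Wc

# toy usage: 2-state relative position/velocity with some uncertainty
x = np.array([100.0, 0.5])
P = np.diag([25.0, 0.01])
sigmas, Wm, Wc = scaled_sigma_points(x, P, alpha=0.5)
print(sigmas.round(2))
```

Because the sigma points are propagated through the full nonlinear dynamics, no Jacobian is linearized away, which is why a well-scaled UKF tolerates the long measurement outages that destabilize the EKF.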

Keywords: satellite relative navigation, laser-based measurement, intermittent measurement, unscented Kalman filter

Procedia PDF Downloads 358
927 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to their likelihood of belonging to one (positive) class; a credit card (CC) fraud detection model ranks transactions by their probability of being fraudulent. This approach is often criticized, because firms do not care about the fraud probability itself but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model-building step: the artificial neural network proposed here is trained to maximize profit instead of minimizing the prediction error. Moreover, some studies have shown that the back-propagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and that swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximizing ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.
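The shift from error minimization to profit maximization amounts to swapping the objective function that the trained model is scored against. A toy sketch with an invented cost structure (the paper's actual profit function and the MBO training loop are not reproduced here):

```python
import numpy as np

def total_profit(scores, y_true, amounts, threshold=0.5, invest_cost=10.0):
    """Profit of a fraud-scoring model: each flagged transaction incurs a
    fixed investigation cost; a correctly flagged fraud recovers its
    amount. The cost structure is a hypothetical illustration."""
    flagged = scores >= threshold
    recovered = np.sum(amounts[flagged & (y_true == 1)])
    investigation = invest_cost * np.sum(flagged)
    return recovered - investigation

def sse(scores, y_true):
    """The conventional objective that the profit criterion replaces."""
    return np.sum((y_true - scores) ** 2)

# toy comparison: two candidate score vectors for the same transactions
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.02).astype(int)        # ~2% fraud rate
amounts = rng.exponential(200.0, 1000)           # transaction amounts
scores_a = np.clip(y * 0.7 + rng.random(1000) * 0.4, 0, 1)
scores_b = np.clip(y * 0.5 + rng.random(1000) * 0.5, 0, 1)
for name, s in [("A", scores_a), ("B", scores_b)]:
    print(name, f"SSE={sse(s, y):7.1f}",
          f"profit={total_profit(s, y, amounts):9.1f}")
```

The point of the toy is that a model with a worse SSE can still yield a better profit, which is exactly why the paper optimizes profit directly.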

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 476
926 Instability Index Method and Logistic Regression to Assess Landslide Susceptibility in County Route 89, Taiwan

Authors: Y. H. Wu, Ji-Yuan Lin, Yu-Ming Liou

Abstract:

This study sets up a landslide susceptibility map of County Route 89 at Ren-Ai Township in Nantou County, using the instability index method and logistic regression. Seven susceptibility factors, namely slope angle, aspect, elevation, distance to fold, distance to river, distance to road, and accumulated rainfall, were obtained by GIS, based on the Typhoon Toraji landslide area identified by the Industrial Technology Research Institute in 2001. The landslide percentage of each factor is calculated to acquire the weights, and the grid is graded by means of the instability index method. In this study, landslide susceptibility is classified into four grades, high, medium-high, medium-low, and low, in order to determine the advantages and disadvantages of the two models. The precision of each model is verified by a classification error matrix and the SRC curve. These results suggest that the logistic regression model is preferable to the instability index method for the assessment of landslide susceptibility and is suitable for landslide prediction and precaution in this area in the future.
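The logistic-regression half of the comparison can be sketched directly. The grid data below are synthetic stand-ins for the seven GIS factors, and the quartile grade breaks are illustrative, not the study's actual thresholds:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical grid cells: seven factor columns in the order named in the
# abstract; y = 1 marks a landslide cell in the Typhoon Toraji inventory.
rng = np.random.default_rng(0)
X = rng.random((500, 7))   # slope, aspect, elevation, fold/river/road dist., rainfall
y = (X[:, 0] + 0.5 * X[:, 6] + 0.3 * rng.standard_normal(500) > 0.9).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]          # susceptibility score per cell

# four susceptibility grades from the predicted probabilities (quartiles)
grades = np.digitize(prob, np.quantile(prob, [0.25, 0.5, 0.75]))

print(confusion_matrix(y, model.predict(X)))  # classification error matrix
```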

Keywords: instability index method, logistic regression, landslide susceptibility, SRC curve

Procedia PDF Downloads 294
925 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of the features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs), mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts), is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications of their original energy spectra, anisotropy, and chemical compositions are expected due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for the magnetic fields in the universe is very large, because the field strengths, and especially their orientations, have big uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to ensure that the particle-tracking method conserves the particle's total energy and that energy changes result only from interactions with background photons; hence, special attention should be paid to computational effects. Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle-tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities; the smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses are treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy-loss length. We show, for a particle with the typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification from rectilinear propagation as a function of the traveled distance, of the particle's magnetic rigidity (without energy losses), and of the particle's energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
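The cell-based field model can be sketched as follows. This toy version blends a cell's orientation with its single nearest neighbour near the face, a simplification of the two-cell distance weighting described above; the blending weight and all names are illustrative:

```python
import numpy as np

CELL = 1.0        # Mpc, coherence cell diameter
B0 = 1e-9         # Gauss, uniform strength (1 nG)
BORDER = 0.95     # blend orientations beyond 95% of the cell radius

rng = np.random.default_rng(42)
_dirs = {}        # one random unit vector per cell, sampled lazily

def cell_dir(idx):
    if idx not in _dirs:
        v = rng.standard_normal(3)
        _dirs[idx] = v / np.linalg.norm(v)
    return _dirs[idx]

def egmf(pos):
    """Field at position `pos` (Mpc). Inside the border region the cell's
    orientation is returned as-is; near the face it is distance-weighted
    with the adjacent cell's orientation to avoid a discontinuity."""
    idx = np.floor(pos / CELL).astype(int)
    center = (idx + 0.5) * CELL
    off = pos - center
    k = np.argmax(np.abs(off))                   # axis of the nearest face
    r = abs(off[k]) / (CELL / 2)                 # 0 at centre, 1 at the face
    d = cell_dir(tuple(idx))
    if r > BORDER:
        nb_idx = idx.copy()
        nb_idx[k] += int(np.sign(off[k]))
        w = 0.5 * (r - BORDER) / (1.0 - BORDER)  # -> 1/2 exactly on the face
        d = (1 - w) * d + w * cell_dir(tuple(nb_idx))
        d /= np.linalg.norm(d)
    return B0 * d

print(egmf(np.array([0.30, 0.70, 1.18])))
```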

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 128
924 Low Density Parity Check Codes

Authors: Kassoul Ilyes

Abstract:

The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or by using a transmission medium such as a transmission line; the purpose of the transmission system is to carry the information from the transmitter to the receiver as reliably as possible. LDPC codes did not initially generate enough interest within the coding-theory community; this neglect lasted until the introduction of Turbo codes and the iterative principle, after which it was proposed to adopt Pearl's belief propagation (BP) algorithm for decoding LDPC codes. Subsequently, Luby introduced irregular LDPC codes, characterized by their parity-check matrix. Finally, we study simplifications of binary LDPC codes: we propose a method that makes the exact calculation of the a posteriori probability (APP) simpler, which in turn simplifies the implementation of the system.
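The iterative principle is easiest to see in Gallager's hard-decision bit-flipping decoder, the crude ancestor of the weighted (soft) decisions studied here. The parity-check matrix below is a toy for illustration, not an actual standardized (e.g., 5G) LDPC code:

```python
import numpy as np

# A tiny (8, 4) parity-check matrix, invented for illustration.
H = np.array([[1, 1, 0, 1, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 1, 0],
              [1, 0, 0, 0, 0, 1, 1, 1]], dtype=int)

def bit_flip_decode(r, H, max_iter=20):
    """Hard-decision bit flipping: while any parity check fails, flip the
    bit that participates in the most unsatisfied checks."""
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                 # all parity checks satisfied
        fails = syndrome @ H               # per-bit count of failing checks
        x[np.argmax(fails)] ^= 1           # flip the worst bit
    return x, False

codeword = np.zeros(8, dtype=int)          # the all-zero word is valid
received = codeword.copy()
received[2] = 1                            # one channel error
decoded, ok = bit_flip_decode(received, H)
print(decoded, ok)
```

BP replaces the hard flip counts with probabilistic messages along the edges of H; the simplified weighted decisions studied in the thesis cut down exactly that APP computation.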

Keywords: LDPC, parity check matrix, 5G, BER, SNR

Procedia PDF Downloads 156
923 Optimization of Production Scheduling through the Lean and Simulation Integration in Automotive Company

Authors: Guilherme Gorgulho, Carlos Roberto Camello Lima

Abstract:

Because of the competitive market in which companies currently operate, constant changes require them to react quickly to variability in demand and in their processes. The changes are caused by customers, demand fluctuations, product variations, or the need to serve customers within agreed delivery times, taking into account the continuous search for quality and competitive prices. These changes influence, directly or indirectly, the activities of Planning and Production Control (PPC), which operates at the strategic, tactical, and operational levels of production systems. One area of concern for organizations is the short term (operational level), because at this planning stage any error or divergence will cause waste and impact the on-time delivery of products to customers. Thus, this study aims to optimize the efficiency of production scheduling, using different sequencing strategies in an automotive company. To achieve this objective, we used computer simulation in conjunction with lean manufacturing to build and validate the current model, and subsequently to create future scenarios.

Keywords: computational simulation, lean manufacturing, production scheduling, sequencing strategies

Procedia PDF Downloads 274
922 Occupational Safety in Construction Projects

Authors: Heba Elbibas, Esra Gnijeewa, Zedan Hatush

Abstract:

This paper presents research on occupational safety in construction projects, studying the importance of safety management in projects, including the preparation of a safety plan and program for each project and the identification of the responsibilities of each party to the contract. The research consists of two parts: 1) Field visits: visits to three construction projects, including building projects, road projects, and tower installation. The safety level of these projects was evaluated through a checklist covering the most important safety elements and the degree to which they are applied in the projects. 2) A questionnaire: covering supervisors and engineers, aimed at determining the level of awareness of and commitment to safety standards among different project categories. The results showed the following: i) There is a moderate occupational safety policy. ii) The preparation and storage of maintenance reports are not fully complied with. iii) There is a moderate level of occupational safety training for project workers. iv) The company does not consistently impose penalties on safety violators. v) There is a moderate policy for equipment and machinery safety. vi) Self-injuries occur due to fatigue, lack of attention, deliberate error, and emotional factors, at a rate of 82.4%.

Keywords: management, safety, occupational safety, classification

Procedia PDF Downloads 108
921 A Study of Predicting Judgments on Causes of Online Privacy Invasions: Based on U.S Judicial Cases

Authors: Minjung Park, Sangmi Chai, Myoung Jun Lee

Abstract:

Since there are growing concerns about online privacy, enterprises can become involved in various personal privacy infringement cases with legal consequences. For companies doing business online, it is important to pay extra attention to protecting users' privacy: if firms are aware of the consequences of possible online privacy invasion cases, they can more actively prevent future infringements. This study attempts to predict the probability of ruling types for various invasion cases under U.S. personal privacy law. More specifically, this research explores online privacy invasion cases with guilty verdicts to identify types of criminal punishment, such as penalties, imprisonment, and probation, as well as compensation in civil cases. Based on 853 U.S. judicial cases related to data privacy, ranging from January 2000 to May 2016, this research examines the relationship between personal information infringement cases and adjudications. From an analysis of 41,724 words extracted from the 853 legal cases, this study examines online users' privacy invasion cases to predict the probability of conviction for a firm as an offender under both criminal and civil law. Specifically, it relates the cause of a privacy infringement to the judgment type, i.e., whether it leads to civil or criminal liability, in U.S. courts. The study applies network text analysis (NTA) for data analysis, which is regarded as a useful method for discovering social trends embedded within texts. According to our results, certain online privacy infringement cases, such as those caused by online spamming and adware, carry a high probability that firms will be found liable. Our research results provide meaningful insights to academia as well as industry. First, our study provides a new insight by applying big data analytics to legal cases so that the cause of invasions and their legal consequences can be predicted; since few studies apply big data analytics in the domain of law, specifically online privacy, this study suggests a new area for future research. Second, by adopting the NTA method, this study reflects social influences, such as the development of privacy-invasion technologies and changes in users' awareness of online privacy, in the analysis of judicial cases. Our results indicate that firms need to improve their technical and managerial systems for protecting users' online privacy in order to avoid negative legal consequences.

Keywords: network text analysis, online privacy invasions, personal information infringements, predicting judgements

Procedia PDF Downloads 229
920 A Spiral Dynamic Optimised Hybrid Fuzzy Logic Controller for a Unicycle Mobile Robot on Irregular Terrains

Authors: Abdullah M. Almeshal, Mohammad R. Alenezi, Talal H. Alzanki

Abstract:

This paper presents a hybrid fuzzy logic control strategy for a unicycle robot following a trajectory on irregular terrains. In the literature, researchers have presented path-tracking controllers for mobile robots on non-frictional surfaces. In this work, the robot is simulated driving on irregular terrains with contrasting frictional profiles of peat and rough gravel. A hybrid fuzzy logic controller is utilized to stabilize the robot, drive it precisely along the predefined trajectory, and overcome the frictional impact. The controller gains and scaling factors were optimized using the spiral dynamics optimization algorithm to minimize the mean square error of the linear and angular velocities of the unicycle robot. The robot was simulated on various frictional surfaces and terrains, and the simulation results show that the controller was able to stabilize the robot with superior performance.
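The optimizer itself has a compact form. A minimal 2-D sketch of the spiral dynamics update in the common Tamura-Yasuda formulation, applied to an invented stand-in for the controller's mean-square velocity error over two gains (the paper's actual objective and dimensionality differ):

```python
import numpy as np

def spiral_optimize(f, n_points=30, iters=200, r=0.95, theta=np.pi / 4,
                    bounds=(-5.0, 5.0), seed=0):
    """2-D spiral dynamics optimization: every search point spirals toward
    the current best via x <- S x - (S - I) x*, with S = r * R(theta) a
    contracting rotation."""
    rng = np.random.default_rng(seed)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = r * R
    X = rng.uniform(*bounds, size=(n_points, 2))
    best = X[np.argmin([f(x) for x in X])].copy()
    for _ in range(iters):
        X = X @ S.T - (S - np.eye(2)) @ best     # spiral step toward best
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
    return best

# toy objective standing in for the mean-square velocity error over two gains
mse = lambda g: (g[0] - 1.2) ** 2 + 2.0 * (g[1] + 0.4) ** 2
print(spiral_optimize(mse).round(3))
```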

Keywords: fuzzy logic control, mobile robot, trajectory tracking, spiral dynamic algorithm

Procedia PDF Downloads 496
919 Perception of TQM Implementation and Perceived Cost of Poor Quality: A Case Study of Local Automotive Company’s Supplier

Authors: Fakhruddin Esa, Yusri Yusof

Abstract:

Confirmation of Total Quality Management (TQM) implementation is vital in quality management. This paper focuses on employees' perceptions of TQM implementation at a local automotive company's supplier. The objectives of this study are, first, to determine the perception of TQM implementation among the staff; second, to ascertain the correlation between the variables; and last, to identify the relative influence of the 10 TQM variables on the cost of poor quality (COPQ). The TQM implementation is perceived to be moderate. All correlations are found to be significant, with five variables having moderate to high positive correlation. Out of the 10 variables, quality system improvement, reward and recognition, and customer focus influence the perceived COPQ. This study extends the discussion of these three variables' contribution to TQM in general and to human resource development in the organization. Significant recommendations for lowering the costs of internal errors, such as troubleshooting and scrap, are also discussed. Components of further research that would add value to this study are suggested and could perhaps be implemented through policy-level initiatives.

Keywords: cost of poor quality (COPQ), correlation, total quality management (TQM), variables

Procedia PDF Downloads 219
918 Velocity Logs Error Reduction for In-Service Calibration of Vessel Performance Indicators

Authors: Maria Tsompanoglou, Dimitris Armenis

Abstract:

Vessel behavior in different operational and weather conditions constitutes the main area of interest for the ship operator. Ship speed and fuel consumption are the most decisive parameters in this respect, as their correlation provides information about the economic and environmental efficiency of the vessel, forming the basis for decision-making about maintenance and trading. In analyzing a vessel's operational profile for the evaluation of fuel consumption and the equivalent CO2 emissions footprint, Speed Through Water indications are widely used. The seasonal and regional variations in seawater characteristics, which are available nowadays, can provide the basis for accurately estimating the errors in Speed Through Water indications at any time. Accurate speed values on a route basis enable the operator to identify the ship's fuel and propulsion efficiency and proceed with improvements. This paper discusses case studies in which the actual vessel speed was corrected by a post-processing algorithm. The effects of the correction on standard Key Performance Indicators, as well as operational findings not identified earlier, are also discussed.

Keywords: data analytics, MATLAB, vessel performance monitoring, speed through water

Procedia PDF Downloads 303
917 Liability of AI in Workplace: A Comparative Approach Between Shari’ah and Common Law

Authors: Barakat Adebisi Raji

Abstract:

In the workplace, Artificial Intelligence (AI) has, in recent years, emerged as a transformative technology that revolutionizes how organizations operate and perform tasks. It is a technology with a significant impact on transportation, manufacturing, education, cyber security, robotics, agriculture, healthcare, and many other sectors. By harnessing AI technology, workplaces can enhance productivity, streamline processes, and make more informed decisions. Given the potential of AI to change the way we work and its impact on the labor market in years to come, employers understand that it entails legal challenges and risks despite its inherent advantages. Therefore, as AI continues to integrate into various aspects of the workplace, understanding the legal and ethical implications becomes paramount. Also central to this study is the question of who is held liable when AI causes harm: the company that created the AI, the person who programmed the AI algorithm, or the person who uses the AI? The aim of this paper is thus to provide a detailed overview of how AI-related liabilities are addressed under each legal tradition and to shed light on potential areas of accord and divergence between the two legal cultures. The objectives of this paper are to (i) examine the ability of common law and Islamic law to accommodate the issues and damage caused by AI in the workplace and the legality of compensation for such injury; (ii) discuss the extent to which AI can be treated as a legal personality bearing responsibility; and (iii) examine the similarities and disparities between common law and Islamic jurisprudence on the liability of AI in the workplace. The methodology adopted in this work was qualitative, using a purely doctrinal research method in which information is gathered from primary and secondary sources of law, such as comprehensive materials found in journal articles, expert-authored books, and online news sources. The comparative legal method was also used to juxtapose the approaches of Islamic law and common law. The paper concludes that since AI, in its current legal state, is not recognized as a legal entity, operators or manufacturers of AI should be held liable for any damage that arises, with the determination of who bears responsibility dependent on the circumstances of each scenario. The study recommends the granting of legal personality to AI systems, the establishment of legal rights and liabilities for AI, the establishment of a holistic Islamic virtue-based AI ethics framework, and the consideration of Islamic ethics.

Keywords: AI, healthcare, agriculture, cyber security, common law, Shari'ah

Procedia PDF Downloads 41
916 Grid-Connected Doubly-Fed Induction Generator under Integral Backstepping Control Combined with High Gain Observer

Authors: Oluwaseun Simon Adekanle, M'hammed Guisser, Elhassane Abdelmounim, Mohamed Aboulfatah

Abstract:

In this paper, the modeling and control of a grid-connected 660 kW doubly-fed induction generator wind turbine are presented. Stator flux orientation is used to realize active-reactive power decoupling, enabling independent control of active and reactive power. The recursive integral backstepping technique is used to control the generator speed to its optimum value and to obtain a unity power factor. The controller is combined with a high-gain observer to estimate the mechanical torque of the machine. The most important advantage of this combination of the high-gain observer and the integral backstepping controller is the elimination of the static error that may occur due to uncertainty between the actual value of a parameter and the value estimated by the controller. Simulation results in Matlab/Simulink show the robustness of this control technique in the presence of parameter variation.

Keywords: doubly-fed induction generator, field orientation control, high gain observer, integral backstepping control

Procedia PDF Downloads 364
915 Semantic Based Analysis in Complaint Management System with Analytics

Authors: Francis Alterado, Jennifer Enriquez

Abstract:

Semantic Based Analysis in Complaint Management System with Analytics is an enhanced tool through which clients submit complaints, as well as a mechanism for Palawan Polytechnic College to gather, process, and monitor the status of these complaints. The study includes a mobile application that serves as a remote facility for communication between the students and the school management on the issues encountered by students and the solution of every complaint received. In processing the complaints, text mining and clustering algorithms were utilized. Every module of the system was tested and, based on the results, was 100% free from error before integration was done. System testing also checked the expected functionality of the system, which was 100% functional. The system was tested by 10 students forwarding complaints to 10 departments. Based on the results, the students were able to submit complaints, the system processed them accordingly by identifying the department for which each complaint was intended, and the concerned department was able to give feedback on the received complaint to the student. With this, the system gained a 4.7 rating, which means Excellent.

Keywords: technology adoption, emerging technology, issues and challenges, algorithm, text mining, mobile technology

Procedia PDF Downloads 200
914 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge, which can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips are employed to capture images of the printed circuit board under four different illuminations (white, red, green, and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds that divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using the different illuminations and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. Comparison of the results with manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system that allows for more precise segmentation results using structure analysis.
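Two of the pipeline's building blocks, the opening-based illumination correction and the Dice coefficient, can be sketched on a synthetic image. The structuring-element size and threshold below are invented, not the values estimated from the multimodal histogram in the paper:

```python
import numpy as np
from scipy import ndimage

def correct_illumination(img, size=31):
    """Estimate the smooth background with a grey-value morphological
    opening (removes bright blobs smaller than `size`) and subtract it."""
    background = ndimage.grey_opening(img, size=(size, size))
    return img.astype(np.float64) - background

def dice(seg, truth):
    """Dice coefficient between a binary segmentation and ground truth."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# toy image: one bright solder blob on a sloped illumination background
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
img = 0.3 * xx + rng.normal(0, 2, (128, 128))          # illumination gradient
truth = (xx - 40) ** 2 + (yy - 64) ** 2 < 15 ** 2      # one solder joint
img[truth] += 60

flat = correct_illumination(img)
seg = flat > 30           # stand-in for the histogram-derived threshold
print(f"Dice = {dice(seg, truth):.2f}")
```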

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 338
913 IoT Based Monitoring Temperature and Humidity

Authors: Jay P. Sipani, Riki H. Patel, Trushit Upadhyaya

Abstract:

Today there is demand to monitor environmental factors in almost all research institutes and industries, and even for domestic use. Analog data measurement requires manual effort to note readings, with the possibility of human error; such systems fail to provide and store precise parameter values with high accuracy, and analog systems have the further drawback of limited storage/memory. Therefore, a smart system is required that is fully automated, accurate, and capable of monitoring all the environmental parameters with the utmost possible accuracy, while also being cost-effective and portable. This paper presents wireless sensor (WS) data communication using a DHT11 sensor, an Arduino, a SIM900A GSM module, a mobile device, and a Liquid Crystal Display (LCD). The experimental setup includes a heating arrangement for the DHT11 and transmission of its data using the Arduino and the SIM900A GSM shield. The mobile device receives the data, which is also displayed on the LCD. The heating arrangement is used to heat and cool the temperature sensor to study its characteristics.

Keywords: wireless communication, Arduino, DHT11, LCD, SIM900A GSM module, mobile phone SMS

Procedia PDF Downloads 284
912 Evaluating Forecasts Through Stochastic Loss Order

Authors: Wilmer Osvaldo Martinez, Manuel Dario Hernandez, Juan Manuel Julio

Abstract:

We propose to assess the performance of k forecast procedures by exploring the distributions of forecast errors and error losses. We argue that non-systematic forecast errors are minimized when their distributions are symmetric and unimodal, and that forecast accuracy should be assessed through stochastic loss order rather than expected loss order, which is how it is customarily done in previous work. Moreover, since forecast performance evaluation can be understood as a one-way analysis of variance, we propose to explore loss distributions under two circumstances: when a strict (but unknown) joint stochastic order exists among the losses of all forecast alternatives, and when such an order holds only among subsets of the alternative procedures. Despite the fact that loss stochastic order is stronger than loss moment order, our proposals are at least as powerful as competing tests and are robust to the correlation, autocorrelation, and heteroskedasticity settings those tests consider. In addition, since our proposals do not require samples of the same size, their scope is wider; and because they test the whole loss distribution instead of just loss moments, they can also be used to study forecast distributions as well. We illustrate the usefulness of our proposals by evaluating a set of real-world forecasts.
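The empirical counterpart of first-order stochastic loss order is a pointwise comparison of the loss CDFs. A descriptive sketch (not the paper's formal test), which also illustrates that unequal sample sizes pose no problem:

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / sample.size

def stochastically_smaller(loss_a, loss_b, n_grid=200):
    """Empirical first-order stochastic loss order: A's losses are
    stochastically smaller than B's iff F_a(x) >= F_b(x) for all x
    (A's losses pile up at smaller values)."""
    grid = np.linspace(min(loss_a.min(), loss_b.min()),
                       max(loss_a.max(), loss_b.max()), n_grid)
    return np.all(ecdf(loss_a, grid) >= ecdf(loss_b, grid))

# toy: two forecasters' absolute-error losses, unequal sample sizes
rng = np.random.default_rng(0)
loss_a = np.abs(rng.normal(0.0, 1.0, 300))   # smaller, symmetric errors
loss_b = np.abs(rng.normal(0.5, 1.2, 450))   # biased, noisier errors
print("A stochastically better than B:",
      stochastically_smaller(loss_a, loss_b))
```

Note that the dominance in the sketch implies dominance of every loss moment, which is why stochastic order is the stronger criterion.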

Keywords: forecast evaluation, stochastic order, multiple comparison, non parametric test

Procedia PDF Downloads 90
911 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo

Authors: Li Minghui, Min Shaorong, Zhang Jun

Abstract:

This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. It develops, in order, a trajectory model, a self-guidance detection model, a vessel evasion model, and an anti-torpedo error model in three-dimensional space, to make up for the deficiency of previous research that analyzed confrontational models two-dimensionally. Then, using the Monte Carlo method, the confrontation process of evasion is simulated in MATLAB. Finally, the main factors determining the vessel's survival probability are quantitatively analyzed. The results show that the evasion relative bearing and speed significantly affect the vessel's survival probability. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo's alarming range and alarming relative bearing, improving the alarming range and positioning accuracy, and reducing the response time against the torpedo will significantly improve the vessel's survival probability.
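The Monte Carlo estimate at the core of such a study reduces to averaging survival indicators over random engagements. The sketch below uses a deliberately crude stand-in engagement model, not the paper's 3-D trajectory and self-guidance models; every number in it is invented:

```python
import numpy as np

def survival_probability(evasion_bearing_deg, evasion_speed_kn,
                         n_runs=10_000, seed=0):
    """Monte Carlo estimate of survival probability for one evasion policy.
    Toy engagement model: the vessel survives if the lateral separation
    gained during the torpedo run exceeds a noisy seeker acquisition
    radius. All kinematics and distributions are illustrative."""
    rng = np.random.default_rng(seed)
    bearing = np.radians(evasion_bearing_deg)
    separation = (evasion_speed_kn * np.abs(np.sin(bearing))
                  * rng.uniform(0.8, 1.2, n_runs))
    acquisition = rng.normal(loc=12.0, scale=3.0, size=n_runs)
    return np.mean(separation > acquisition)

for bearing in (45, 90, 135):
    print(bearing, survival_probability(bearing, 30.0))
```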

Keywords: acoustic homing torpedo, vessel evasion, Monte Carlo method, torpedo defense, vessel's survival probability

Procedia PDF Downloads 457
910 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town

Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid

Abstract:

Water has great importance in life. In order to deliver water from sources to users, many procedures must be undertaken by water engineers; one of the main procedures for delivering water to a community is designing pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute the water among the town with the smallest losses. The literature covering the main points of water distribution is reviewed. The methodology introduces two approaches to the research problem: the iterative Hardy-Cross method and the water software Pipe Flow. The results present two designs satisfying the same requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
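The Hardy-Cross side of the comparison is a short iteration: assume loop flows, compute head losses, and apply the loop correction until it vanishes. A single-loop sketch with invented pipe data (clockwise flows positive, head loss h = k·Q·|Q|^(n-1) with n = 2):

```python
# Hardy-Cross loop correction: dQ = -sum(h) / (n * sum(|h/Q|)),
# applied to all pipes of the loop until the correction is negligible.

def hardy_cross_loop(k, Q, n=2.0, tol=1e-6, max_iter=50):
    Q = list(Q)
    for _ in range(max_iter):
        h = [ki * qi * abs(qi) ** (n - 1) for ki, qi in zip(k, Q)]
        dQ = -sum(h) / (n * sum(abs(hi / qi) for hi, qi in zip(h, Q)))
        Q = [qi + dQ for qi in Q]
        if abs(dQ) < tol:
            break
    return Q

k = [2.0, 5.0, 1.0, 3.0]           # resistance coefficients of 4 loop pipes
Q0 = [+0.30, +0.10, -0.20, -0.10]  # initial guessed flows (m^3/s), signed
print([round(q, 4) for q in hardy_cross_loop(k, Q0)])
```

A multi-loop network repeats this correction loop by loop until all corrections vanish, which is exactly the bookkeeping the software automates.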

Keywords: looping pipe networks, Hardy-Cross network accuracy, relative error of the Hardy-Cross method

Procedia PDF Downloads 169
909 Mobile Learning: Toward Better Understanding of Compression Techniques

Authors: Farouk Lawan Gambo

Abstract:

Data compression shrinks files into fewer bits than their original representation. It is especially advantageous on the internet because the smaller a file, the faster it can be transferred; however, most of the concepts in data compression are abstract in nature, making them difficult to digest for some students (engineers in particular). To determine the best approach to learning data compression techniques, this paper first studies the learning preferences of engineering students, who tend to have strong active, sensing, visual, and sequential learning preferences; the paper also considers the advantages of mobile learning: learning at the point of interest, efficiency, connectivity, and many more. A survey was carried out with a reasonable number of students, through random sampling, to see whether considering the learning preferences and the advantages of mobile learning gives a promising improvement over the traditional way of learning. Evidence from data analysis using MS Excel, as a point of concern for error-free findings, shows that there is a significant difference in the students after using learning content provided on smartphones; the findings, presented in bar charts and pie charts, indicate that mobile learning promises to be a valuable mode of learning.

Keywords: data analysis, compression techniques, learning content, traditional learning approach

Procedia PDF Downloads 348
908 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm

Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine

Abstract:

In this paper, numerical simulations are carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to detect the optimal configuration, improving the power coefficient and/or the efficiency. Radial basis functions and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀, and the flapping frequency f. ANSYS FLUENT software is used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil is realized using a user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum has been improved by 1.2% compared with the validated model.
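The metamodeling step replaces expensive solver calls with a cheap surrogate. A sketch using SciPy's RBF interpolator, with an invented analytic function standing in for the FLUENT runs and a random search standing in for NSGA-II; all bounds and values are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def cfd_power_coeff(x):
    """Hypothetical stand-in for the CFD evaluations: a cheap analytic
    'power coefficient' of (h0, theta0, f). In the study, each sample
    would be one ANSYS FLUENT simulation."""
    h0, theta0, f = x.T
    return (0.8 - (h0 - 1.0) ** 2
            - 0.5 * (theta0 - 75.0) ** 2 / 75.0 ** 2
            - (f - 0.14) ** 2 / 0.14 ** 2)

rng = np.random.default_rng(0)
lo, hi = [0.5, 40.0, 0.05], [1.5, 90.0, 0.25]      # invented design bounds
samples = rng.uniform(lo, hi, size=(40, 3))         # design of experiments
values = cfd_power_coeff(samples)

surrogate = RBFInterpolator(samples, values, kernel="thin_plate_spline")

# the GA (NSGA-II in the paper) would now query the surrogate instead of
# the solver; here a dense random cloud stands in for that search
cloud = rng.uniform(lo, hi, size=(5000, 3))
best = cloud[np.argmax(surrogate(cloud))]
print("surrogate optimum (h0, theta0, f):", best.round(3))
```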

Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II

Procedia PDF Downloads 46
907 Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators

Authors: Fethi Soltani, Adel Almarashi, Idir Mechai

Abstract:

Tikhonov regularization and reproducing kernels are among the most popular approaches for solving ill-posed problems in computational mathematics and its applications. Fourier multiplier operators are an essential tool for extending some known linear transforms of Euclidean Fourier analysis, such as the Weierstrass transform, the Poisson integral, the Hilbert transform, the Riesz transforms, the Bochner-Riesz mean operators, the partial Fourier integral, the Riesz potential, and the Bessel potential. Using the theory of reproducing kernels, we construct simple and efficient representations for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error-estimate formula for the approximation and obtain convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write explicitly the extremal function and the corresponding Fourier multiplier operators.
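In its simplest finite-dimensional form, Tikhonov regularization stabilizes an ill-posed linear problem by penalizing the solution norm. A generic numeric sketch (not the paper's Fourier-multiplier setting), using a Hilbert-type matrix as the classic ill-conditioned operator:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized solution of Ax = b:
    minimize ||Ax - b||^2 + lam * ||x||^2,
    i.e. x = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-conditioned discretized operator (a Hilbert-like matrix)
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)

for lam in (0.0, 1e-10, 1e-6):
    x = tikhonov_solve(A, b, lam)
    print(f"lam={lam:.0e}  reconstruction error={np.linalg.norm(x - x_true):.3e}")
```

With lam = 0 the tiny data noise is amplified catastrophically; a small positive lam trades a little bias for a dramatic gain in stability, which is the same trade-off the paper's error-estimate formula quantifies in its operator setting.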

Keywords: Fourier multiplier operators, Gauss-Kronrod method of integration, Paley-Wiener space, Tikhonov regularization

Procedia PDF Downloads 319
906 Minimizing the Impact of Covariate Detection Limit in Logistic Regression

Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque

Abstract:

In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left truncation, because the measuring device fails to detect values falling below a certain threshold. In regression analyses, this causes inflated bias and inaccurate mean squared error (MSE) in the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact of the covariate detection limit on the estimators of the parameters of a simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler, and hence easier to implement, and it is robust to violation of distributional assumptions about the covariate of interest. The performance of the proposed method in producing correct inference, compared to other competing methods, is investigated through extensive simulations. A real-life application of the method is also shown, using data from a population-based case-control study of non-Hodgkin lymphoma.
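The problem being corrected is easy to exhibit by simulation: censoring the covariate at a detection limit and applying the common ad-hoc substitution distorts the logistic slope. The sketch shows the bias only; the paper's response-based regression calibration itself is not reproduced, and all parameter values are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, beta0, beta1, DL = 20_000, -2.0, 1.0, 0.8

x = rng.lognormal(0.0, 0.5, n)                    # positive exposure
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
y = rng.random(n) < p                             # binary outcome

x_sub = np.where(x < DL, DL / 2.0, x)             # ad-hoc DL/2 substitution

fit_full = LogisticRegression(C=1e6).fit(x.reshape(-1, 1), y)
fit_sub = LogisticRegression(C=1e6).fit(x_sub.reshape(-1, 1), y)
print(f"true slope {beta1:.2f}"
      f" | full data {fit_full.coef_[0, 0]:.2f}"
      f" | DL/2 substitution {fit_sub.coef_[0, 0]:.2f}")
```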

Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution

Procedia PDF Downloads 239
905 Tiebout and Crime: How Crime Affect the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, not much is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters hold the local government accountable for the safety of its citizens and move out when the local level of security becomes too alienated from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption): looking for a place elsewhere implies transaction costs, which wealthier citizens are more likely to be able to pay. As a consequence, the average income per capita, and thus the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). A decreasing average income per capita, if not compensated by increasing earnings of the citizens who stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation by higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality: when communities do not succeed in forcing back the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities, covering the period 2000-2014, is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag. This positive effect is dampened by increasing crime rates in neighboring municipalities. The change in violent crimes and, to a lesser extent, thefts and extortions reduces the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income basis is compensated by a limited but wealthier influx of new citizens.

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 309
904 Indoor Localization by Pattern Matching Method Based on Extended Database

Authors: Gyumin Hwang, Jihong Lee

Abstract:

This paper studies a CSS-based indoor localization system, which is easy to implement and inexpensive to compose, and which covers a larger area than other systems. However, this system is affected by reflected distance data: the localization error is caused by the multi-path effect, and error caused by multi-path is difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern-matching method based on an extended database, thereby improving the precision of the estimates. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 sample data were collected from random positions inside the region of the database points. As a result, this paper shows higher accuracy than the existing method, as presented in graphs and tables.
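Pattern matching against the extended database can be sketched as a nearest-fingerprint search. The keyword list mentions the Mahalanobis distance, so that metric is used here; the data layout (per-point range means and covariances) is a guessed illustration, not the paper's exact database format:

```python
import numpy as np

def locate(sample, db_positions, db_means, db_covs):
    """Return the database position whose stored range-measurement
    distribution is closest to the new sample in the Mahalanobis sense."""
    best, best_d2 = None, np.inf
    for pos, mu, cov in zip(db_positions, db_means, db_covs):
        diff = sample - mu
        d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = pos, d2
    return best

# toy database: grid points at 1 m intervals, 3 anchor-range readings each
rng = np.random.default_rng(0)
positions = [(x, y) for x in range(3) for y in range(3)]
means = [rng.uniform(2, 10, 3) for _ in positions]
covs = [np.diag(rng.uniform(0.1, 0.5, 3)) for _ in positions]

sample = means[4] + rng.normal(0, 0.2, 3)        # noisy reading near point 4
print("estimated grid cell:", locate(sample, positions, means, covs))
```

Because each fingerprint carries its own covariance, multi-path-inflated ranges are down-weighted instead of dragging the estimate toward a reflected distance.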

Keywords: chirp spread spectrum, indoor localization, pattern matching, time of arrival, multi-path, Mahalanobis distance, reception rate, simultaneous localization and mapping, laser range finder

Procedia PDF Downloads 244
903 System Identification and Controller Design for a DC Electrical Motor

Authors: Armel Asongu Nkembi, Ahmad Fawad

Abstract:

The aim of this paper is to determine, in a concise way, the transfer function that characterizes a DC electrical motor with a helix. In practice, it can be obtained by applying a particular input to the system and then, from the observation of its output, determining an approximation of the transfer function. In our case, we use a step input and find the transfer-function parameters that reproduce the simulated first-order time response. The simulation of the system is done using MATLAB/Simulink. To determine the parameters, we assume a first-order system, use the Broida approximation to identify the parameters, and then compute the mean square error (MSE). Furthermore, we design a PID controller for the process, first in the continuous-time domain, tuned using the Ziegler-Nichols open-loop method. We then discretize the controller to obtain a digital controller, since most systems are implemented using computers, which are digital in nature.
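Both identification and tuning steps have closed forms. A sketch of the Broida approximation from two points of the step response, followed by the classic Ziegler-Nichols open-loop PID settings; the motor parameters below are invented for illustration:

```python
import numpy as np

def broida_identify(t, y, u_step=1.0):
    """Broida's approximation: fit K*exp(-L*s)/(1 + T*s) to a measured step
    response using the times at 28% and 40% of the final value:
    T = 5.5*(t40 - t28),  L = 2.8*t28 - 1.8*t40."""
    y_final = y[-1]
    K = y_final / u_step
    t28 = t[np.searchsorted(y, 0.28 * y_final)]
    t40 = t[np.searchsorted(y, 0.40 * y_final)]
    T = 5.5 * (t40 - t28)
    L = 2.8 * t28 - 1.8 * t40
    return K, T, L

def ziegler_nichols_pid(K, T, L):
    """Ziegler-Nichols open-loop (reaction curve) PID settings."""
    Kp = 1.2 * T / (K * L)
    Ti, Td = 2.0 * L, 0.5 * L
    return Kp, Ti, Td

# synthetic step response of a first-order-plus-delay motor model
t = np.linspace(0, 10, 2001)
K_true, T_true, L_true = 2.0, 1.5, 0.3
y = np.where(t < L_true, 0.0,
             K_true * (1 - np.exp(-(t - L_true) / T_true)))

K, T, L = broida_identify(t, y)
print(f"identified K={K:.2f}  T={T:.2f}s  L={L:.2f}s")
print("PID (Kp, Ti, Td):",
      tuple(round(v, 2) for v in ziegler_nichols_pid(K, T, L)))
```

The continuous PID obtained this way would then be discretized (e.g., with a bilinear transform at the chosen sampling period) to produce the digital controller mentioned in the abstract.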

Keywords: transfer function, step input, MATLAB, Simulink, DC electrical motor, PID controller, open-loop process, mean square error, digital controller, Ziegler-Nichols

Procedia PDF Downloads 60