Search results for: measurement errors
1781 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, for example in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages that limit their use, chiefly low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reproduce the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's behaviour and to determine the phenomena occurring during temperature fluctuations. For this reason, alternative methods are being developed that rely on rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Environmental temperature loading is, by its nature, prolonged. Studies therefore typically involve measuring the variation of one or more physical and mechanical properties of such a coating over time. Based on these results it is possible to determine the effects of temperature loading and develop methods for improving the coatings' properties. This paper contains a description of the stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicone solution (series 6-10). In order to enhance the stability of the film formed on the brick's surface and make it resistant to variable temperature and humidity loading, nano silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano silica phase was obtained by disintegrating the mixture through sonication. The changes in viscosity and surface tension of the polymers were determined, these being the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the bricks' surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (after a specified number of cycles) the changes in surface hardness (using the Vickers method) and the water absorption of individual samples. On the basis of the obtained results, the degradation process of the polymer coatings, related to the change of their durability over time, was determined.
Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
Procedia PDF Downloads 243
1780 Experimental Measurement for Vehicular Communication Evaluation Using Obu Arada System
Authors: Aymen Sassi
Abstract:
The equipment of vehicles with wireless communication capabilities is expected to be the key to the evolution to next generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of an efficient vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environment (WAVE). Vehicular communication systems, called V2X, support vehicle to vehicle (V2V) and vehicle to infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. Accordingly, this study focuses on the evaluation of the real performance of vehicular communication with special focus on the effects of the real environment and mobility on V2X communication. It starts by identifying the real maximum range that such communication can support and then evaluates V2I and V2V performances. The Arada LocoMate OBU transmission system was used to test and evaluate the impact of the transmission range in V2X communication. The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into account.Keywords: IEEE 802.11p, V2I, V2X, mobility, PLR, Arada LocoMate OBU, maximum range
Procedia PDF Downloads 415
1779 Determinants of Corporate Social Responsibility Adoption: Evidence from China
Authors: Jing (Claire) LI
Abstract:
More than two decades from 2000 to 2020 of economic reforms have brought China unprecedented economic growth. There is an urgent call of research towards corporate social responsibility (CSR) in the context of China because while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies. The author proposes a new framework of factors of CSR adoption. Following common organisational factors and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors, dynamic capability and regional culture. A survey questionnaire was conducted on the CSR adoption of Chinese listed companies in Shen Zhen and Shang Hai index from December 2019 to March 2020. The survey was conducted to collect data on the factors that affect the adoption of CSR. After collection of data, this study performed factor analysis to reduce the number of measurement items to several main factors. This procedure is to confirm the proposed framework and ensure the significant factors. Through analysis, this study identifies four grouped factors as determinants of the CSR adoption. The first factor loading includes dynamic capability and organisational support. The study finds that they are positively related to the first factor, so the first factor mainly reflects the capabilities of companies, which is one component in internal factors. In the second factor, measurement items of stakeholder pressures mainly are from regulatory bodies, customer and supplier, employees and community, and shareholders. In sum, they are positively related to the second factor and they reflect stakeholder pressures, which is one component of external factors. The third factor reflects organisational characteristics. Variables include company size and cultural score. Among these variables, company size is negatively related to the third factor. The resulted factor loading of the third factor implies that organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to the factor. It represents the difference between perception of managers and actual culture of the organisations in terms of cultural dimensions, which is one component in internal factors. It implies that regional culture is an important factor of CSR adoption. Overall, the results are consistent with previous literature. This study is of significance from both theoretical and empirical perspectives. First, from the significance of theoretical perspective, this research combines stakeholder theory, dynamic capability view of a firm, and neo-institutional theory in CSR research. Based on association of these three theories, this study introduces two new factors (dynamic capability and regional culture) to have a better framework for CSR adoption. Second, this study contributes to empirical literature of CSR in the context of China. Extant Chinese companies lack recognition of the importance of CSR practices adoption. This study built a framework and may help companies to design resource allocation strategies and evaluate future CSR and management practices in an early stage.Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture
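For readers who want to see how such an item-reduction step looks in practice, the following is a minimal, illustrative factor-analysis sketch in Python; the twelve survey items, the four-factor choice and the synthetic responses are assumptions for demonstration and are not taken from the study's questionnaire.
```python
# Minimal sketch: exploratory factor analysis on hypothetical CSR survey items.
# Column names, factor count and responses are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
items = [f"item_{i}" for i in range(1, 13)]          # 12 Likert-scale items
survey = pd.DataFrame(rng.integers(1, 6, size=(200, 12)), columns=items)

X = StandardScaler().fit_transform(survey)           # standardise responses
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X)

loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=[f"factor_{k+1}" for k in range(4)])
print(loadings.round(2))                             # inspect which items load together
```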
Procedia PDF Downloads 134
1778 Research of Intrinsic Emittance of Thermal Cathode with Emission Nonuniformity
Authors: Yufei Peng, Zhen Qin, Jianbe Li, Jidong Long
Abstract:
Thermal cathodes are widely used in accelerators, FELs and various kinds of vacuum electronic devices. However, emission nonuniformity exists due to the surface profile, material distribution, temperature variation, crystal orientation, etc., and causes intrinsic emittance growth, brightness decline, envelope size growth, and deterioration or even failure of device performance. To understand how emittance is affected by emission nonuniformity, an intrinsic emittance model consisting of contributions from macro and micro surface nonuniformity is developed analytically, based on a general thermal emission model in the temperature-limited regime for a real 3 mm cathode. The model shows that the relative emittance increases by about 50% due to temperature variation, and by less than 5% from several kinds of micro surface nonuniformity, which is much smaller than reported in other research. In addition, we calculated the emittance growth by combining the Monte Carlo method with PIC simulation; experiments on emission uniformity and emittance measurement are going to be carried out separately.
Keywords: thermal cathode, electron emission fluctuation, intrinsic emittance, surface nonuniformity, cathode lifetime
Procedia PDF Downloads 298
1777 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement
Authors: Maatoug Hassine, Mourad Hrizi
Abstract:
In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We will reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, the size and the shape of source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization one. A topological sensitivity analysis is derived from a source function. Then, we present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support using a level curve of the topological gradient. Finally, we give several examples to show the viability of our presented method.Keywords: geometric inverse source problem, heat equation, topological optimization, topological sensitivity, Kohn-Vogelius formulation
Procedia PDF Downloads 300
1776 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is a suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being among the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the following day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented model of pollution prediction has been verified by way of real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind measurements, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specific requirements of CNN-type networks, these data are transformed into tensors and then processed. This network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several models giving the best results were then selected and compared with other models based on linear regression. The numerical tests, carried out on real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
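The abstract describes the network only at a high level, so the following is a minimal sketch of a CNN of that general shape: a window of past hourly sensor readings goes in as a tensor, convolution and pooling layers extract features, and a 24-element vector of hourly PM10 predictions comes out. The layer sizes, the 48-hour input window and the five input channels are assumptions, not the tuned architecture from the study.
```python
# Minimal sketch of a 1-D CNN mapping past hourly readings to a 24 h PM10 forecast.
import torch
import torch.nn as nn

class PM10Forecaster(nn.Module):
    def __init__(self, n_channels: int = 5, window: int = 48, horizon: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128), nn.ReLU(),
            nn.Linear(128, horizon),             # 24 hourly PM10 predictions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))       # x: (batch, channels, window)

model = PM10Forecaster()
criterion = nn.MSELoss()                          # mean square error training criterion
dummy = torch.randn(8, 5, 48)                     # batch of 8 input tensors
print(model(dummy).shape)                         # torch.Size([8, 24])
```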
Procedia PDF Downloads 149
1775 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring
Authors: Hyun-Woo Cho
Abstract:
Detection of incipient abnormal events in production processes is important for improving the safety and reliability of manufacturing operations and reducing losses caused by failures. The construction of calibration models for predicting faulty conditions is essential in deciding when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. This approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, prediction performance can be improved by excluding non-informative variables from the model building steps.
Keywords: calibration model, monitoring, quality improvement, feature selection
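As an illustration of the calibration-with-variable-selection idea, the sketch below pairs a univariate feature-selection step with a supervised probabilistic classifier on synthetic data; the data generator, the mutual-information selector and the choice of Gaussian naive Bayes are assumptions standing in for the paper's actual data and model.
```python
# Minimal sketch of a calibration model with variable selection, assuming a
# labelled historical data set X (process measurements) and y (normal/faulty).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)      # stand-in for process data

calibration_model = make_pipeline(
    SelectKBest(mutual_info_classif, k=8),      # drop non-informative variables
    GaussianNB(),                               # a supervised probabilistic model
)
scores = cross_val_score(calibration_model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.3f}")
```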
Procedia PDF Downloads 356
1774 Wear Particle Analysis from used Gear Lubricants for Maintenance Diagnostics
Authors: Surapol Raadnui
Abstract:
This work describes an experimental investigation of gear wear in which wear and pitting were intentionally allowed to occur, namely moisture corrosion pitting, acid-induced corrosion pitting, hard contaminant-related pitting and mechanically induced wear. A back-to-back spur gear test rig and a grease-lubricated worm gear rig were used. The test samples of wear debris were collected and assessed with an optical microscope in order to correlate and compare the debris morphology with the pitting and wear degradation of the worn gears. In addition, weight loss from all test gear pairs was assessed using statistical design of experiments. It can be deduced that wear debris characteristics from both cases exhibited a direct relationship with different pitting and wear modes. Thus, it should be possible to detect and diagnose gear pitting and wear through the examination of worn surfaces, the generated wear debris and quantitative measurements such as weight loss.
Keywords: predictive maintenance, worm gear, spur gear, wear debris analysis, problem diagnostic
Procedia PDF Downloads 153
1773 Cyber Security and Risk Assessment of the e-Banking Services
Authors: Aisha F. Bushager
Abstract:
Today we are more exposed than ever to cyber threats and attacks at personal, community, organizational, national, and international levels. More aspects of our lives operate on computer networks simply because we are living in the fifth domain, which is called cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is Electronic Banking (e-Banking), where the banking sector provides online banking services to its clients. To obtain clients' trust and encourage them to use e-Banking, and also to maintain the services provided by the banks and ensure safety, cyber security and risk control should be given a high priority in the e-Banking area. The aim of the study is to carry out a risk assessment of e-Banking services and determine the cyber threats, cyber attacks, and vulnerabilities facing the e-Banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews were conducted with e-Banking experts in different banks. The collected data were then used as an input to the risk management framework provided by the National Institute of Standards and Technology (NIST), which was the model used in the study to assess the risks associated with e-Banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers; the most common attacks facing the e-Banking sector were phishing, malware, and denial-of-service attacks. The risks associated with e-Banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this moderate level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-Banking sector.
Keywords: cyber security, e-banking, risk assessment, threats identification
Procedia PDF Downloads 350
1772 Explaining the Role of Iran Health System in Polypharmacy among the Elderly
Authors: Mohsen Shati, Seyede Salehe Mortazavi, Seyed Kazem Malakouti, Hamidreza Khanke, Fazlollah Ahmadi
Abstract:
Taking unnecessary or excessive medication or using drugs with no indication (polypharmacy) by people of all ages, especially the elderly, is associated with increased adverse drug reactions (ADR), medical errors, hospitalization and escalating the costs. It may be facilitated or impeded by the healthcare system. In this study, we are going to describe the role of the health system in the practice of polypharmacy in Iranian elderly. In this Inductive qualitative content analysis using Graneheim and Lundman methods, purposeful sample selection until saturation has been made. Participants have been selected from doctors, pharmacists, policy-makers and the elderly. A total of 25 persons (9 men and 16 women) have participated in this study. Data analysis after incorporating codes with similar characteristics revealed 14 subcategories and six main categories of the referral system, physicians’ accessibility, health data management, drug market, laws enforcement, and social protection. Some of the conditions of the healthcare system have given rise to polypharmacy in the elderly. In the absence of a comprehensive specialty and subspecialty referral system, patients may go to any physician office so may well be confused about numerous doctors' prescriptions. Electronic records not being prepared for the patients, failure to comply with laws, lack of robust enforcement for the existing laws and close surveillance are among the contributing factors. Inadequate insurance and supportive services are also evident. Age-specific care providing has not yet been institutionalized, while, inadequate specialist workforce playing a major role. So, one may not ignore the health system as contributing factor in designing effective interventions to fix the problem.Keywords: elderly, polypharmacy, health system, qualitative study
Procedia PDF Downloads 151
1771 An Investigation of Water Atomizer in Ejected Gas of a Vehicle Engine
Authors: Chun-Wei Liu, Feng-Tsai Weng
Abstract:
People face pollution threats in the modern age even though standards for vehicle exhaust gas have been established. The goal of this study is to investigate the effect of a water atomizer in a vehicle emission system. Diluted 20% ammonia water was used in the spraying system. Micro particles produced by the exhaust gas of the vehicle engine were accumulated through the atomized spray in a self-developed collector. In the experiments, a self-designed atomization model plate and a gas tank controlled by a microprocessor using Pulse Width Modulation (PWM) logic were prepared for the exhaust test. The gas from the gasoline engine of the vehicle was purified with the model panel collector. The software ANSYS was used to analyze the distribution of the ejected gas. Micro substances and the percentages of CO, HC, CO2, and NOx in the exhaust gas were investigated at different engine speeds and atomizer vibration frequencies. Exceptional results in the vehicle engine emission measurements were obtained. The temperature of the exhaust gas can be decreased by 3 °C. PM10 micro substances can be reduced, and the percentage of CO can be decreased by more than 55% at 2500 RPM by the proposed system. The values of CO, HC, CO2 and NOx all decreased when the atomizers were used with water.
Keywords: atomizer, CO, HC, NOx, PM2.5
Procedia PDF Downloads 457
1770 Water Leakage Detection System of Pipe Line using Radial Basis Function Neural Network
Authors: A. Ejah Umraeni Salam, M. Tola, M. Selintung, F. Maricar
Abstract:
Clean water is an essential and fundamental human need. Therefore, its supply must be assured by maintaining water quality, quantity and pressure. In practice, however, leakage occurs in distribution systems and has become a common issue worldwide. One of the technical causes of leakage is a leaking pipe. The purpose of the research is to use a Radial Basis Function Neural Network (RBFNN) model to detect the location and magnitude of pipeline leakage rapidly and efficiently. In this study, the RBFNN is trained and tested on data from the EPANET hydraulic modeling system. The Radial Basis Function Neural Network method is shown to be capable of detecting the location and magnitude of pipeline leakage; the accuracy of the predictions, measured by the RMSE (Root Mean Square Error) between predicted and actual measurements, approaches 0.000049 for the whole pipeline system.
Keywords: radial basis function neural network, leakage pipeline, EPANET, RMSE
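A minimal sketch of an RBF network regressor of the kind described is given below: Gaussian units centred by k-means, linear output weights fitted by least squares, and accuracy reported as RMSE. The synthetic pressure/flow features and leak-magnitude target are placeholders for the EPANET-generated training data.
```python
# Minimal sketch of an RBF network regressor evaluated by RMSE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 10))                    # stand-in for nodal pressures / flows
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=600)  # stand-in leak magnitude

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_tr).cluster_centers_
sigma = np.mean(np.linalg.norm(X_tr[:, None] - centers[None], axis=2))

def rbf_design(X):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian basis activations

w, *_ = np.linalg.lstsq(rbf_design(X_tr), y_tr, rcond=None)  # linear output weights
rmse = np.sqrt(np.mean((rbf_design(X_te) @ w - y_te) ** 2))
print(f"test RMSE: {rmse:.4f}")
```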
Procedia PDF Downloads 358
1769 Evaluation Synthesis of Private Sector Engagement in International Development
Authors: Valerie Habbel, Magdalena Orth, Johanna Richter, Steffen Schimko
Abstract:
Cooperation between development actors and the private sector is becoming increasingly important, as it is expected to mobilize additional resources to achieve the Sustainable Development Goals (SDGs), among other things. However, whether the goals of cooperation are achieved has so far only been explored in evaluations and studies of individual projects and instruments. The evaluation synthesis attempts to close this gap by systematically analyzing existing evidence (evaluations and academic studies) from national and international development cooperation on private sector engagement. Overall, the evaluations and studies considered report mainly positive effects on investors and donors, intermediaries, partner countries, and target groups. However, various analyses, including on the quality of the evaluations, point to a positive bias in the results. The evaluation synthesis makes recommendations on the definition of indicators, the measurement and evaluation of impacts and additionality, knowledge management, and the consideration of transaction costs in cooperation with private actors.Keywords: evaluation synthesis, private sector engagement, international development, sustainable development
Procedia PDF Downloads 210
1768 Delisting Wave: Corporate Financial Distress, Institutional Investors Perception and Performance of South African Listed Firms
Authors: Adebiyi Sunday Adeyanju, Kola Benson Ajeigbe, Fortune Ganda
Abstract:
In the past three decades, there has been a notable increase in the number of firms delisting from the Johannesburg Stock Exchange (JSE) in South Africa. The recent increasing rate of delisting waves of corporate listed firms motivated this study. This study aims to explore the influence of institutional investor perceptions on the financial distress experienced by delisted firms within the South African market. The study further examined the impact of financial distress on the corporate performance of delisted firms. Using the data of delisted firms spanning from 2000 to 2023 and the FGLS (Feasible Generalized Least Squares) for the short run and PCSE (Panel-Corrected Standard Errors) for the long run effects of the relationship. The finding indicated that a decline in institutional investors’ perceptions was associated with the corporate financial distress of the delisted firms, particularly during the delisting year and the few years preceding the announcement of the delisting. This study addressed the importance of investor recognition in corporate financial distress and the delisting wave among listed firms- a finding supporting the stakeholder theory. This study is an insight for companies’ managements, investors, governments, policymakers, stockbrokers, lending institutions, bankers, the stock market, and other stakeholders in their various decision-making endeavours. Based on the above findings, it was recommended that corporate managements should improve their governance strategies that can help companies’ financial performances. Accountability and transparency through governance must also be improved upon with government support through the introduction of policies and strategies and enabling an easy environment that can help companies perform better.Keywords: delisting wave, institutional investors, financial distress, corporate performance, investors’ perceptions
Procedia PDF Downloads 45
1767 Generation of Automated Alarms for Plantwide Process Monitoring
Authors: Hyun-Woo Cho
Abstract:
Early detection of incipient abnormal operations is necessary for plant-wide process management in order to improve product quality and process safety. Generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and utilized in this area, such as expert systems, mathematical model-based approaches, and multivariate statistical approaches. This work presents a nonlinear empirical monitoring methodology based on the real-time analysis of massive process data. Unfortunately, such big data include measurement noise and unwanted variations unrelated to true process behavior. Thus, the elimination of such unnecessary patterns in the data is performed in a data processing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that detection speed and performance were improved significantly, irrespective of the size and location of the abnormal events.
Keywords: detection, monitoring, process data, noise
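The abstract does not spell out the empirical model, so the sketch below shows one common data-driven route to the same goal: remove noisy, low-variance directions with PCA and raise an alarm when a sample's squared prediction error exceeds a limit set on normal operating data. It is a generic stand-in, not the paper's specific nonlinear method.
```python
# Minimal sketch of PCA-based denoising plus an SPE alarm threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 12))               # reference (normal) operation
test = rng.normal(size=(50, 12))
test[25:] += 3.0                                  # inject an abnormal event

pca = PCA(n_components=4).fit(normal)             # retained process structure

def spe(X):
    recon = pca.inverse_transform(pca.transform(X))
    return np.sum((X - recon) ** 2, axis=1)       # residual (noise) energy

threshold = np.percentile(spe(normal), 99)        # empirical 99% control limit
alarms = spe(test) > threshold
print("alarmed samples:", np.flatnonzero(alarms))
```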
Procedia PDF Downloads 252
1766 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective
Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter
Abstract:
With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms to create brand associations, the changing media landscape and shift of message control also mean that sports sponsors will have to take into account the nature of and consumer responses toward these emerging digital media to devise effective marketing strategies. Utilizing the personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academics, sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to micro-targeting one on one, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged in. The study observed an imbalanced strategic leveraging of technology and digital infrastructure. While sports leagues have had less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies like mobile/LBS tools, big data/user info, real-time marketing and programmatic, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology. That is, how sponsors effectively integrate social media/mobile/second screen into their existing legacy media sponsorship plan so technology works for the experience/message instead of distracting fans. Additionally, technological advancement and attention economy amplify the importance of consumer data gathering, but sports consumer data does not mean loyalty or engagement. This study also affirms the benefit of digital media as they offer viral and pre-event activations through storytelling way before the actual event, which is critical for leveraging brand association before and after. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands for longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today’s sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. 
The big challenge for sponsors is to play to the venues/media’s specificity and its fit with the target audience and not uniformly deliver the same message in the same format on different platforms/channels.Keywords: digital media, mobile media, social media, technology, sports sponsorship
Procedia PDF Downloads 294
1765 Measurement of Thermal Protrusion Profile in Magnetic Recording Heads via Wyko Interferometry
Authors: Joseph Christopher R. Ragasa, Paolo Gabriel P. Casas, Nemesio S. Mangila, Maria Emma C. Villamin, Myra G. Bungag
Abstract:
A procedure in measuring the thermal protrusion profiles of magnetic recording heads was developed using a Wyko HD-8100 optical interference-based instrument. The protrusions in the heads were made by the application of a constant power through the thermal flying height controller pads. It was found that the thermally-induced bubble is confined to form in the same head locations, primarily in the reader and writer regions, regardless of the direction of approach of temperature. An application of power to the thermal flying height control pads ranging from 0 to 50 milliWatts showed that the protrusions demonstrate a linear dependence with the supplied power. The efficiencies calculated using this method were compared to that obtained through Guzik and found to be 19.57% greater due to the static testing environment used in the testing.Keywords: thermal protrusion profile, magnetic recording heads, wyko interferometry, thermal flying height control
Procedia PDF Downloads 469
1764 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia
Authors: The Danh Phan
Abstract:
House price forecasting is a main topic in the real estate market research. Effective house price prediction models could not only allow home buyers and real estate agents to make better data-driven decisions but may also be beneficial for the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models which could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between the house price in the most expensive suburbs and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of Stepwise and Support Vector Machine (SVM), based on the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise
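As a rough illustration of the stepwise-plus-SVM combination, the sketch below uses scikit-learn's sequential feature selector as a stand-in for stepwise selection ahead of an RBF-kernel SVR, with MSE as the evaluation measure; the synthetic regression data is only a placeholder for the Melbourne sale records.
```python
# Minimal sketch: sequential (stepwise-style) feature selection followed by SVR.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=400, n_features=15, n_informative=6, noise=10,
                       random_state=0)           # placeholder for sale records

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svr = SVR(kernel="rbf", C=100.0)
model = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(svr, n_features_to_select=6, direction="forward"),
    svr,
)
model.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```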
Procedia PDF Downloads 231
1763 Bank, Stock Market Efficiency and Economic Growth: Lessons for ASEAN-5
Authors: Tan Swee Liang
Abstract:
This paper estimates bank and stock market efficiency associations with real per capita GDP growth by examining panel-data across three different regions using Panel-Corrected Standard Errors (PCSE) regression developed by Beck and Katz (1995). Data from five economies in ASEAN (Singapore, Malaysia, Thailand, Philippines, and Indonesia), five economies in Asia (Japan, China, Hong Kong SAR, South Korea, and India) and seven economies in OECD (Australia, Canada, Denmark, Norway, Sweden, United Kingdom U.K., and United States U.S.), between 1990 and 2017 are used. Empirical findings suggest one, for Asia-5 high bank net interest margin means greater bank profitability, hence spurring economic growth. Two, for OECD-7 low bank overhead costs (as a share of total assets) may reflect weak competition and weak investment in providing superior banking services, hence dampening economic growth. Three, stock market turnover ratio has negative association with OECD-7 economic growth, but a positive association with Asia-5, which suggest the relationship between liquidity and growth is ambiguous. Lastly, for ASEAN-5 high bank overhead costs (as a share of total assets) may suggest expenses have not been channelled efficiently to income generating activities. One practical implication of the findings is that policy makers should take necessary measures toward financial liberalisation policies that boost growth through the efficiency channel, so that funds are efficiently allocated through the financial system between financial and real sectors.Keywords: financial development, banking system, capital markets, economic growth
Procedia PDF Downloads 138
1762 Fault Detection and Isolation of a Three-Tank System using Analytical Temporal Redundancy, Parity Space/Relation Based Residual Generation
Authors: A. T. Kuda, J. J. Dayya, A. Jimoh
Abstract:
This paper investigates a fault detection and isolation technique for measurement data sets from a three-tank system using analytical model-based temporal redundancy, which relies on residual generation using the parity equations/parity space approach. It further briefly outlines other approaches to model-based residual generation. The basic idea of parity space residual generation in temporal redundancy is the dynamic relationship between sensor outputs and actuator inputs (the input-output model). These residuals were then used to detect whether or not the system is faulty and to indicate the location of the fault when it is. The method obtains good results in detecting and isolating faults from the considered measurement data sets generated from the system.
Keywords: fault detection, fault isolation, disturbing influences, system failure, parity equation/relation, structured parity equations
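A minimal numerical sketch of parity-space residual generation is given below for a small discrete-time linear model: the parity matrix is taken from the left null space of the extended observability matrix, so the residual vanishes for fault-free input-output data and deviates when a fault enters. The A, B, C matrices are illustrative placeholders, not the paper's identified three-tank model.
```python
# Minimal sketch of parity-space residual generation for
# x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k).
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.9, 0.05, 0.0],
              [0.05, 0.9, 0.05],
              [0.0, 0.05, 0.9]])
B = np.array([[0.1], [0.0], [0.0]])
C = np.eye(3)
D = np.zeros((3, 1))
s = 2                                             # parity window length

# Extended observability matrix O and Toeplitz matrix H of Markov parameters.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
H = np.zeros(((s + 1) * 3, (s + 1) * 1))
for i in range(s + 1):
    for j in range(i + 1):
        blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
        H[i * 3:(i + 1) * 3, j:(j + 1)] = blk

W = null_space(O.T).T                             # rows satisfy W @ O = 0

def residual(y_window, u_window):
    """Parity residual from stacked outputs/inputs over the window (oldest first)."""
    return W @ (y_window.reshape(-1) - H @ u_window.reshape(-1))

# Fault-free data gives (numerically) zero residuals; a sensor bias would not.
x = np.zeros(3); ys, us = [], []
for k in range(s + 1):
    u = np.array([1.0]); ys.append(C @ x); us.append(u); x = A @ x + B @ u
print(np.round(residual(np.array(ys), np.array(us)), 10))
```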
Procedia PDF Downloads 302
1761 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
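For reference, the Fenton-Wilkinson step itself is compact: the sum of independent log-normal leg times is approximated by a single log-normal whose first two moments match the exact ones, and the resulting CDF (scaled by the estimated number of teams) gives the sigmoidal place curve. The sketch below uses illustrative leg-time parameters, not values fitted to the Jukola 2019 data.
```python
# Minimal sketch of the Fenton-Wilkinson approximation of a sum of log-normals.
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Log-normal (mu_Z, sigma_Z) approximating the sum of independent
    log-normals with parameters (mu[i], sigma[i])."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.sum(np.exp(mu + sigma ** 2 / 2))                             # mean of the sum
    v = np.sum((np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2))  # variance of the sum
    sigma_z2 = np.log(1 + v / m ** 2)
    mu_z = np.log(m) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Example: changeover time after three legs with hypothetical leg-time parameters.
mu_z, sigma_z = fenton_wilkinson([8.0, 8.2, 8.1], [0.15, 0.20, 0.18])

rng = np.random.default_rng(0)
sims = np.sum(np.exp(rng.normal([8.0, 8.2, 8.1], [0.15, 0.20, 0.18],
                                size=(100_000, 3))), axis=1)
print(np.mean(sims), np.exp(mu_z + sigma_z ** 2 / 2))  # simulated and approximated means agree
```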
Procedia PDF Downloads 124
1760 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
In this paper, we show mathematically the basis of a new trend of algorithm that treats a historical reason for continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. For a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are given, we first introduce the detailed algorithm for deriving the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why, often, such narrow thinking becomes associated with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and familiar discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization
Procedia PDF Downloads 143
1759 Evaluation of the Grammar Questions at the Undergraduate Level
Authors: Preeti Gacche
Abstract:
A considerable part of undergraduate-level English examination papers is devoted to grammar. Hence the grammar questions in the question papers are evaluated, and the opinions of both students and teachers about them are obtained and analyzed. A grammar test of 100 marks is administered to 43 students to check their performance. The question papers have been evaluated by 10 different teachers and their scores compared. The analysis of 38 university question papers reveals that, on average, 20 percent of the marks are allotted to grammar. Almost all the grammar topics are tested. Abundant use of grammatical terminology is observed in the questions. Decontextualization, repetition, the possibility of multiple correct answers and grammatical errors in framing the questions have been observed. Opinions of teachers and students about grammar questions vary in many respects. The students' responses are analyzed medium-wise and sex-wise. The medium of instruction at the school level and the sex of the students are found to play no role as far as interest in the study of grammar is concerned. English-medium students solve grammar questions intuitively, whereas non-English-medium students are required to recollect the rules of grammar. Prepositions, verbs, articles and modal auxiliaries are found to be easy topics for most students, whereas the use of conjunctions is the most difficult topic. Out-of-context grammar items are more difficult to answer than contextualized items. Hence, contextualized texts for testing grammar items are desirable. No formal training in setting questions is imparted to teachers by competent authorities such as the university. They need to be trained in testing. Statistically, there is no significant change in score with a change of rater in the testing of grammar items. There is scope for future improvement. The question papers need to be evaluated and feedback needs to be obtained from students and teachers for future improvement.
Keywords: context, evaluation, grammar, tests
Procedia PDF Downloads 353
1758 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context
Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire
Abstract:
The goal of this study was to test whether and how emotions influence drone operators' estimation skills. The empirical study was run in the context of numerical estimation. Participants saw a two-digit number together with a collection of cars. They had to indicate whether the stimulus collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were smaller collections (including fewer than 20 cars), and the other collections were larger collections (i.e., more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or larger (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than the number indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle) selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say if the second picture was the same as or different from the first. Results showed different effects of emotions on RTs and percent errors. Participants' performance was modulated by emotions. They were slower on negative trials compared to the neutral trials, especially on the most difficult items. They made more errors on small-split than on large-split problems. Moreover, participants highly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimulus characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.
Keywords: drone operators, emotion, numerical estimation, arithmetic
Procedia PDF Downloads 116
1757 Mobile Microscope for the Detection of Pathogenic Cells Using Image Processing
Authors: P. S. Surya Meghana, K. Lingeshwaran, C. Kannan, V. Raghavendran, C. Priya
Abstract:
One of the most basic and powerful tools in all of science and medicine is the light microscope, the fundamental device for laboratory as well as research purposes. With improving technology, portable, economical and user-friendly instruments are in high demand. The conventional microscope fails to live up to this emerging trend. Also, adequate access to healthcare is not widely available, especially in developing countries. The most basic step towards curing a malady is the diagnosis of the disease itself. The main aim of this paper is to diagnose malaria with the most common device, the cell phone, which has proved to be an immediate solution for many modern-day needs as wireless infrastructure allows computing and communicating on the move. This has opened up the opportunity to develop novel imaging, sensing, and diagnostic platforms using mobile phones as the underlying platform, addressing the global demand for accurate, sensitive, cost-effective, and field-portable measurement devices for use in remote and resource-limited settings around the world.
Keywords: cellular, hand-held, health care, image processing, malarial parasites, microscope
Procedia PDF Downloads 267
1756 Simulation and Experimental Study on Tensile Force Measurement of PS Tendons Using an Embedded EM Sensor
Authors: ByoungJoon Yu, Junkyeong Kim, Seunghee Park
Abstract:
Tensile force estimation of PS tendons is in great demand for monitoring the structural health condition of PSC girder bridges. Measuring the tensile force of the PS tendons inside a PSC girder using conventional methods is difficult due to their location. In this paper, embedded EM sensor-based tensile force estimation of PS tendons was carried out by measuring the permeability of the PS tendons in the PSC girder. The permeability changes with the induced tensile force through the magneto-elastic effect, which in turn changes the gradient of the B-H curve. An experiment was performed to obtain the signals from the EM sensor using three down-scaled PSC girder models. The permeability of the PS tendons decreased proportionally as the tensile force increased. To verify the experimental results, a simulation of tensile force estimation will be conducted in a further study. Consequently, it is expected that both the experimental and simulation results will increase the accuracy of tensile force estimation, which could then become one of the solutions for evaluating the performance of PSC girders.
Keywords: tensile force estimation, embedded EM sensor, PSC girder, EM sensor simulation, cross section loss
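The estimation step itself amounts to a calibration curve, sketched below: fit a linear relationship between the measured B-H-gradient (permeability-related) quantity and the applied tensile force, then invert it for new readings. The numbers are illustrative, not data from the down-scaled girder experiments.
```python
# Minimal sketch of a linear calibration between B-H gradient and tensile force.
import numpy as np

force_kN = np.array([0, 50, 100, 150, 200, 250])               # applied tension (assumed)
bh_gradient = np.array([1.00, 0.97, 0.94, 0.91, 0.88, 0.85])   # relative gradient (assumed)

slope, intercept = np.polyfit(force_kN, bh_gradient, 1)        # calibration line

def estimate_force(measured_gradient):
    """Invert the linear calibration to estimate tensile force."""
    return (measured_gradient - intercept) / slope

print(f"estimated force at gradient 0.90: {estimate_force(0.90):.1f} kN")
```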
Procedia PDF Downloads 479
1755 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers
Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec
Abstract:
Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention mainly due to the supercontinuum generation possibilities and subsequent utilizations for ultra-wideband applications like e.g. absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be easily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser based for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. In the first part of the paper the thulium laser is discussed. The thulium laser is based on a gain-switched seed-laser and a series of amplification stages for obtaining output peak powers in the order of kilowatts for pulses shorter than 200 ps in full-width at half-maximum. The pulsed thulium laser is first studied in a simulation software, focusing on seed-laser properties. Afterward, a pre-amplification thulium-based stage is discussed, with the focus of low-noise signal amplification, high signal gain and eliminating pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage a second gain stage is evaluated with incorporating a thulium-fiber of shorter length with increased rare-earth dopant ratio. Last a power-booster stage is analyzed, where the peak power of kilowatts should be achieved. Examples of analytical study are further validated by the experimental campaign. The simulation model is further corrected based on real components – parameters such as real insertion-losses, cross-talks, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena, their specific features at the vicinity of 2000 nm, compared to e.g. 1550 nm, and presents supercontinuum modelling, based on the thulium laser pulsed output. Supercontinuum generation simulation is performed and provides reasonably accurate results, once fiber dispersion profile is precisely defined and fiber nonlinearity is known, furthermore input pulse shape and peak power must be known, which is assured thanks to the experimental measurement of the studied thulium pulsed laser. The supercontinuum simulation model is put in relation to designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as dispersion profile and nonlinearity of exploited fibers were characterized either with an accurate model, developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a thulium pulsed laser system working with specialty optical fibers.Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser
Procedia PDF Downloads 321
1754 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy
Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini
Abstract:
Particle therapy (PT) is a very modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that cannot be treated with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with the simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account when defining the treatment plan. Although the largest fraction of the dose is released in the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatments, their incidence directly impacts the quality of life of cancer survivors, in particular paediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict the normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux, nor of its energy and angular distributions, is available: an accurate characterization is needed in order to improve TPS and reduce safety margins. The project MONDO (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of that secondary neutron component. The detector, based on the tracking of recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in x-y oriented layers. The final size of the object is 10 x 10 x 20 cm3 (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD Array Sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector, as well as the SBAM sensor, is under development, and it is expected to be fully constructed by the end of the year. MONDO will carry out data taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much more precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering
Procedia PDF Downloads 223
1753 Municipal-Level Gender Norms: Measurement and Effects on Women in Politics
Authors: Luisa Carrer, Lorenzo De Masi
Abstract:
In this paper, we exploit the massive amount of information from Facebook to build a measure of gender attitudes in Italy at a previously impossible resolution: the municipal level. We construct our index via a machine learning method to replicate a benchmark region-level measure. Interestingly, we find that most of the variation in our Gender Norms Index (GNI) is across towns within narrowly defined geographical areas rather than across regions or provinces. In a second step, we show how this local variation in norms can be leveraged for identification purposes. In particular, we use our index to investigate whether these differences in norms carry over to the policy activity of politicians elected to the Italian Parliament. We document that female legislators are more likely to sit on parliamentary committees focused on gender-sensitive matters, labor, and social issues, but not if they come from a relatively conservative town. These effects are robust to conditioning on legislative term and electoral district, suggesting the importance of social norms in shaping legislators' policy activity.
Keywords: gender equality, gender norms index, Facebook, machine learning, politics
Procedia PDF Downloads 78
1752 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children
Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh
Abstract:
Autism Spectrum Disorder (ASD) is a condition related to brain development that affects how a person perceives and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnosis and management of the condition. Therefore, it is crucial to develop a method that achieves good results with high accuracy for the detection of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Feature selection was then used to reduce the number of attributes from the ASD datasets while preserving model performance. As a result, we found that the best performance was provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine
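The screening pipeline described above can be sketched compactly: a feature-selection step followed by an SVM, with accuracy as the reported metric. The synthetic two-class data below is only a placeholder for the toddler and children questionnaire datasets.
```python
# Minimal sketch: feature selection followed by an SVM classifier, scored by accuracy.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=700, n_features=20, n_informative=10,
                           random_state=42)      # stand-in for screening records

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=10),  # keep the most informative items
                    SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```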
Procedia PDF Downloads 152