Search results for: measurement model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18436

18226 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the instrument 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. A detailed schematic view of the instrument is enriched with pictures in ground plan and side view. Horizontal stability of the instrument is achieved by mooring to two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The combination of horizontal stability and vertical wave peak monitoring allows a representative database for wave pressure measurement to be created. The instrument 'Quattuor 45' is named after the way the database is acquired: the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, and detailed processing is then carried out in MS Excel. The wave pressure result is expressed in the measurement unit kN/m². This paper also suggests a graphical presentation of the results as a multi-line graph: the wave pressure is presented on the left vertical axis, the wind speed on the right vertical axis, and the time of measurement on the horizontal axis. The paper proposes an algorithm for wind speed measurements and shows the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first is a northern wind that reaches high speeds, causing low and extremely steep waves where the wave pressure is relatively weak. The southern wind 'Jugo', on the other hand, has a lower speed, but due to its long duration and constant speed it causes extremely long and high waves that produce extremely high wave pressure.
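
As a rough illustration of the data-processing step described above (the paper carries it out in MS Excel), a Python sketch of converting the logged load-cell and anemometer readings into wave pressure in kN/m² and plotting the suggested dual-axis graph is given below; the column names, plate area, and synthetic data are assumptions for illustration, not values from the paper.

```python
# Hypothetical post-processing of 'Quattuor 45' logs (the paper does this step in MS Excel).
# Column names, the sensed plate area, and the synthetic data are assumptions; on the real
# system the rows would be read from the SD-card CSV written by the Arduino each second.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

SENSED_AREA_M2 = 0.2025   # assumed 0.45 m x 0.45 m plate facing the waves

t = np.arange(0, 120, 1.0)                       # one sample per second, as logged
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time_s": t,
    "cell1_N": 50 + 30 * np.sin(0.5 * t) + rng.normal(0, 2, t.size),
    "cell2_N": 48 + 28 * np.sin(0.5 * t) + rng.normal(0, 2, t.size),
    "cell3_N": 52 + 31 * np.sin(0.5 * t) + rng.normal(0, 2, t.size),
    "cell4_N": 49 + 29 * np.sin(0.5 * t) + rng.normal(0, 2, t.size),
    "wind_ms": 8 + 2 * np.sin(0.05 * t),
})

total_force_N = df[["cell1_N", "cell2_N", "cell3_N", "cell4_N"]].sum(axis=1)
df["wave_pressure_kN_m2"] = total_force_N / SENSED_AREA_M2 / 1000.0   # N/m^2 -> kN/m^2

fig, ax_left = plt.subplots()
ax_left.plot(df["time_s"], df["wave_pressure_kN_m2"])
ax_left.set_xlabel("time of measurement [s]")
ax_left.set_ylabel("wave pressure [kN/m$^2$]")
ax_right = ax_left.twinx()                        # wind speed on the right vertical axis
ax_right.plot(df["time_s"], df["wind_ms"], color="gray")
ax_right.set_ylabel("wind speed [m/s]")
plt.show()
```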

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 185
18225 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 123
18224 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and these decisions seem to resolve tradeoffs between time and accuracy. When making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called 'quantum decision-making', has recently attracted much attention. However, decision-making from small samples has not been examined much so far. In this paper, we extend the quantum decision-making model to decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
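
As a loose numerical illustration of how amplitude amplification lets a decision probability grow after very few rounds, the textbook Grover relation sin²((2k+1)·arcsin√p) can be sketched in Python. This is only the standard relation, not the authors' value-based update rule or their analytical lower bound.

```python
# Textbook Grover-style amplitude amplification, used only as a numerical illustration
# of how few rounds are needed to boost a decision probability. It is NOT the authors'
# value-based probability amplitude amplification model.
import math

def amplified_probability(p_initial: float, rounds: int) -> float:
    """Success probability after `rounds` amplification iterations."""
    theta = math.asin(math.sqrt(p_initial))
    return math.sin((2 * rounds + 1) * theta) ** 2

def rounds_needed(p_initial: float, p_target: float = 0.95, max_rounds: int = 1000) -> int:
    """Smallest number of rounds whose amplified success probability reaches p_target."""
    for k in range(max_rounds + 1):
        if amplified_probability(p_initial, k) >= p_target:
            return k
    raise ValueError("target probability not reached within max_rounds")

for p0 in (0.05, 0.10, 0.25):
    k = rounds_needed(p0)
    print(f"p0 = {p0:.2f}: {k} rounds -> P = {amplified_probability(p0, k):.3f}")
```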

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 60
18223 Finite Element Method Analysis of Occluded-Ear Simulator and Natural Human Ear Canal

Authors: M. Sasajima, T. Yamaguchi, Y. Hu, Y. Koike

Abstract:

In this paper, we discuss the propagation of sound in the narrow pathways of an occluded-ear simulator typically used for the measurement of insert-type earphones. The simulator has a standardized frequency response conforming to the international standard (IEC60318-4). In narrow pathways, the speed and phase of sound waves are modified by viscous air damping. In our previous paper, we proposed a new finite element method (FEM) to consider the effects of air viscosity in this type of audio equipment. In this study, we will compare the results from the ear simulator FEM model, and those from a three-dimensional human ear canal FEM model made from computed tomography images, with the measured frequency response data from the ear canals of 18 people.

Keywords: ear simulator, FEM, viscosity, human ear canal

Procedia PDF Downloads 391
18222 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam

Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen

Abstract:

In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were analyzed with three machine learning algorithms: Gaussian processes, multi-layer perceptron neural networks, and the M5-Rules algorithm, for modelling and forecasting the horizontal displacement of the dam. The database used in this research was built by collecting time series data from 2006 to 2021 and was divided into two parts: a training dataset and a validation dataset. The final results show that all three algorithms have high performance for both training and model validation, but the multi-layer perceptron network is the best model. Their usability is further investigated by comparison with a benchmark model created by multiple linear regression. The results show that the performance obtained from the GP model, the MLP model, and the M5-Rules model is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
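
A hedged sketch of the model comparison described above is given below using scikit-learn; the feature names and data are synthetic stand-ins for the displacement series, and a decision-tree regressor replaces M5-Rules, which has no scikit-learn implementation.

```python
# Illustrative comparison of the model families named in the abstract on a synthetic
# displacement series. Features/target are invented; a tree regressor stands in for M5-Rules.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
# synthetic stand-in for (reservoir level, air temperature, time index) -> displacement
X = rng.normal(size=(600, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * np.sin(X[:, 2]) + rng.normal(0, 0.1, 600)
X_train, X_val, y_train, y_val = X[:480], X[480:], y[:480], y[480:]

models = {
    "Gaussian process": GaussianProcessRegressor(),
    "MLP": MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
    "tree (M5-Rules stand-in)": DecisionTreeRegressor(max_depth=6, random_state=0),
    "multi-linear benchmark": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_val)
    rmse = mean_squared_error(y_val, pred) ** 0.5
    print(f"{name:26s} RMSE = {rmse:.3f}  R2 = {r2_score(y_val, pred):.3f}")
```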

Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks

Procedia PDF Downloads 186
18221 Research of Intrinsic Emittance of Thermal Cathode with Emission Nonuniformity

Authors: Yufei Peng, Zhen Qin, Jianbe Li, Jidong Long

Abstract:

The thermal cathode is widely used in accelerators, FELs, and various kinds of vacuum electronics. However, emission nonuniformity exists due to surface profile, material distribution, temperature variation, crystal orientation, etc., which causes intrinsic emittance growth, brightness decline, envelope size augmentation, and device performance deterioration or even failure. To understand how emittance is affected by emission nonuniformity, an intrinsic emittance model consisting of contributions from macro and micro surface nonuniformity is developed analytically, based on the general thermal emission model in the temperature-limited regime, for a real 3 mm cathode. The model shows that the relative emittance increases by about 50% due to temperature variation and by less than 5% due to several kinds of micro surface nonuniformity, which is much smaller than reported in other research. In addition, we calculated the emittance growth by combining the Monte Carlo method with PIC simulation; experiments on emission uniformity and emittance measurement will be carried out separately.
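
As a point of reference (standard cathode physics, not quoted from the abstract), the temperature-limited intrinsic emittance of a uniformly emitting circular cathode of radius $r_c$ and temperature $T_c$ is

$$\varepsilon_{n,\mathrm{rms}} = \frac{r_c}{2}\sqrt{\frac{k_B T_c}{m_e c^2}},$$

so for the 3 mm cathode considered here $r_c = 1.5$ mm, and the macro and micro nonuniformity terms of the paper's model enter as corrections to this baseline value.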

Keywords: thermal cathode, electron emission fluctuation, intrinsic emittance, surface nonuniformity, cathode lifetime

Procedia PDF Downloads 279
18220 Solar Energy Applications in Seawater Distillation

Authors: Yousef Abdulaziz Almolhem

Abstract:

Geographically, most Arab countries are located in arid or semiarid regions. For this reason, most of these countries have adopted seawater desalination as a strategy to overcome water scarcity. For example, the water supply of the UAE, Kuwait, and Saudi Arabia comes almost entirely from seawater desalination plants. Many areas in Saudi Arabia and other countries in the world suffer from a lack of fresh water, which hinders their development, despite the availability of saline water and high solar radiation intensity. Furthermore, most developing countries do not have sufficient meteorological data to evaluate whether the solar radiation is sufficient for solar desalination. A mathematical model was developed to simulate and predict the thermal behavior of a solar still that uses direct solar energy for the distillation of seawater. Measurements were carried out in the Environment and Natural Resources Department, Faculty of Agricultural and Food Sciences, King Faisal University, Saudi Arabia, in order to evaluate the present model. The simulation results obtained from this model were compared with the measured data. The main results of this research showed that there are slight differences between the measured and predicted values of the elements studied, which result from changes in some factors considered constant in the model, such as sky clearness, wind velocity, and the salt concentration of the water in the basin of the solar still. It can be concluded that the present model can be used to estimate the average total solar radiation and the thermal behavior of the solar still in any area, taking the geographical location into consideration.

Keywords: mathematical model, sea water, distillation, solar radiation

Procedia PDF Downloads 268
18219 Training Can Increase Teachers' Knowledge and Skill in Measurement and Assessment of Children's Nutritional Status

Authors: Herawati Tri Siswati, Nurhidayat Ana Sıdık Fatimah

Abstract:

The Indonesia Basic Health Research 2013 showed that among children 6–12 years old the prevalence of stunting was 35.6%, wasting 12.2%, and obesity 9.2%. The Indonesian Government has a School Health Program that is held in a coordinated, planned, directed, and responsible manner to develop and implement student health. However, its implementation is still below expectations, while the Indonesian Ministry of Health has initiated an acceleration of the School Health Program. This study aimed to determine the influence of training on elementary school teachers' knowledge and skill in the measurement and assessment of children's nutritional status. The research is quasi-experimental with a pre-post design, conducted in Sleman district, Yogyakarta province, Indonesia, in 2015. The subjects were all elementary school teachers responsible for the School Health Program in Gamping sub-district, Sleman, Yogyakarta, i.e., 32 persons. The independent variable is training, while the dependent variables are teachers' knowledge and skill in the measurement and assessment of children's nutritional status. The data were analyzed by t-test. The results showed that the knowledge score before training was 31.6±9.7 and after training 56.4±12.6, an increase of 24.8±15.7 (p=0.00). The skill score before training was 46.6±11.1 and after training 61.7±13.0, an increase of 15.2±14.2 (p=0.00). Training can increase teachers' knowledge and skill in the measurement and assessment of nutritional status.
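
The pre/post comparison described above can be sketched as a paired t-test in Python; the score arrays below are placeholders, since the abstract reports only group means and standard deviations.

```python
# Minimal paired (pre/post) t-test mirroring the analysis described in the abstract.
# The scores are placeholders; the study reports only means and standard deviations.
import numpy as np
from scipy import stats

pre_knowledge = np.array([30, 25, 40, 28, 35, 31, 22, 38])    # placeholder scores
post_knowledge = np.array([55, 48, 70, 52, 61, 58, 44, 66])

t_stat, p_value = stats.ttest_rel(post_knowledge, pre_knowledge)
gain = post_knowledge - pre_knowledge
print(f"mean gain = {gain.mean():.1f} ± {gain.std(ddof=1):.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```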

Keywords: training, school health program, nutritional status, children

Procedia PDF Downloads 373
18218 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plot between HDI and the influencing factors shows that the plot does not follow a specific pattern or form. Therefore, the HDI data in Indonesia can be modeled with a nonparametric regression model. The estimation of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One of the nonparametric regression methods is the truncated spline. Truncated spline regression is a nonparametric approach that is a modification of segmented polynomial functions. The estimator of a truncated spline regression model is affected by the selection of the optimal knot points, which are the focal points of the truncated spline functions. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, the Human Development Index data were fitted with a truncated spline nonparametric regression model. The best truncated spline regression model for the HDI data in Indonesia was obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors with a significant effect on the HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model is good enough to be applied to the HDI data in Indonesia.
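
A minimal sketch of a truncated spline fit with knot selection by GCV is given below on synthetic data (linear truncated power basis, a single knot); the Indonesian HDI data and the reported 5-5-5-4 knot combination are not reproduced.

```python
# Sketch of a linear truncated spline fit with knot selection by generalized cross
# validation (GCV); the data are synthetic, not the Indonesian HDI dataset.
import numpy as np

def truncated_basis(x, knots):
    """Design matrix [1, x, (x-k1)_+, (x-k2)_+, ...] of a linear truncated spline."""
    cols = [np.ones_like(x), x] + [np.clip(x - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X)               # hat matrix
    resid = y - H @ y
    n, trace = len(y), np.trace(H)
    return (np.sum(resid ** 2) / n) / (1 - trace / n) ** 2

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 120))
y = np.piecewise(x, [x < 4, x >= 4], [lambda t: 0.5 * t, lambda t: 2 + 1.5 * (t - 4)])
y += rng.normal(0, 0.3, x.size)

candidate_knots = [(a,) for a in np.linspace(1, 9, 17)]   # one-knot candidate models
best = min(candidate_knots, key=lambda k: gcv(x, y, k))
print("optimal knot by GCV:", best, "GCV =", round(gcv(x, y, best), 4))
```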

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 318
18217 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material – that is to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a standard deviation technique by classes, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the characteristics of the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity of segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance considers significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, thus suggesting that the model provides a high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater efforts for data handling. All in all, the method developed is a time optimizer with a high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
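
The post-segmentation measurement step described above (object delimitation followed by extraction of position, area, perimeter, and lateral measures) can be sketched with scikit-image as follows, assuming a binary mask from the network is already available; the U-net itself is not reproduced here.

```python
# Post-segmentation crystal measurement, as described after the U-net step.
# Assumes a binary segmentation mask is already available; the network is not shown.
import numpy as np
from skimage import measure

mask = np.zeros((256, 256), dtype=bool)          # placeholder mask with two "crystals"
mask[40:90, 50:120] = True
mask[150:210, 140:200] = True

labels = measure.label(mask)                     # object delimitation (connected components)
records = []
for region in measure.regionprops(labels):
    records.append({
        "centroid": region.centroid,
        "area_px": region.area,
        "perimeter_px": region.perimeter,
        "major_axis_px": region.major_axis_length,
        "minor_axis_px": region.minor_axis_length,
    })

for r in records:                                # database of crystal dimensions
    print(r)
```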

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 149
18216 Fault Detection and Isolation of a Three-Tank System Using Analytical Temporal Redundancy, Parity Space/Relation Based Residual Generation

Authors: A. T. Kuda, J. J. Dayya, A. Jimoh

Abstract:

This paper investigates a fault detection and isolation technique for measurement data sets from a three-tank system using analytical, model-based temporal redundancy, which is based on residual generation using the parity equations/parity space approach. It also briefly outlines other approaches to model-based residual generation. The basic idea of parity-space residual generation in temporal redundancy is the dynamic relationship between sensor outputs and actuator inputs (the input-output model). These residuals were then used to detect whether or not the system is faulty and to indicate the location of the fault when it is faulty. The method obtains good results by detecting and isolating faults from the considered data set measurements generated from the system.
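
A generic sketch of parity-space residual generation over a window of samples is given below; the state-space matrices are illustrative, not the three-tank model, and the residual is verified to be numerically zero in the fault-free case.

```python
# Generic parity-space residual generation over a window of s samples.
# The (A, B, C) matrices are illustrative, not the three-tank system model.
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.05]])
C = np.array([[1.0, 0.0]])
s = 3                                              # parity window length

# Extended observability matrix O and Toeplitz matrix H of Markov parameters.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
H = np.zeros((s + 1, s + 1))
for i in range(1, s + 1):
    for j in range(i):
        H[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B).item()

W = null_space(O.T).T                              # parity vectors: W @ O = 0

def residual(y_window, u_window):
    """r = W (Y - H U); values beyond the noise level indicate a fault."""
    return W @ (y_window - H @ u_window)

# fault-free simulation check: residual should be ~0
x = np.array([0.2, -0.1])
u = np.array([1.0, 0.5, -0.3, 0.2])
ys = []
for k in range(s + 1):
    ys.append((C @ x).item())
    x = A @ x + B.flatten() * u[k]
print(residual(np.array(ys), u))
```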

Keywords: fault detection, fault isolation, disturbing influences, system failure, parity equation/relation, structured parity equations

Procedia PDF Downloads 284
18215 Application on Metastable Measurement with Wide Range High Resolution VDL Circuit

Authors: Po-Hui Yang, Jing-Min Chen, Po-Yu Kuo, Chia-Chun Wu

Abstract:

This paper proposes a high-resolution Vernier Delay Line (VDL) measurement circuit with a coarse and fine detection mechanism, which improves the trade-off between high resolution and the number of delay cells found in traditional VDL circuits. The measuring time of the proposed circuit also meets the high-resolution requirements. First, the range of the input signal is detected by the coarse-detection VDL. Then, the delayed input signal is transmitted to the fine-detection VDL for measurement with better accuracy. The circuit is implemented in a 0.18 μm process, the operating frequency is 100 MHz, and the resolution achieved is 2.0 ps with only 16 stages of delay cells. The test range is 170 ps wide, and 17% of the stages are saved compared with a traditional single delay line circuit.

Keywords: vernier delay line, D-type flip-flop, DFF, metastable phenomenon

Procedia PDF Downloads 583
18214 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine

Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot

Abstract:

Internal barrel wear is a major factor affecting the performance of small caliber guns in their different life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for a small-caliber gun using a Coordinate Measuring Machine (CMM). Initially, two different NATO small caliber guns, 5.56x45mm and 7.62x51mm, are considered. A Micura Zeiss Coordinate Measuring Machine (CMM) equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the masking time, and (iv) the scanning velocity, are investigated. In order to ensure minimum measurement error, a statistical analysis is adopted to select a reliable combination of CMM parameters. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber. Thus, a task-specific measurement uncertainty (TSMU) analysis is carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature. This paper discusses two different techniques. The first is the substitution method described in ISO 15530 part 3. This approach is based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4. Uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), is utilized in this technique to perform a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed. Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.
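
The Monte Carlo (ISO 15530 part 4 / VCMM-style) evaluation can be sketched as repeated perturbation of the measured profile with assumed error sources and recomputation of the derived quantity; the error magnitudes and chamber geometry below are placeholders, not the values of the Micura Zeiss machine.

```python
# Monte Carlo (ISO 15530-4 style) sketch of task-specific measurement uncertainty:
# the chamber volume is recomputed for many simulated realizations of CMM errors.
# Error magnitudes and the chamber geometry are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(42)

def chamber_volume(radii, step_mm):
    """Volume of a revolved profile approximated as stacked discs."""
    return float(np.sum(np.pi * radii ** 2 * step_mm))

n_sections, step_mm = 50, 0.5
true_radii = np.linspace(2.90, 2.75, n_sections)   # assumed worn-chamber profile [mm]

sigma_probe = 0.0008    # assumed probing noise [mm]
sigma_scale = 1e-5      # assumed relative scale error

volumes = []
for _ in range(10_000):
    scale = 1.0 + rng.normal(0.0, sigma_scale)
    radii = true_radii * scale + rng.normal(0.0, sigma_probe, n_sections)
    volumes.append(chamber_volume(radii, step_mm))

volumes = np.array(volumes)
u = volumes.std(ddof=1)
print(f"volume = {volumes.mean():.3f} mm^3, expanded uncertainty U(k=2) = {2 * u:.4f} mm^3")
```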

Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns

Procedia PDF Downloads 131
18213 Development of a System for Measuring the Three-axis Pedal Force in Cycling and Its Applications

Authors: Joo-Hack Lee, Jin-Seung Choi, Dong-Won Kang, Jeong-Woo Seo, Ju-Young Kim, Dae-Hyeok Kim, Seung-Tae Yang, Gye-Rae Tack

Abstract:

For cycling, the analysis of the pedal force is one of the important factors in the study of exercise ability assessment and overuse injuries. In past studies, a two-axis measurement sensor was used in the sagittal plane to measure the force only in the anterior-posterior and vertical directions and to analyze the loss of force and the injury in the frontal plane due to the forces in the right and left directions. In this study, which is a basic study for diverse analyses of the pedal force that consider the forces on both the sagittal plane and the frontal plane, a three-axis pedal force measurement sensor was developed to measure the anterior-posterior (Fx), medio-lateral (Fz), and vertical (Fy) forces. The sensor was fabricated with a size and shape similar to those of a general flat pedal and had a weight of 550 g that allowed smooth pedaling. Its measurement range was ±1000 N for Fx and Fz and ±2000 N for Fy, and its non-linearity, hysteresis, and repeatability were approximately 0.5%. The data were sampled at 1000 Hz using a signal collector. Using the developed sensor, the pedaling efficiency (index of efficiency, IE) and the range of left and right (medio-lateral, ML) forces were measured at two seat heights (low and high). The results of the measurement showed that the IE was higher and the force range in the ML direction was lower with the high position than with the low position. The developed measurement sensor and its application results will be useful in understanding and explaining the complicated pedaling technique, and will enable diverse kinematic analyses of the pedal force on the sagittal plane and the frontal plane.
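
A hedged sketch of the pedal-force post-processing is given below; the definition of IE as the ratio of crank-tangential (effective) force to resultant force, the assumed cadence, and the synthetic signals are common conventions used for illustration, not quoted from the paper.

```python
# Hedged sketch of pedal-force post-processing: index of efficiency (IE) and the
# medio-lateral (ML) force range. The IE definition (effective/resultant force) and
# the synthetic signals are assumptions for illustration only.
import numpy as np

fs = 1000                                 # sampling rate [Hz], as stated in the abstract
t = np.arange(0, 2.0, 1 / fs)
crank_angle = 2 * np.pi * 1.5 * t         # assumed 90 rpm cadence

# placeholder pedal force signals [N]
Fx = 40 * np.sin(crank_angle)                              # anterior-posterior
Fy = 300 * np.clip(np.sin(crank_angle), 0, None) + 50      # vertical
Fz = 15 * np.sin(crank_angle + 0.5)                        # medio-lateral

effective = Fy * np.sin(crank_angle) + Fx * np.cos(crank_angle)   # tangential to crank
resultant = np.sqrt(Fx**2 + Fy**2 + Fz**2)

IE = effective.mean() / resultant.mean()
ml_range = Fz.max() - Fz.min()
print(f"index of efficiency = {IE:.3f}, ML force range = {ml_range:.1f} N")
```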

Keywords: cycling, pedal force, index of effectiveness, measuring

Procedia PDF Downloads 644
18212 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions to satisfy the demands of all the participants by keeping abreast of regulatory change. In recent years, progress has been made regarding frameworks, development of rules, standards, and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance, and the organization's culture is of special interest, as it requires a well-resourced risk controlling function, compliance function, and internal audit function. In the past years, the relevance of these functions that make up the so-called Three Lines of Defense has moved from the backroom to the boardroom. The approach of the model can vary based on the various organizational characteristics. Due to the intense regulatory requirements, organizations operating in the financial sector have more mature models. In less regulated industries there is more cloudiness about what tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the identical stakeholders. Today, the Three Lines of Defense model is used throughout the world. The research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, the risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied by making use of surveys conducted in the research field. It aims to describe (i) the theoretical model regarding the applicable linearity relationships, (ii) the causal relationship between multiple predictors (exogenous) and multiple dependent variables (endogenous), (iii) taking into consideration the unobservable variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of 1) the measurement model and 2) the structural model. There is a detectable correlation regarding the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation. Supervisory practices reinforce the regulatory density. In the past, controls were placed after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 204
18211 Evaluation of Egg Quality Parameters in the Isa Brown Line in Intensive Production Systems in the Ocaña Region, Norte de Santander

Authors: Meza-Quintero Myriam, Lobo Torrado Katty Andrea, Sanchez Picon Yesenia, Hurtado-Lugo Naudin

Abstract:

The objective of the study was to evaluate the internal and external quality of the egg in three production housing systems (floor, cage, and grazing) for laying birds of the Isa Brown line, in the laying period between weeks 35 and 41; 135 hens distributed into 3 treatments of 45 birds each were used (the replicates were the seven weeks of the trial). The feed supplied in the floor and cage systems contained 114 g/bird/day; for the grazing system, 14 grams less concentrate was provided. Nine eggs were collected to be studied and analyzed in the animal nutrition laboratory (3 eggs per housing system). A randomized statistical model was implemented; for the statistical analysis of the data, the IBM® Statistical Products and Services Solution (SPSS) software, version 2.3, was used. The evaluation and follow-up instruments were a vernier caliper for measurements in millimeters, a YolkFan™ 16 from Roche DSM for the evaluation of egg yolk pigmentation, a digital scale for measurements in grams, a micrometer for measurements in millimeters, and laboratory evaluation of dry matter, ash, and ether extract. The results suggested that for egg size (0.04 ± 3.55), shell thickness (0.46 ± 3.55), albumen weight (0.18 ± 3.55), albumen height (0.38 ± 3.55), yolk weight (0.64 ± 3.55), yolk height (0.54 ± 3.55), and yolk pigmentation (1.23 ± 3.55), P-values > 0.05 were obtained. It was concluded that the hens in the three production systems (floor, cage, and grazing) did not show statistically significant differences in the internal and external quality of the egg for the parameters studied.
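
The comparison across the three housing systems can be sketched as a one-way analysis in Python; the trait values below are placeholders, since the study performed its analysis in IBM SPSS on the measured eggs.

```python
# One-way comparison of an egg quality trait across the three housing systems.
# The values are placeholders, not the measured data from the study.
from scipy import stats

floor   = [62.1, 60.8, 63.0, 61.5, 62.4, 60.9, 61.8]   # e.g. egg weight [g], placeholder
cage    = [61.7, 62.5, 60.4, 61.9, 62.0, 61.2, 62.8]
grazing = [60.9, 61.4, 62.2, 61.0, 62.6, 61.1, 61.7]

f_stat, p_value = stats.f_oneway(floor, cage, grazing)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No statistically significant difference between housing systems.")
```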

Keywords: biological, territories, genetic resource, egg

Procedia PDF Downloads 68
18210 The Laser Line Detection for Autonomous Mapping Based on Color Segmentation

Authors: Pavel Chmelar, Martin Dobrovolny

Abstract:

Laser projection or laser footprint detection is today widely used in many fields of robotics, measurement, and electronics. The system accuracy strictly depends on precise laser footprint detection on target objects. This article deals with laser line detection based on RGB segmentation and component labeling. The developed optical rangefinder was used as the measurement device. The optical rangefinder is equipped with vertical sweeping of the laser beam and a high-quality camera. This system was developed mainly for automatic exploration and mapping of unknown spaces. The first section presents a new detection algorithm. The second section presents measurement results. The measurements were performed under variable light conditions in interiors. The last part of the article presents the achieved results and the differences between day and night measurements.
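
A minimal version of the described pipeline, RGB color segmentation of the red laser footprint followed by connected-component labeling, can be sketched as follows; the threshold values and the synthetic test frame are illustrative only, not the paper's algorithm.

```python
# Minimal laser-line footprint detection: RGB color segmentation followed by
# connected-component labeling. Threshold values are illustrative only.
import numpy as np
from skimage import measure

def detect_laser_line(rgb):
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    mask = (r > 180) & (r - g > 60) & (r - b > 60)       # "red enough" pixels
    labels = measure.label(mask)
    if labels.max() == 0:
        return None
    regions = measure.regionprops(labels)
    biggest = max(regions, key=lambda reg: reg.area)     # keep the dominant component
    ys, xs = np.nonzero(labels == biggest.label)
    # per-column estimate of the line position (mean row index per column)
    return {int(x): float(ys[xs == x].mean()) for x in np.unique(xs)}

frame = np.zeros((120, 160, 3), dtype=np.uint8)          # synthetic test frame
frame[60:63, 20:140, 0] = 230                            # bright red stripe
print(list(detect_laser_line(frame).items())[:5])
```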

Keywords: color segmentation, component labelling, laser line detection, automatic mapping, distance measurement, vector map

Procedia PDF Downloads 415
18209 Spatial Correlation of Channel State Information in Real Long Range Measurement

Authors: Ahmed Abdelghany, Bernard Uguen, Christophe Moy, Dominique Lemur

Abstract:

The Internet of Things (IoT) is being developed to ensure monitoring and connectivity within different applications. Thus, it is critical to study the channel propagation characteristics in Low Power Wide Area Networks (LPWAN), especially Long Range Wide Area Networks (LoRaWAN). In this paper, an in-depth investigation of the reciprocity between the uplink and downlink Channel State Information (CSI) is carried out by performing an outdoor measurement campaign in the area of Campus Beaulieu in Rennes. At each location, the CSI reciprocity is quantified using the Pearson Correlation Coefficient (PCC), which shows a very high linear correlation between the uplink and downlink CSI. This reciprocity feature could be utilized for physical layer security between the node and the gateway. On the other hand, most of the CSI shapes from different locations are highly uncorrelated with each other. Hence, it can be anticipated that a significant localization gain could be achieved by utilizing frequency hopping in LoRa systems to access a wider band.
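
The reciprocity check reduces to computing the Pearson Correlation Coefficient between the uplink and downlink CSI vectors at each location, as sketched below on synthetic per-channel values; the campaign data are not reproduced.

```python
# Pearson correlation between uplink and downlink CSI at one location (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
channels = 8                                        # e.g. LoRa frequency-hopping channels
uplink_csi = rng.normal(-100, 6, channels)          # received power per channel [dBm]
downlink_csi = uplink_csi + rng.normal(0, 0.8, channels)   # reciprocal channel + noise

pcc, p_value = stats.pearsonr(uplink_csi, downlink_csi)
print(f"PCC = {pcc:.3f} (p = {p_value:.3g})")       # high PCC -> uplink/downlink reciprocity
```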

Keywords: IoT, LPWAN, LoRa, effective signal power, onsite measurement

Procedia PDF Downloads 149
18208 Kýklos Dimensional Geometry: Entity Specific Core Measurement System

Authors: Steven D. P Moore

Abstract:

A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific derived geometric measurement system shares similar fractal methods in which a 'fractal transformation operator' is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents with a union equal to the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method is captured by allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from celestial to subatomic. The application of this offers incredible possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits).

Keywords: Kyklos, geometry, measurement, celestial, dimension

Procedia PDF Downloads 154
18207 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 230
18206 Forecasting the Etching Behavior of Silica Sand Using the Design of Experiments Method

Authors: Kefaifi Aissa, Sahraoui Tahar, Kheloufi Abdelkrim, Anas Sabiha, Hannane Farouk

Abstract:

The aim of this study is to show how the Design of Experiments (DOE) method can be put into use as a practical approach for modeling the etching behavior of silica sand during its primary leaching step. In the present work, we have studied the etching effect on particle size during a primary leaching step on Algerian silica sand with hydrofluoric acid (HF) at 20% and 30% for 4 and 8 hours. A new purity of the sand is therefore obtained depending on the leaching time. This study was extended by a numerical approach using the design of experiments method, which shows the influence of each parameter and the interaction between them in the process and confirmed the obtained experimental results. This model is a predictive approach using hide software. Based on the parameters measured experimentally inside the model domain, the DOE method makes it possible to predict the parameters outside the model domain and gives the optimized response without carrying out the experimental measurement.
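
For the two factors reported (HF concentration at 20%/30% and leaching time of 4/8 h), a two-level full factorial estimation of main effects and the interaction can be sketched as follows; the purity responses are placeholders, not the measured data of the study.

```python
# Two-level full factorial (2^2) effect estimation for HF concentration and leaching time.
# The purity responses are placeholders, not the measured Algerian sand data.
import numpy as np

# coded factors: concentration (-1 = 20% HF, +1 = 30% HF), time (-1 = 4 h, +1 = 8 h)
runs = np.array([
    #  conc, time, purity [%] (placeholder response)
    [-1, -1, 98.2],
    [+1, -1, 98.9],
    [-1, +1, 98.7],
    [+1, +1, 99.6],
])
conc, time_, y = runs[:, 0], runs[:, 1], runs[:, 2]

effect_conc = y[conc == 1].mean() - y[conc == -1].mean()
effect_time = y[time_ == 1].mean() - y[time_ == -1].mean()
effect_interaction = y[conc * time_ == 1].mean() - y[conc * time_ == -1].mean()

print(f"main effect of HF concentration: {effect_conc:+.2f} % purity")
print(f"main effect of leaching time:    {effect_time:+.2f} % purity")
print(f"interaction effect:              {effect_interaction:+.2f} % purity")
```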

Keywords: acid leaching, design of experiments method (DOE), silica purity, silica etching

Procedia PDF Downloads 267
18205 The Simulation and Experimental Investigation to Study the Strain Distribution Pattern during the Closed Die Forging Process

Authors: D. B. Gohil

Abstract:

Closed die forging is a very complex process, and measurement of actual forces for real material is difficult and time consuming. Hence, the modelling technique takes advantage of carrying out the experimentation with a proper model material, which needs lower forces and relatively low temperature. The results of experiments on the model material may then be correlated with the actual material by using the theory of similarity. There are several methods available to resolve the complexity involved in the closed die forging process. The Finite Element Method (FEM) and the Finite Difference Method (FDM) are relatively difficult compared to the slab method. The slab method is very popular and very widely used by people working on the shop floor because it is relatively easy to apply and reasonably accurate for most of the common forging load requirement computations.

Keywords: experimentation, forging, process modeling, strain distribution

Procedia PDF Downloads 186
18204 On the Performance Analysis of Coexistence between IEEE 802.11g and IEEE 802.15.4 Networks

Authors: Chompunut Jantarasorn, Chutima Prommak

Abstract:

This paper presents an intensive measurement study of network performance when IEEE 802.11g Wireless Local Area Networks (WLAN) coexist with IEEE 802.15.4 Wireless Personal Area Networks (WPAN). The measurement results show that the coexistence of both networks can increase the Frame Error Rate (FER) of the IEEE 802.15.4 networks by up to 60% and can decrease the throughput of the IEEE 802.11g networks by up to 55%.

Keywords: wireless performance analysis, coexistence analysis, IEEE 802.11g, IEEE 802.15.4

Procedia PDF Downloads 531
18203 Use of Fuzzy Logic in the Corporate Reputation Assessment: Stock Market Investors’ Perspective

Authors: Tomasz L. Nawrocki, Danuta Szwajca

Abstract:

The growing importance of reputation in building enterprise value and achieving long-term competitive advantage creates the need for its measurement and evaluation for management purposes (effective reputation and its risk management). The paper presents a practical application of a self-developed corporate reputation assessment model from the viewpoint of stock market investors. The model has a pioneering character, and the example analysis performed for a selected industry is a form of specific test of this tool. In the proposed solution, three aspects were considered: informational, financial and development, as well as social. It was also assumed that the individual sub-criteria will be based on public sources of information and that fuzzy logic, capable of producing a synthetic final assessment, will be used as the calculation apparatus. The main reason for developing this model was to fill the gap in the scope of a synthetic measure of corporate reputation that would provide a higher degree of objectivity by relying on 'hard' (not survey-based) and publicly available data. It should also be noted that the results obtained on the basis of the proposed corporate reputation assessment method give possibilities for various internal as well as inter-branch comparisons and analysis of corporate reputation impact.

Keywords: corporate reputation, fuzzy logic, fuzzy model, stock market investors

Procedia PDF Downloads 228
18202 The Latent Model of Linguistic Features in Korean College Students’ L2 Argumentative Writings: Syntactic Complexity, Lexical Complexity, and Fluency

Authors: Jiyoung Bae, Gyoomi Kim

Abstract:

This study explores a range of linguistic features used in Korean college students' argumentative writings for the purpose of developing a model that identifies variables which predict writing proficiency. This study investigated the latent variable structure of L2 linguistic features, including syntactic complexity, lexical complexity, and fluency. One hundred forty-six university students in Korea participated in this study. The results of the study's confirmatory factor analysis (CFA) showed that the indicators of linguistic features from this study provided a foundation for re-categorizing indicators found in extant research on L2 Korean writers depending on each latent variable of linguistic features. The CFA models indicated one measurement model of L2 syntactic complexity and L2 learners' writing proficiency; these two latent factors were correlated with each other. Based on the overall findings of the study, the integrated linguistic features of L2 writings suggest some pedagogical implications for L2 writing instruction.

Keywords: linguistic features, syntactic complexity, lexical complexity, fluency

Procedia PDF Downloads 149
18201 Field Environment Sensing and Modeling for Pears towards Precision Agriculture

Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani

Abstract:

The introduction of sensor technologies into agriculture is a necessary step to realize Precision Agriculture. Although sensing methodologies themselves have been prevailing owing to miniaturization and reduction in costs of sensors, there are some difficulties in analyzing and understanding the sensing data. Targeting pears 'Le Lectier', which are particular to Niigata in Japan, cultivation environmental data have been collected at pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors are used to measure the environment inside the fruit bag used for pre-harvest bagging of pears. In this experiment, three kinds of fruit bags were used for the pre-harvest bagging. After over 100 days of continuous measurement, large volumes of sensing data have been collected. Firstly, correlation analysis among the sensing data measured by the respective sensors reveals that one sensor can replace another sensor, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the measurement results of the inner-bag environmental sensing. It is found by statistical analysis that the characteristics and performance of the inner bags differ significantly from each other. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of a Structural Equation Model (SEM). Here, the pear outlook quality is related to the existence of stains, blobs, scratches, and so on caused by physiological impairment or diseases. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression. By using SEM, a model is constructed to connect independent and dependent variables. The proposed SEM model relates the measured sensing data and the pear outlook quality determined on the basis of farmer judgement. In particular, it is found that the inner-bag humidity variable relatively affects the pear outlook quality. Therefore, inner-bag humidity sensing might help the farmers to control the pear outlook quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results in this research contribute to spreading Precision Agriculture technologies among the farmers growing 'Le Lectier'.

Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model

Procedia PDF Downloads 293
18200 Low-Cost IoT System for Monitoring Ground Propagation Waves due to Construction and Traffic Activities to Nearby Construction

Authors: Lan Nguyen, Kien Le Tan, Bao Nguyen Pham Gia

Abstract:

Due to their high cost, specialized industrial dynamic measurement devices are difficult for many colleges to acquire for hands-on teaching. This study connects a dynamic measurement sensor and receiver utilizing an inexpensive Raspberry Pi 4 board, 24-bit ADC circuits, a geophone vibration sensor, and embedded open-source Python programming to gather and analyze signals for dynamic measurement, ground vibration monitoring, and structure vibration monitoring. The system may wirelessly communicate data to a computer and is set up as a network of communication nodes, enabling real-time monitoring of background vibrations at various locations. The device can be utilized for a variety of dynamic measurement and monitoring tasks, including monitoring earthquake vibrations, ground vibrations from construction operations and traffic, and vibrations of building structures.
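
The per-node processing stage can be sketched in Python as an FFT of a one-second block of geophone samples; the sampling rate and the synthetic signal below are assumptions, and on the real node the samples would come from the 24-bit ADC.

```python
# Sketch of the vibration-node processing stage: take one second of geophone samples
# (assumed to be ground velocity in m/s) and report the dominant frequency and amplitude.
# The samples here are synthetic; on the real node they would come from the 24-bit ADC.
import numpy as np

FS = 500                                      # assumed sampling rate [Hz]

def analyze_block(samples, fs=FS):
    samples = samples - samples.mean()        # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples) * 2
    freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
    k = spectrum[1:].argmax() + 1             # skip the DC bin
    return freqs[k], spectrum[k]

t = np.arange(0, 1.0, 1 / FS)
block = 0.004 * np.sin(2 * np.pi * 18 * t) \
        + 0.0005 * np.random.default_rng(0).normal(size=t.size)

freq, amp = analyze_block(block)
print(f"dominant frequency = {freq:.1f} Hz, amplitude = {amp * 1000:.2f} mm/s")
```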

Keywords: sensors, FFT, signal processing, real-time data monitoring, ground propagation wave, Python, Raspberry Pi 4

Procedia PDF Downloads 86
18199 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets

Authors: Song Hyok Choe

Abstract:

This study presented a method to determine the optimum design parameters based on a mathematical model of the operating process in a manual electromagnetic hammer using solenoid electromagnets. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of a manual electromagnetic hammer proposed in this paper are encoded and analyzed in Matlab. On the other hand, a measuring experiment was carried out by using a measurement device to check the accuracy of the developed mathematical model. The relative errors of the analytical results for measured stroke distance of the impact piston, peak value of forward stroke current and peak value of reverse stroke current were −4.65%, 9.08% and 9.35%, respectively. Finally, it was shown that the mathematical model of the operating process of an electromagnetic hammer is relatively accurate, and it can be used to determine the design parameters of the electromagnetic hammer. Therefore, the design parameters that can provide the required impact energy in the manual electromagnetic hammer were determined using a mathematical model developed. The proposed method will be used for the further design and development of the various types of percussion rock drills.

Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling

Procedia PDF Downloads 18
18198 Automated Process Quality Monitoring and Diagnostics for Large-Scale Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Continuous monitoring of industrial plants is one of the necessary tasks when it comes to ensuring high-quality final products. In terms of monitoring and diagnosis, it is quite critical and important to detect incipient abnormal events in manufacturing processes in order to improve the safety and reliability of the operations involved and to reduce related losses. In this work, a new multivariate statistical online diagnostic method is presented using a case study. For building the reference models, an empirical discriminant model is constructed based on various past operation runs. When a fault is detected on-line, an on-line diagnostic module is initiated. Finally, the status of the current operating conditions is compared with the reference model to make a diagnostic decision. The performance of the presented framework is evaluated using a dataset from complex industrial processes. It has been shown that the proposed diagnostic method outperforms other techniques, especially in terms of incipient detection of any faults that occur.
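
One common realization of this class of method builds a PCA reference model from normal operation runs and flags samples whose Hotelling T² statistic exceeds an empirical control limit; the sketch below shows only that generic detection step on synthetic data, not the authors' empirical discriminant model.

```python
# Generic PCA-based monitoring sketch (detection step only): a reference model is fit
# on normal operation data and new samples are flagged when Hotelling T^2 exceeds a limit.
# This illustrates the family of methods; it is not the authors' discriminant model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
normal_runs = rng.normal(size=(500, 10))                    # reference (in-control) data

pca = PCA(n_components=4).fit(normal_runs)
score_var = pca.transform(normal_runs).var(axis=0, ddof=1)

def hotelling_t2(sample):
    t = pca.transform(sample.reshape(1, -1)).ravel()
    return float(np.sum(t ** 2 / score_var))

limit = np.percentile([hotelling_t2(x) for x in normal_runs], 99)   # empirical 99% limit

new_sample = rng.normal(size=10)
new_sample[2] += 6.0                                        # simulated incipient fault
t2 = hotelling_t2(new_sample)
print(f"T2 = {t2:.1f}, limit = {limit:.1f}, fault detected: {t2 > limit}")
```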

Keywords: data mining, empirical model, on-line diagnostics, process fault, process monitoring

Procedia PDF Downloads 387
18197 Surface Morphology and Wetting Behavior of the Aspidiotus spp. Scale Covers

Authors: Meril Kate Mariano, Billy Joel Almarinez Divina Amalin, Jose Isagani Janairo

Abstract:

The scale insects Aspidiotus destructor and Aspidiotus rigidus exhibit notable scale covers made of wax, which provide protection against water loss and are able to resist wetting, thus making them a desirable model for biomimetic designs. Their waxy covers enable them to infest mainly the leaves of coconut trees despite harsh wind and rain. This study aims to describe and compare the micro-morphological characters on the surfaces of their scale covers and, consequently, how these microstructures affect their wetting properties. A scanning electron microscope was used for the surface characterization, while an optical contact angle meter was employed for the wetting measurement. The scale cover of A. destructor is composed of multiple overlapping layers of wax arranged regularly, while that of A. rigidus is composed of a uniform layer of wax with much more prominent, irregularly arranged wax ribbons than the former. The protrusions found on the two organisms are formed by the wax ribbons, which differ in arrangement, with their height being A. destructor (3.57±1.29) < A. rigidus (4.23±1.22) and their density A. destructor (15±2.94) < A. rigidus (18.33±2.64). These morphological measurements could affect the contact angle (CA θ) measurement, A. destructor (102.66±9.78°) < A. rigidus (102.77±11.01°), wherein it is recognized that the interaction of the liquid with the microstructures of the substrate is a large factor in the wetting properties of the insect scales. The calculated surface free energy, A. destructor (38.47 mJ/m²) > A. rigidus (31.02 mJ/m²), shows inverse proportionality with the CA measurement. The dispersive interaction between the surface and the liquid is more prevalent than the polar interaction for both Aspidiotus species, which was observed using the Fowkes method. The results of this study have possible applications as a potential biomimetic design for various industries such as textiles and coatings.
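
A Fowkes-type decomposition into dispersive and polar components can be obtained from contact angles with two probe liquids via the Owens-Wendt relation γ_L(1+cosθ) = 2(√(γ_S^d γ_L^d) + √(γ_S^p γ_L^p)); the sketch below uses standard literature values for the probe liquids and placeholder contact angles, not the measured Aspidiotus values.

```python
# Owens-Wendt/Fowkes-type two-liquid calculation of dispersive and polar surface energy
# components from contact angles. Liquid data are standard literature values; the contact
# angles are placeholders, not the measured Aspidiotus scale-cover angles.
import math
import numpy as np

# probe liquids: (dispersive, polar) surface tension components [mJ/m^2]
water = (21.8, 51.0)
diiodomethane = (50.8, 0.0)

def owens_wendt(theta_water_deg, theta_diiodo_deg):
    rows, rhs = [], []
    for (gd, gp), theta in ((water, theta_water_deg), (diiodomethane, theta_diiodo_deg)):
        gl = gd + gp
        rows.append([2 * math.sqrt(gd), 2 * math.sqrt(gp)])
        rhs.append(gl * (1 + math.cos(math.radians(theta))))
    x = np.linalg.solve(np.array(rows), np.array(rhs))   # x = [sqrt(gs_d), sqrt(gs_p)]
    gs_d, gs_p = x[0] ** 2, x[1] ** 2
    return gs_d, gs_p, gs_d + gs_p

gs_d, gs_p, gs_total = owens_wendt(theta_water_deg=102.7, theta_diiodo_deg=70.0)
print(f"dispersive = {gs_d:.1f}, polar = {gs_p:.1f}, total = {gs_total:.1f} mJ/m^2")
```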

Keywords: Aspidiotus spp., biomimetics, contact angle, surface characterization, wetting behavior

Procedia PDF Downloads 109