Search results for: measurement uncertainty
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3532

2902 A Study on the Relationship between Transaction Fairness, Social Capital, Supply Chain Integration and Sustainability: Focusing on Manufacturing Companies of South Korea

Authors: Sung-Min Park, Chan Kwon Park, Chae-Bogk Kim

Abstract:

The purpose of this study is to analyze the relationship between transaction fairness, social capital, supply chain integration and sustainability. Based on previous studies, measurement items were determined and an exploratory factor analysis was performed using SPSS 22; confirmatory factor analysis and path analysis were then performed using AMOS 21 on the items that satisfied the reliability, validity, and goodness-of-fit requirements of the measurement model. The results show that transaction fairness has a significant positive (+) effect on social capital, social capital on supply chain integration, and supply chain integration on economic and social sustainability, while its effect on environmental sustainability is positive (+) but not significant. Supply chain integration was also shown to act as a mediating variable between social capital and economic and social sustainability, but not between social capital and environmental sustainability. The study suggests that clearly examining the relationship between transaction fairness, social capital, supply chain integration and sustainability shows that maintaining transaction fairness promotes the formation of social capital, which in turn furthers supply chain integration and helps achieve sustainability across the entire supply chain.

Keywords: transaction fairness, social capital, supply chain integration, sustainability

Procedia PDF Downloads 437
2901 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through the combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run the experiment, three mobile robots were utilized, one of them equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through the correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, while the detector system records the gamma-ray count. The positions of the robots and the corresponding counts from the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove valuable for designing different scenarios and intelligent systems before initiating any experiments.
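
As a rough illustration of the correlation step described in the abstract, the following minimal Python sketch (with made-up robot tracks and a hypothetical detector position) correlates an inverse-square count-rate model for each tracked robot with the measured gamma counts and flags the robot with the highest correlation; it is not the authors' algorithm.

```python
# Minimal sketch (not the authors' code): identify the source-carrying robot by
# correlating an inverse-square count-rate model with the measured gamma counts.
import numpy as np

rng = np.random.default_rng(0)
detector_xy = np.array([0.0, 0.0])           # assumed detector position (m)
t = np.arange(200)                           # time steps

# Hypothetical XY tracks for three robots, e.g. extracted by the vision system
tracks = {
    "robot_1": np.column_stack([2 + 0.01 * t, 3 - 0.005 * t]),
    "robot_2": np.column_stack([5 - 0.02 * t, 1 + 0.01 * t]),
    "robot_3": np.column_stack([4 + 0.005 * t, 4 + 0.005 * t]),
}

# Simulated detector record: counts driven by robot_2 carrying the source
d2 = np.linalg.norm(tracks["robot_2"] - detector_xy, axis=1)
counts = rng.poisson(5000.0 / d2**2 + 20.0)   # source term + background

def correlation_with_counts(track):
    """Pearson correlation between 1/d^2 of a track and the measured counts."""
    d = np.linalg.norm(track - detector_xy, axis=1)
    return np.corrcoef(1.0 / d**2, counts)[0, 1]

scores = {name: correlation_with_counts(tr) for name, tr in tracks.items()}
print(scores)
print("contaminated robot:", max(scores, key=scores.get))
```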

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 110
2900 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
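
For readers who want a concrete picture of the quantile computation, the sketch below reproduces the idea in Python with scipy instead of the R package "mvtnorm": given a hypothetical asymptotic mean vector and covariance for three candidate-model GICs, it evaluates P(min ≤ t) through a multivariate Gaussian integral and root-finds an upper quantile. All numbers are placeholders, not results from the paper.

```python
# Minimal sketch (illustrative only): upper quantile of the minimum of jointly
# Gaussian GIC statistics via multivariate normal integrals. Mean vector and
# covariance are hypothetical placeholders.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([10.0, 10.5, 11.0])                 # asymptotic means of the GICs
Sigma = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])               # asymptotic covariance

def cdf_of_minimum(t):
    """P(min_i GIC_i <= t) = 1 - P(all GIC_i > t), computed as P(-X <= -t 1)."""
    mvn = multivariate_normal(mean=-mu, cov=Sigma)
    return 1.0 - mvn.cdf(np.full(len(mu), -t))

alpha = 0.95
upper_quantile = brentq(lambda t: cdf_of_minimum(t) - alpha,
                        mu.min() - 10.0, mu.max() + 10.0)
print(f"{alpha:.0%} upper quantile of the minimum GIC: {upper_quantile:.3f}")
```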

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 82
2899 The Effect of Training Program by Using Especial Strength on the Performance Skills of Hockey Players

Authors: Wesam El Bana

Abstract:

The current research aimed at designing a training program for improving specific muscular strength through especial (special) strength exercises and identifying its effects on the skill performance level of hockey players. The researcher used the quasi-experimental approach (two-group design) with pre- and post-measurements. A sample (n = 35) was purposefully chosen from Sharkia Sports Club. Five hockey players were excluded due to their non-punctuality. The rest were divided into two equal groups (experimental and control). The researcher concluded the following: the traditional training program had a positive effect on improving the physical variables under investigation, as it increased the improvement percentages of the physical variables and the skill performance level of the control group between the pre- and post-measurements. The recommended training program had a positive effect on improving the physical variables under investigation, as it increased the improvement percentages of the physical variables and the skill performance level of the experimental group between the pre- and post-measurements. Exercises using especial strength training had a positive effect on the post-measurement of the experimental group.

Keywords: hockey, especial strength, performance skills

Procedia PDF Downloads 234
2898 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a standard deviation technique by classes, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, thus suggesting that the model provides a high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater efforts for data handling. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
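
A minimal sketch of the per-crystal measurement step (assuming a binary segmentation mask is already produced by the U-net; the mask, pixel size, and shapes below are made up) could use scikit-image region properties:

```python
# Minimal sketch of the crystal measurement step after segmentation.
import numpy as np
from skimage.measure import label, regionprops

# Toy stand-in for a U-net segmentation mask (1 = crystal, 0 = background)
mask = np.zeros((120, 120), dtype=np.uint8)
mask[10:40, 15:60] = 1        # a rectangular "crystal"
mask[70:100, 70:110] = 1      # a second one

pixel_size_nm = 5.0           # hypothetical SEM pixel size

labeled = label(mask)         # object delimitation: one label per crystal
rows = []
for p in regionprops(labeled):
    rows.append({
        "centroid": p.centroid,
        "area_nm2": p.area * pixel_size_nm**2,
        "perimeter_nm": p.perimeter * pixel_size_nm,
        "major_axis_nm": p.major_axis_length * pixel_size_nm,
        "minor_axis_nm": p.minor_axis_length * pixel_size_nm,
    })

for r in rows:
    print(r)   # database rows used for area/perimeter frequency distributions
```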

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 157
2897 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to safe places. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of buildings occupied by such people by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent floor plan was chosen for the safety analysis, and a limit state function was developed according to a timeline evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to model smoke development. Since most of the parameters involved in the fire development model are subject to uncertainty, an appropriate probability distribution function was assigned to each variable of indeterministic nature. To assess the safety and reliability of the at-risk groups, the fire safety index method was chosen to estimate the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm was used to compute the beta index. A sensitivity analysis was performed to identify the most important and effective parameters for the fire safety of the at-risk group. The results showed that the area of openings and the distance to egress exits are the most important parameters in buildings, and that occupant safety improves with increasing dimensions of the occupied space (building). Fire growth is more critical than the other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped compared to those with movement disabilities.
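
The sketch below illustrates, in Python, the general probabilistic egress idea described above: sample uncertain inputs, evaluate a limit state g = ASET − RSET, and derive the probability of failure and the beta index. All distributions and parameter values are hypothetical and are not taken from the study or from B-Risk.

```python
# Minimal sketch (hypothetical distributions) of the probabilistic egress check:
# Pf = P(ASET - RSET < 0) and the corresponding beta (safety) index.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

# Available Safe Egress Time (s): uncertainty from the fire/smoke development
aset = rng.lognormal(mean=np.log(180.0), sigma=0.25, size=n)

# Required Safe Egress Time (s): detection + pre-movement + travel
detection    = rng.normal(30.0, 10.0, n).clip(0.0)
pre_movement = rng.lognormal(np.log(40.0), 0.4, n)
travel_dist  = rng.uniform(8.0, 15.0, n)            # m, depends on plan/openings
walk_speed   = rng.normal(0.6, 0.15, n).clip(0.2)   # m/s, reduced for impairment
rset = detection + pre_movement + travel_dist / walk_speed

g = aset - rset                  # limit state function
pf = np.mean(g < 0.0)            # probability of failure (casualties)
beta = -norm.ppf(pf)             # safety (reliability) index
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```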

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 98
2896 Managing the Magnetic Protection of Workers in Magnetic Resonance Imaging

Authors: Safoin Aktaou, Aya Al Masri, Kamel Guerchouche, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: In the Magnetic Resonance Imaging (MRI) department, all workers involved in preparing and positioning the patient, cleaning the tunnel, etc. are likely to be exposed to the electromagnetic fields (EMF) emitted by the MRI device. Exposure to EMF can cause adverse radio-biological effects in workers. The purpose of this study is to propose an organizational process to manage and control EMF risks. Materials and methods: The study was conducted at seven MRI departments using machines with 1.5 and 3 Tesla magnetic fields. We assessed the exposure in each department by measuring the two electromagnetic fields (static and dynamic) at different distances from the MRI machine, both inside and around the examination room. Measurement values were compared with British and American references (those of the UK's Medicines and Healthcare products Regulatory Agency (MHRA) and the American College of Radiology (ACR)). Results: Following the EMF measurements and their comparison with the recommendations of learned societies, a zoning system adapted to the needs of different MRI services across the country has been proposed. As a result, three risk areas have been identified within the MRI services. This has led to the development of a good practice guide for the magnetic protection of MRI workers. Conclusion: The guide established by our study is a standard that allows MRI workers to protect themselves against the risk of electromagnetic fields.

Keywords: comparison with international references, measurement of electromagnetic fields, magnetic protection of workers, magnetic resonance imaging

Procedia PDF Downloads 158
2895 The Effect of Socialization Tactics on Job Satisfaction of Employees, Regarding to Personality Types in Tehran University of Medical Science’s Employees

Authors: Maryam Hoorzad, Narges Shokry, Mandan Momeni

Abstract:

Given the importance of socialization for organizational effectiveness, and by assessing the impact of individual differences on socialization tactics through employee satisfaction, it is possible to determine which socialization tactics are more effective for each personality type. The aim of this paper is to investigate how organizational socialization tactics affect the job satisfaction of employees according to personality type. A survey was conducted using a measurement tool based on Van Maanen and Schein's theory of organizational socialization tactics and the Myers-Briggs personality type instrument. The respondents were employees with more than 3 years of tenure at Tehran University of Medical Science. Data collection was performed using both library and field methods; the data collection instrument was a questionnaire, and data were analysed using the SPSS and LISREL programs. It was found that investiture and serial tactics have a significant effect on employee satisfaction: any increase in investiture and serial tactics led to an increase in job satisfaction, while any increase in divestiture and disjunctive tactics led to a reduction in job satisfaction. The investiture tactic has the greatest effect on employee satisfaction. Also, based on the results, personality types affect the relationship between socialization tactics and job satisfaction. In the ESFJ personality type, the effect of the investiture tactic on employee satisfaction is the greatest.

Keywords: organizational socialization, organizational socialization tactics, personality types, job satisfaction

Procedia PDF Downloads 435
2894 Investigation of Efficient Production of ¹³⁵La for the Auger Therapy Using Medical Cyclotron in Poland

Authors: N. Zandi, M. Sitarz, J. Jastrzebski, M. Vagheian, J. Choinski, A. Stolarz, A. Trzcinska

Abstract:

¹³⁵La, with a half-life of 19.5 h, can be considered a good candidate for Auger therapy. ¹³⁵La decays almost 100% by electron capture to the stable ¹³⁵Ba. In this study, all important possible reactions leading to ¹³⁵La production are investigated in detail, and the corresponding theoretical yield for each reaction, obtained using the Monte Carlo method (MCNPX code), is presented. Among them, the best reaction has been selected based on cost-effectiveness and production yield, considering the facilities in Poland equipped with a medical cyclotron. ¹³⁵La is produced using the 16.5 MeV proton beam of a General Electric PETtrace cyclotron through the ¹³⁵Ba(p,n)¹³⁵La reaction. Moreover, to facilitate a consistent comparison between the theoretical calculations and the experimental measurements, the beam current and the proton beam energy were measured experimentally. The obtained proton energy was then used as the entrance energy for the theoretical calculations. The production yield was finally measured and compared with the results obtained using the MCNPX code. The results show that the experimental measurement and the theoretical calculations are in good agreement.
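
As an illustration of how such a production yield can be estimated outside MCNPX, the following thin-target activation sketch computes an end-of-bombardment activity for the ¹³⁵Ba(p,n)¹³⁵La route; the cross section, target areal density, and irradiation parameters are assumed placeholder values, not the paper's data.

```python
# Minimal sketch (illustrative, with hypothetical cross-section and target data) of
# a thin-target estimate of the 135Ba(p,n)135La activity at end of bombardment.
import numpy as np

N_A = 6.022e23
e_charge = 1.602e-19                          # C
half_life_h = 19.5
lam = np.log(2) / (half_life_h * 3600.0)      # decay constant, 1/s

# Hypothetical irradiation and target parameters
beam_current_uA = 10.0
t_irr_h         = 2.0
sigma_mb        = 200.0                       # assumed (p,n) cross section near 16.5 MeV
target_mg_cm2   = 50.0                        # areal density of enriched 135Ba
molar_mass      = 135.0                       # g/mol

protons_per_s = beam_current_uA * 1e-6 / e_charge
atoms_per_cm2 = target_mg_cm2 * 1e-3 / molar_mass * N_A
sigma_cm2     = sigma_mb * 1e-27

production_rate = protons_per_s * atoms_per_cm2 * sigma_cm2          # atoms/s
activity_eob_Bq = production_rate * (1.0 - np.exp(-lam * t_irr_h * 3600.0))
print(f"EOB activity ~ {activity_eob_Bq / 1e6:.1f} MBq "
      f"({activity_eob_Bq / 1e6 / (beam_current_uA * t_irr_h):.2f} MBq/uAh)")
```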

Keywords: efficient ¹³⁵La production, proton cyclotron energy measurement, MCNPX code, theoretical and experimental production yield

Procedia PDF Downloads 136
2893 Robust Control of a Parallel 3-RRR Robotic Manipulator via μ-Synthesis Method

Authors: A. Abbasi Moshaii, M. Soltan Rezaee, M. Mohammadi Moghaddam

Abstract:

Control of some mechanisms is difficult because of their complex dynamic equations. If part of the complexity results from uncertainties, an efficient way of dealing with it is robust control. In this way, the control procedure can be simple and fast, and finally a simple controller can be designed. One such mechanism is the 3-RRR, a parallel mechanism whose three limbs each contain three revolute joints. This paper aims to robustly control a 3-RRR planar mechanism via the μ-synthesis method and shows that the approach could also be used for other mechanisms, so that a significant problem in mechanism control can be solved. The relevant diagrams are drawn, and they show the correctness of the control process.

Keywords: 3-RRR, dynamic equations, mechanisms control, structural uncertainty

Procedia PDF Downloads 552
2892 Development and Validation of the Circular Economy Scale

Authors: Yu Fang Chen, Jeng Fung Hung

Abstract:

This study aimed to develop a circular economy scale to assess the level of recognition of the circular economy among high-level executives in businesses. The circular economy is crucial for global ESG sustainable development and poses a challenge for corporate social responsibility. The aim of promoting the circular economy is to reduce resource consumption, move towards sustainable development, reduce environmental impact, maintain ecological balance, increase economic value, and promote employment. This study developed a 23-item Circular Economy Scale, which includes three subscales: "Understanding of Circular Economy by Enterprises" (8 items), "Attitudes" (9 items), and "Behaviors" (6 items). A 5-point Likert scale was used to measure responses, with higher scores indicating higher levels of agreement with the circular economy among senior executives. The study tested 105 senior executives and used a structural equation model (SEM) to determine the extent to which the latent variables were measured by their indicators. The standardized factor loading of a measurement indicator needs to be higher than 0.7, and the average variance extracted (AVE), an index of convergent validity, should be greater than 0.5, or at least 0.45 to be acceptable. Out of the 23 items, 12 did not meet these standards and were removed, leaving 5, 3, and 3 items for the three subscales, respectively, all with factor loadings greater than 0.7. The AVE for all three subscales was greater than 0.45, indicating good construct validity. The Cronbach's alpha reliability values for the three subscales were 0.887, 0.787, and 0.734, respectively, and 0.860 for the total scale, all higher than 0.7, indicating good reliability. The Circular Economy Scale developed in this study measures three conceptual components that align with the theoretical framework of the literature review and demonstrates good reliability and validity. It can serve as a measurement tool for evaluating the degree of acceptance of the circular economy among senior executives in enterprises. In the future, this scale can be used by enterprises as an evaluation tool to further explore the impact of the circular economy on sustainable development and to promote the circular economy and sustainable development.
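
For reference, the Cronbach's alpha reported for each subscale can be computed directly from the item score matrix; the short sketch below uses simulated Likert responses rather than the study's data.

```python
# Minimal sketch: Cronbach's alpha for a subscale from a Likert item matrix
# (rows = respondents, columns = items); the data below are made up.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(105, 1))                       # shared "attitude"
subscale = np.clip(base + rng.integers(-1, 2, size=(105, 5)), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(subscale):.3f}")
```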

Keywords: circular economy, corporate social responsibility, scale development, structural equation model

Procedia PDF Downloads 78
2891 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper is an attempt to describe some of the results found through a journey of study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber. The first part of the paper presents results based on the analysis of a data sample containing 8120 ee candidates, reconstructing the Z mass for each event with an ee pair satisfying PT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb. The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd:YAG with λ = 266 nm) with LAr, through the study of the multi-photon ionization process as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10 (stat) ± 0.30 (sys)) × 10⁻⁵⁶ cm⁴.
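
As a small illustration of the first analysis step, the sketch below reconstructs the dielectron invariant mass from the transverse momenta, pseudorapidities, and azimuthal angles of the two electrons under the massless-electron approximation; the kinematics are randomly generated, not the analysed data sample.

```python
# Minimal sketch (made-up kinematics): dielectron invariant mass, assuming
# massless electrons: m^2 = 2 pT1 pT2 [cosh(eta1 - eta2) - cos(phi1 - phi2)].
import numpy as np

rng = np.random.default_rng(3)
n = 8120
pt1, pt2 = rng.uniform(25, 60, n), rng.uniform(25, 60, n)     # GeV
eta1, eta2 = rng.uniform(-2.5, 2.5, n), rng.uniform(-2.5, 2.5, n)
phi1 = rng.uniform(-np.pi, np.pi, n)
phi2 = phi1 + np.pi + rng.normal(0, 0.3, n)                   # roughly back-to-back

m_ee = np.sqrt(2.0 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
print(f"mean reconstructed mass of the toy sample: {m_ee.mean():.1f} GeV")
```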

Keywords: ATLAS, CERN, KACST, LArTPC, particle physics

Procedia PDF Downloads 342
2890 Enhancing Institutional Roles and Managerial Instruments for Irrigation Modernization in Sudan: The Case of Gezira Scheme

Authors: Mohamed Ahmed Abdelmawla

Abstract:

Achieving the Millennium Development Goals (MDGs) related to agriculture, i.e. the poverty alleviation targets, requires that the human resources involved in the agricultural sector, with special emphasis on irrigation, receive a wealth of practical experience and training. Increased food production, including staple food, is needed to overcome present and future threats to food security. This should happen within a framework of sustainable management of natural resources, elimination of unsustainable methods of production, and poverty reduction (i.e. the axes of modernization). Sound management and accurate measurement are major prerequisites of the modernization process. As such, this paper addresses the issues of discharge management and measurement at Field Outlet Pipes (FOPs) within the Gezira Scheme, where nine FOPs were randomly selected as representative locations. These FOPs extend along the Gezira Main Canal from the Kilo 57 area in the south up to Kilo 194 in the north. The following steps were followed during field data collection and measurement: for each selected FOP, a 90° V-notch thin-plate weir was placed in such a way that the water was directed to pass only through the notch. An optical survey level was used to measure the water head over the notch and at the FOP. Both the discharge rates calculated from the V-notch measurements, denoted Qc, and the adopted discharges given by the Ministry of Irrigation and Water Resources (MOIWR), denoted Qa, were obtained as the average of three replicated readings at each location. The study revealed that the FOPs overestimate and sometimes underestimate the discharges. This is attributed to the fact that the original design specifications are not fulfilled under present conditions, where water is allowed to flow day and night with high head fluctuation; the FOP is a non-modular structure, i.e. the flow depends on both upstream and downstream levels, as confirmed by the results of this study. It is convenient and informative to quantify the discharge at FOPs with weirs or Parshall flumes. The cropping calendar should be clearly determined and agreed upon before the beginning of the season, in accordance and consistency with the Sudan Gezira Board (SGB) and the Ministry of Irrigation and Water Resources. As such, water indenting should be based on actual Crop Water Requirements (CWRs), not on rules of thumb (420 m³/feddan, irrespective of crop or time of season).
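
The Qc values referred to above come from the standard thin-plate V-notch weir relation; a minimal sketch of that calculation (with an illustrative head reading and an assumed discharge coefficient) is given below.

```python
# Minimal sketch of the discharge calculation behind Qc: the standard thin-plate
# V-notch weir equation Q = (8/15) Cd sqrt(2g) tan(theta/2) H^(5/2). The head and
# discharge coefficient below are illustrative, not field data.
import math

def v_notch_discharge(head_m: float, notch_angle_deg: float = 90.0,
                      cd: float = 0.585) -> float:
    """Discharge (m^3/s) over a thin-plate V-notch weir for a measured head (m)."""
    g = 9.81
    theta = math.radians(notch_angle_deg)
    return (8.0 / 15.0) * cd * math.sqrt(2.0 * g) * math.tan(theta / 2.0) * head_m**2.5

head = 0.12                                   # example head over the notch (m)
q = v_notch_discharge(head)
print(f"Qc = {q:.4f} m3/s = {q * 1000:.1f} L/s")
```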

Keywords: management, measurement, MDGs, modernization

Procedia PDF Downloads 248
2889 Experimental Characterization of Composite Material with Non Contacting Methods

Authors: Nikolaos Papadakis, Constantinos Condaxakis, Konstantinos Savvakis

Abstract:

The aim of this paper is to determine the elastic properties (elastic modulus and Poisson's ratio) of a composite material based on non-contacting imaging methods. More specifically, the significantly reduced cost of digital cameras has made reliable, low-cost strain measurement possible. The open-source platform Ncorr, which utilizes the method of digital image correlation (DIC), is used in this paper. Measuring strain with digital image correlation involves random speckle preparation on the surface of the gauge area, image acquisition, and post-processing of the image correlation to obtain the displacement and strain fields on the surface under study. Technical issues relating to the quality of the obtained results are discussed. [0]₈ fabric glass/epoxy composite specimens were prepared and tested at different orientations: 0°, 30°, 45°, 60°, and 90°. Each test was recorded with the camera at a constant frame rate and under constant lighting conditions. The recorded images were processed with the image processing software. The parameters of the tests are reported. The strain map output obtained through strain measurement with Ncorr is validated by a) comparing the elastic properties with the values expected from classical laminate theory, and b) finite element analysis.
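
One way to obtain the classical-laminate-theory reference values used in the validation is the standard off-axis compliance transformation for a single orthotropic ply; the sketch below uses generic glass/epoxy fabric constants, not the measured properties.

```python
# Minimal sketch of the classical-laminate-theory check: off-axis modulus Ex(theta)
# of a single orthotropic ply. The ply constants are generic assumed values.
import math

E1, E2 = 22.0e9, 21.0e9        # Pa, warp/weft moduli of the fabric ply (assumed)
G12    = 3.5e9                 # Pa (assumed)
nu12   = 0.12                  # (assumed)

def off_axis_modulus(theta_deg: float) -> float:
    m, n = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    compliance = (m**4 / E1 + n**4 / E2
                  + (1.0 / G12 - 2.0 * nu12 / E1) * m**2 * n**2)
    return 1.0 / compliance

for theta in (0, 30, 45, 60, 90):
    print(f"theta = {theta:2d} deg -> Ex = {off_axis_modulus(theta) / 1e9:5.1f} GPa")
```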

Keywords: composites, Ncorr, strain map, videoextensometry

Procedia PDF Downloads 139
2888 Autogenous Diabetic Retinopathy Censor for Ophthalmologists - AKSHI

Authors: Asiri Wijesinghe, N. D. Kodikara, Damitha Sandaruwan

Abstract:

Diabetic Retinopathy (DR) is a rapidly growing health concern around the world, characterized by impaired glucose metabolism that causes long-term damage to the human retina. It is one of the primary causes of visual impairment and blindness in adults. Information on pathological changes in the retina can be recognized using ocular fundus images. In this research, we mainly focus on developing an automated diagnosis system to detect DR anomalies, namely severity level classification of DR patients (non-proliferative diabetic retinopathy approach) and vessel tortuosity measurement of untwisted vessels for the assessment of vessel anomalies (proliferative diabetic retinopathy approach). The severity classification method obtained good results for precision, recall, F-measure and accuracy (exceeding 94%) in all cross-validation settings. The ROC (Receiver Operating Characteristic) curves also showed a high AUC (Area Under Curve) percentage (exceeding 95%). User-level evaluation of severity detection achieved a high accuracy (85%) and fairly good values for each evaluation measure. Untwisted vessel detection for tortuosity measurement also produced good results with respect to sensitivity (85%), specificity (89%) and accuracy (87%).
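
A common definition of the tortuosity index used for such vessel assessment is the arc-to-chord ratio of the traced centerline; the sketch below shows that calculation on a synthetic centerline (it is not the AKSHI implementation).

```python
# Minimal sketch of a distance-metric vessel tortuosity: arc length of the traced
# centerline divided by the chord between its endpoints (synthetic data).
import numpy as np

def tortuosity(centerline_xy: np.ndarray) -> float:
    """centerline_xy: (N, 2) ordered pixel coordinates of one vessel segment."""
    steps = np.diff(centerline_xy, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord_length = np.linalg.norm(centerline_xy[-1] - centerline_xy[0])
    return arc_length / chord_length        # 1.0 for a perfectly straight vessel

t = np.linspace(0, 4 * np.pi, 200)
wiggly_vessel = np.column_stack([t * 10.0, 5.0 * np.sin(t)])
print(f"tortuosity index = {tortuosity(wiggly_vessel):.3f}")
```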

Keywords: fundus image, exudates, microaneurysms, hemorrhages, tortuosity, diabetic retinopathy, optic disc, fovea

Procedia PDF Downloads 338
2887 Residual Dipolar Couplings in NMR Spectroscopy Using Lanthanide Tags

Authors: Elias Akoury

Abstract:

Nuclear Magnetic Resonance (NMR) spectroscopy is an indispensable technique used in the structure determination of small molecules and macromolecules to study their physical properties, elucidate characteristic interactions, dynamics, and thermodynamic processes. Quantum mechanics defines the theoretical description of NMR spectroscopy and the treatment of the dynamics of nuclear spin systems. Residual dipolar couplings (RDCs) have become a routine tool for accurate structure determination by providing global orientation information of magnetic dipole-dipole interaction vectors within a common reference frame. This offers access to distance-independent angular information and insights into local relaxation. The measurement of RDCs requires an anisotropic orientation medium for the molecules to partially align along the magnetic field. This can be achieved by introducing liquid crystals or by attaching a paramagnetic center. Although anisotropic paramagnetic tags continue to mark achievements in the biomolecular NMR of large proteins, their application to small organic molecules remains uncommon. Here, we propose a strategy for the synthesis of a lanthanide tag and the measurement of RDCs in organic molecules using paramagnetic lanthanide complexes.

Keywords: lanthanide tags, NMR spectroscopy, residual dipolar coupling, quantum mechanics of spin dynamics

Procedia PDF Downloads 187
2886 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach to deal with model uncertainty, which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model averaging estimator outperforms some other commonly used model selection and model averaging estimators in some situations. The proposed method is further applied to a real data example, where its advantage is demonstrated again.
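
To make the procedure concrete, the sketch below averages three candidate Poisson regressions in Python; note that it uses simple Akaike (AIC) weights as a stand-in for the paper's unbiased Kullback-Leibler-distance estimator, and the data are simulated.

```python
# Minimal sketch of model averaging for Poisson regression (AIC weights used as a
# stand-in for the paper's unbiased KL-distance estimator; simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x1, x2, x3 = rng.normal(size=(3, n))
y = rng.poisson(np.exp(0.3 + 0.5 * x1 - 0.2 * x2))       # x3 is irrelevant

candidates = {
    "m1": np.column_stack([np.ones(n), x1]),
    "m2": np.column_stack([np.ones(n), x1, x2]),
    "m3": np.column_stack([np.ones(n), x1, x2, x3]),
}

fits = {k: sm.GLM(y, X, family=sm.families.Poisson()).fit()
        for k, X in candidates.items()}
aics = np.array([f.aic for f in fits.values()])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                              # model-averaging weights

mu_hat = sum(wi * f.fittedvalues for wi, f in zip(w, fits.values()))
print(dict(zip(fits, np.round(w, 3))))
print("first five averaged mean predictions:", np.round(mu_hat[:5], 2))
```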

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 512
2885 Modified Evaluation of the Hydro-Mechanical Dependency of the Water Coefficient of Permeability of a Clayey Sand with a Novel Permeameter for Unsaturated Soils

Authors: G. Adelian, A. Mirzaii, S. S. Yasrobi

Abstract:

This paper presents data from an extensive experimental laboratory testing program for the measurement of the water coefficient of permeability of a clayey sand under different hydraulic and mechanical boundary conditions. A novel permeameter, suitable for the study of flow in unsaturated soils under different hydraulic and mechanical loading conditions, was designed and constructed for the experimental testing program. In this work, the effects of hydraulic hysteresis, net isotropic confining stress, water flow condition, and sample dimensions on the water coefficient of permeability of the soil under study are evaluated. The experimental results showed a hysteretic variation of the water coefficient of permeability versus matrix suction and degree of saturation, with higher values in the drying portions of the soil-water characteristic curve (SWCC). The measurements of water permeability under different applied net isotropic stresses also showed that the water coefficient of permeability increased with increasing net isotropic consolidation stress. The water coefficient of permeability also appeared to be independent of the applied flow heads, water flow condition, and sample dimensions.

Keywords: water permeability, unsaturated soils, hydraulic hysteresis, void ratio, matrix suction, degree of saturation

Procedia PDF Downloads 519
2884 Possible Reasons for and Consequences of Generalizing Subgroup-Based Measurement Results to Populations: Based on Research Studies Conducted by Elementary Teachers in South Korea

Authors: Jaejun Jong

Abstract:

Many teachers in South Korea conduct research to improve the quality of their instruction. Unfortunately, many researchers generalize the results of measurements based on one subgroup to other students or to the entire population, which can cause problems. This study aims to determine examples of possible problems resulting from generalizing measurements based on one subgroup to an entire population or another group. This study is needed, as teachers’ instruction and class quality significantly affect the overall quality of education, but the quality of research conducted by teachers can become questionable due to overgeneralization. Thus, finding potential problems of overgeneralization can improve the overall quality of education. The data in this study were gathered from 145 sixth-grade elementary school students in South Korea. The result showed that students in different classes could differ significantly in various ways; thus, generalizing the results of subgroups to an entire population can engender erroneous student predictions and evaluations, which can lead to inappropriate instruction plans. This result shows that finding the reasons for such overgeneralization can significantly improve the quality of education.

Keywords: generalization, measurement, research methodology, teacher education

Procedia PDF Downloads 89
2883 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying the aerosol climate forcing. The uncertainty increases from the global to the regional scale. The problem remains unresolved due to the large discrepancies in the representation of cloud processes in climate models. Most studies on aerosol-cloud-climate and aerosol-cloud-precipitation interaction over the Indian Ocean (such as the INDOEX and CAIPEEX campaigns) are restricted either to a particular season or to a particular region. Here we developed a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products for the 2000-2015 period over the Indian Ocean. The framework relies on an observationally constrained estimate of the aerosol-induced change in cloud albedo. We partitioned the change in cloud albedo into the changes in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to a change in aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive for LWP between 120-300 g/m² and Reff between 8-24 micrometers. Using this framework, the aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The analysis shows the clearest results over the Arabian Sea, in comparison with the Bay of Bengal and the South Indian Ocean, because of the heterogeneity in aerosol species over the Arabian Sea. Over the Arabian Sea, more absorbing aerosols dominate during the winter season, whereas dust (coarse-mode aerosol particles) dominates during the pre-monsoon. In winter and pre-monsoon the aerosol forcing dominates, while during the monsoon and post-monsoon seasons the meteorological forcing dominates. Over the South Indian Ocean, more or less the same type of aerosol (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is about -5 ± 4.5 W/m² for the winter season and is reduced in the other seasons. The results provide observationally constrained estimates of aerosol indirect forcing over the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
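
A simplified version of the cloud-albedo building block in this framework can be written down directly: cloud optical depth from LWP and Reff, followed by a simple two-stream albedo approximation. The sketch below uses illustrative values only and is not the study's retrieval.

```python
# Minimal sketch: cloud optical depth tau = 3 LWP / (2 rho_w Reff) and an
# approximate two-stream cloud albedo A ~ tau / (tau + 7.7). Values illustrative.
import numpy as np

def cloud_albedo(lwp_g_m2, reff_um):
    rho_w = 1.0e6                      # g/m^3, density of liquid water
    reff_m = reff_um * 1.0e-6
    tau = 3.0 * lwp_g_m2 / (2.0 * rho_w * reff_m)
    return tau / (tau + 7.7)

lwp  = np.array([120.0, 200.0, 300.0])     # g/m^2, the sensitive LWP range
reff = np.array([8.0, 16.0, 24.0])         # micrometers

for l in lwp:
    for r in reff:
        print(f"LWP={l:5.0f} g/m2, Reff={r:4.1f} um -> albedo={cloud_albedo(l, r):.2f}")
```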

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 169
2882 EcoLife and Greed Index Measurement: An Alternative Tool to Promote Sustainable Communities and Eco-Justice

Authors: Louk Aourelien Andrianos, Edward Dommen, Athena Peralta

Abstract:

Greed, as epitomized by overconsumption of natural resources, is at the root of ecological destruction and the unsustainability of modern societies. Presently, economies rely on unrestricted structural greed, which fuels unlimited economic growth, overconsumption, and individualistic competitive behavior. Structural greed undermines the life support system on earth and threatens ecological integrity, social justice and peace. The World Council of Churches (WCC) has developed a program on ecological and economic justice (EEJ) with the aim of promoting an economy of life in which the economy is embedded in society and society in ecology. This paper aims at analyzing and assessing the economy of life (EcoLife) by offering an empirical tool to measure and monitor the root causes and effects of unsustainability resulting from human greed at the global, national, institutional and individual levels. The paper discusses critical questions such as 'what is an economy of life' and 'how can it be measured and protected from the effects of greed'. A model called GLIMS, which stands for Greed Lines and Indices Measurement System, is used to clarify the concept of greed and to help measure the economy of life index through fuzzy logic reasoning. The inputs of the model are statistical indicators of natural resource consumption, financial realities, economic performance, social welfare, and ethical and political facts. The outputs are concrete measures of three primary indices of ecological, economic and socio-political greed (ECOL-GI, ECON-GI, SOCI-GI) and one overall multidimensional economy of life index (EcoLife-I). EcoLife measurement aims to build awareness of an economy of life and to address the effects of greed in their systemic and structural aspects. It is a tool for ethical diagnosis and policy making.

Keywords: greed line, sustainability indicators, fuzzy logic, eco-justice, World Council of Churches (WCC)

Procedia PDF Downloads 317
2881 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best of the authors' knowledge, we demonstrate here experimentally, for the first time, a quantified correlation between the real-time measured optical features of ambient aerosol and off-line measured toxicity data. Using these correlations, we present a novel methodology for the real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while from a health perspective it is one of the most harmful atmospheric constituents. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on the posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, a comprehensive analysis of the existing data sets is, in many cases, very limited. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodologies for air quality monitoring are pressing issues in air pollution research. During the last decades, many experimental studies have verified that there is a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependence of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. Here we demonstrate the complete microphysical characterisation of wintertime urban ambient aerosol, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a scanning mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient aerosol, we also demonstrate the results of eco-, cyto- and genotoxicity measurements of ambient aerosol based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate the diurnal variation of the toxicities and of the AAE data deduced directly from the multi-wavelength absorption measurement results.
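
For reference, the AAE referred to above follows from a power-law dependence of the absorption coefficient on wavelength; with two wavelengths it reduces to the closed form in the sketch below (the absorption values and wavelengths are illustrative, not campaign data).

```python
# Minimal sketch of the Absorption Angstrom Exponent (AAE) from two wavelengths,
# using babs(lambda) ~ lambda^(-AAE). The numbers are illustrative only.
import numpy as np

def absorption_angstrom_exponent(b_abs_1, lambda_1_nm, b_abs_2, lambda_2_nm):
    """AAE from the absorption coefficients at two wavelengths."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(lambda_1_nm / lambda_2_nm)

b_blue, b_ir = 35.0, 12.0            # 1/Mm at 450 nm and 1064 nm (example values)
aae = absorption_angstrom_exponent(b_blue, 450.0, b_ir, 1064.0)
print(f"AAE(450/1064) = {aae:.2f}")
```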

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 297
2880 Validity and Reliability of Lifestyle Measurement of the LSAS among Recurrent Stroke Patients in Selected Hospital, Central Java, Indonesia

Authors: Meida Laely Ramdani, Earmporn Thongkrajai, Dedy Purwito

Abstract:

Lifestyle is one of the most important factors affecting health. Measurement of lifestyle behaviors is necessary for the identification of causal associations between unhealthy lifestyles and health outcomes. Many instruments have been developed to measure lifestyle, but none is specific to stroke recurrence. This study aimed to develop a new Lifestyle Adjustment Scale (LSAS) questionnaire for recurrent stroke patients in Indonesia and to measure the reliability and validity of the LSAS. The instrument, consisting of 33 items, was developed from the responses of 30 recurrent stroke patients with a maximum age of 60 years. Data were collected during October and November 2015. The properties of the instrument were evaluated by validity assessment and reliability measures. The content validity was judged adequate by a panel of five experts, with an item-level content validity index (I-CVI) of 0.97. Cronbach's alpha analysis was carried out to measure the reliability of the LSAS; the resulting Cronbach's alpha coefficient was 0.819. The LSAS items were classified under the domains of dietary habit, smoking habit, physical activity, and stress management. The Cronbach's alpha coefficients for the subscales were 0.60, 0.39, 0.67, 0.65 and 0.76, respectively, and 0.860 for the total scale. The LSAS instrument was valid and reliable and can therefore be used as a research tool among recurrent stroke patients. The development of this questionnaire has been adapted to the socio-cultural context of Indonesia.

Keywords: LSAS, recurrent stroke patients, lifestyle, Indonesia

Procedia PDF Downloads 241
2879 Experiences of Timing Analysis of Parallel Embedded Software

Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah

Abstract:

The execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. The WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis assumes sequential code running on single-core platforms. However, as computation is steadily moving towards a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing the WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically, and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible solutions to these challenges, and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving WCET estimates using the said methods and provides useful recommendations for further research in this area.

Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing

Procedia PDF Downloads 317
2878 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising for providing tools to improve and monitor soil quality in countries, in the EU, and at the global level.

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 180
2877 Offshore Wind Assessment and Analysis for South Western Mediterranean Sea

Authors: Abdallah Touaibia, Nachida Kasbadji Merzouk, Mustapha Merzouk, Ryma Belarbi

Abstract:

Accurate assessment and a better understanding of the wind resource distribution are the most important tasks for decision making before installing wind energy systems in a given region; hence our interest in the Algerian coastline and its Mediterranean Sea area. Despite its long coastline on the Mediterranean Sea, Algeria still has no strategy encouraging the development of offshore wind farms in its waters. The present work aims to estimate the offshore wind fields of the Algerian Mediterranean Sea based on 24 years of wind measurements (1995 to 2018) from seven observation stations, with measurement time steps of 30 min, 60 min, or 180 min depending on the station: two stations in Spain, two in Italy, and three on the Algerian coast, at Annaba in the east, Algiers in the centre, and Oran in the west. The idea is to use multiple measurement points to characterize the area in terms of wind potential by interpolating the average wind speed values between the available stations, thereby approximating values at locations where no measurements are available because of the difficulty of installing masts in deep water. The study is organized as follows: first, a brief description of the studied area and its climatic characteristics is given. After that, the statistical properties of the recorded data were checked by evaluating wind histograms, direction roses, and average speeds using MATLAB programs. Finally, the ArcGIS and MapInfo software packages were used to establish offshore wind maps for a better understanding of the wind resource distribution, as well as to identify windy sites for wind farm installation and power management. The study showed that Cap Carbonara is the windiest site, with an average wind speed of 7.26 m/s at 10 m, corresponding to a power density of 902 W/m², followed by the site of Cap Caccia with 4.88 m/s and a power density of 282 W/m². An average wind speed of 4.83 m/s was found for the site of Oran, corresponding to a power density of 230 W/m². The results also indicated that the dominant wind direction, where the frequencies are highest, for the site of Cap Carbonara is the west, with a frequency of 34%, an average wind speed of 9.49 m/s, and a power density of 1722 W/m². Then comes the site of Cap Caccia, where the prevailing wind direction is the north-west, with a frequency of about 20%, a wind speed of 5.82 m/s, and a power density of 452 W/m². The site of Oran comes in third place, with a dominant north direction with 32% frequency, an average wind speed of 4.59 m/s, and a power density of 189 W/m². It was also shown that the proposed method is both crucial for understanding the wind resource distribution and revealing windy sites over a large area, and effective for wind turbine micro-siting.
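
The power density figures quoted for each site follow from the cube of the measured wind speeds; a minimal sketch of that calculation on a synthetic speed series is given below (air density and the Weibull-like speeds are assumed values).

```python
# Minimal sketch of the wind power density figure quoted for each site:
# WPD = 0.5 * rho * mean(v^3) from a measured wind-speed series (synthetic here).
import numpy as np

rng = np.random.default_rng(5)
rho = 1.225                                            # kg/m^3, standard air density
v = rng.weibull(2.0, size=24 * 365) * 8.0              # hypothetical hourly speeds (m/s)

mean_speed = v.mean()
wpd = 0.5 * rho * np.mean(v**3)                        # W/m^2
print(f"mean speed = {mean_speed:.2f} m/s, wind power density = {wpd:.0f} W/m2")
```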

Keywords: wind resources, Mediterranean Sea, offshore, ArcGIS, MapInfo, wind maps, wind farms

Procedia PDF Downloads 140
2876 Relative Depth Dose Profile and Peak Scatter Factors Measurement for Co-60 Teletherapy Machine Using Chemical Dosimetry

Authors: O. Moussous, T. Medjadj

Abstract:

The suitability of a Fricke dosimeter for the measurement of relative depth dose profiles and peak scatter factors was studied. The measurements were carried out in the secondary standard dosimetry laboratory at CRNA Algiers using a collimated ⁶⁰Co gamma source teletherapy machine. The measurements were performed for different field sizes at the phantom front face, at a fixed source-to-phantom distance of 80 cm. The dose measurements were performed by first placing the dosimeters free-in-air at a source-to-detector distance (DSD) of 80.5 cm from the source. Additional measurements were made with the phantom in place. A Med-Tec 40x40x40 cm water phantom for vertical beams was used in this work as the scattering material. The phantom was placed on the irradiation bench of the cobalt unit at an SSD of 80 cm from the beam focus, and the centre of the field coincided with the geometric centre of the dosimeters placed at a depth in water of 5 mm. Relative depth dose profile and peak scatter factor measurements were carried out using our Fricke system and intercompared with similar measurements made with an ionization chamber under identical conditions. There is good agreement between the relative percentage depth-dose profiles and the PSF values measured by both systems using the water phantom.
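
For clarity, the two derived quantities can be expressed as simple ratios of the measured doses; the sketch below uses made-up Fricke readings to show the percentage depth dose and peak scatter factor calculations.

```python
# Minimal sketch (made-up readings): percentage depth dose, PDD(d) = 100 * D(d) /
# D(dmax), and peak scatter factor, PSF = dose at reference depth in phantom /
# dose measured free-in-air at the same point.
import numpy as np

depths_cm = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 15.0])
dose_in_phantom = np.array([1.000, 0.985, 0.940, 0.805, 0.590, 0.430])  # relative
dose_free_in_air = 0.955                                                # same point

pdd = 100.0 * dose_in_phantom / dose_in_phantom.max()
psf = dose_in_phantom[0] / dose_free_in_air          # reference depth = 0.5 cm here
for d, p in zip(depths_cm, pdd):
    print(f"depth {d:4.1f} cm: PDD = {p:5.1f} %")
print(f"peak scatter factor = {psf:.3f}")
```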

Keywords: Fricke dosimeter, depth–dose profiles, peak scatter factors, DSD

Procedia PDF Downloads 247
2875 The Mechanical and Electrochemical Properties of DC-Electrodeposited Ni-Mn Alloy Coating with Low Internal Stress

Authors: Chun-Ying Lee, Kuan-Hui Cheng, Mei-Wen Wu

Abstract:

The nickel-manganese (Ni-Mn) alloy coating prepared by a DC electrodeposition process in a sulphamate bath was studied. The effects of process parameters, such as current density and electrolyte composition, on the cathodic current efficiency, microstructure, internal stress, and mechanical properties were investigated. Because of its crucial role in the electroforming of microelectronic components, the development of a low-internal-stress coating with high leveling power was emphasized. It was found that both the coating's manganese content and the cathodic current efficiency increased with increasing current density. In addition, the internal stress of the deposited coating was compressive at low current densities and became tensile at higher current densities. Moreover, metallographic observation, X-ray diffraction measurement, transmission electron microscope (TEM) examination, and polarization curve measurement were conducted. It was found that the Ni-Mn coating consisted of nano-sized columnar grains and that the maximum hardness of the coating was associated with a (111) preferred orientation in the microstructure. The grain size was refined as the manganese content of the coating increased, which accordingly raised its hardness and mechanical tensile strength. In summary, the Ni-Mn coating prepared at a lower current density of 1-2 A/dm² had low internal stress, high leveling power, and better corrosion resistance.

Keywords: Ni-Mn coating, DC plating, internal stress, leveling power

Procedia PDF Downloads 366
2874 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter

Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi

Abstract:

The management of eye parameters such as convergence, accommodation, and miosis is very complex; in fact, both the neurovegetative system and the complex oculocephalogyric system come into play. We have found the "high-velocity, low-amplitude" (HVLA) technique directed at C7-T1 (where the ciliospinal center of Budge is located) to be effective in improving the convergence parameter, as assessed through measurement of the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the HVLA maneuver lasts over time, carrying out a pre-manipulation measurement, one immediately after manipulation, and one a month after manipulation. We recruited a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 gave a positive result after the HVLA maneuver, showing an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some further improved the result, others maintained it, and three subjects slightly lost the gain obtained. None of the retested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths, with the aim of addressing the oculomotricity and convergence deficits that increasingly afflict our society due to the massive use of devices and to life spent in closed and restricted environments.

Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence

Procedia PDF Downloads 72
2873 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given the variability of manual surveys, this method has shown inconsistencies in distress classification and measurement. This can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention and the error associated with it by moving toward automated distress collection methods. The automated methods refer to systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for measurement and classification of pavement cracks captured using digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and predicting pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 183