Search results for: weather measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3429

1449 Turbulence Measurement Over Rough and Smooth Bed in Open Channel Flow

Authors: Kirti Singh, Kesheo Prasad

Abstract:

A 3D acoustic Doppler velocimeter (ADV) was used in the current investigation to quantify the mean and turbulence characteristics of non-uniform open-channel flows. Results were obtained from laboratory studies analysing the behavior of sand particles under turbulent open-channel flow over rough, porous beds. The ADV data are used to calculate turbulent flow characteristics, Reynolds stresses and turbulent kinetic energy. Theoretical expressions for the distributions of Reynolds stress and vertical velocity have been constructed from the Reynolds equation and the continuity equation of 2D open-channel flow, and the measured Reynolds stress and vertical velocity profiles agree with these derived expressions. The study also uses the Navier-Stokes equations to analyse the behavior of the vertical velocity profile in the dominant region of fully developed turbulent open-channel flows, giving a new derivation of the profile. For both wide and narrow open channels, this formulation can estimate the time-averaged primary velocity in the outer region of the turbulent boundary layer.
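The turbulence quantities mentioned here follow directly from the fluctuating parts of the ADV velocity records. A minimal sketch (illustrative only, not the authors' code; per-unit-density quantities are assumed) of the Reynolds shear stress -⟨u'w'⟩ and the turbulent kinetic energy:

```python
from statistics import fmean

def turbulence_stats(u, v, w):
    """From synchronous ADV velocity samples, compute the per-unit-density
    Reynolds shear stress -<u'w'> and the turbulent kinetic energy
    k = 0.5 (<u'^2> + <v'^2> + <w'^2>)."""
    ub, vb, wb = fmean(u), fmean(v), fmean(w)   # time-averaged velocities
    up = [x - ub for x in u]                    # fluctuations u' = u - <u>
    vp = [x - vb for x in v]
    wp = [x - wb for x in w]
    tau_uw = -fmean(a * b for a, b in zip(up, wp))
    tke = 0.5 * (fmean(a * a for a in up)
                 + fmean(a * a for a in vp)
                 + fmean(a * a for a in wp))
    return tau_uw, tke
```

Applied at each measurement height, these two quantities give the Reynolds stress and TKE profiles the abstract compares against the derived expressions.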

Keywords: turbulence, bed roughness, logarithmic law, shear stress correlations, ADV, Reynolds shear stress

Procedia PDF Downloads 107
1448 Factors Affecting the Results of in vitro Gas Production Technique

Authors: O. Kahraman, M. S. Alatas, O. B. Citil

Abstract:

In determining the nutritive value of feeds used in ruminant nutrition, different methods are applied: in vivo, in vitro, in situ or in sacco. Generally, the most reliable results come from in vivo studies, but because these are laborious and expensive, time-consuming, difficult to keep under controlled experimental conditions and demanding of large numbers of samples, in vitro techniques are preferred. The most widely used in vitro techniques are the two-stage digestion technique and the gas production technique. The in vitro gas production technique is based on measurement of the CO2 released as a result of microbial fermentation of the feeds. In this review, the factors affecting the results obtained from the in vitro gas production technique (Hohenheim Feed Test) are discussed. These factors must be taken into consideration when interpreting the findings of such studies and when comparing findings reported by different researchers for the same feeds. They are discussed in three groups: factors related to the animal, factors related to the feeds, and differences in the application of the method; these factors and their effects on the results are explained. It is concluded that routine use of the in vitro gas production technique can contribute to comprehensive feed evaluation, but standardization of the technique is needed to attain more reliable results.

Keywords: In vitro, gas production technique, Hohenheim feed test, standardization

Procedia PDF Downloads 599
1447 Use of Computer and Machine Learning in Facial Recognition

Authors: Neha Singh, Ananya Arora

Abstract:

Facial expression measurement plays a crucial role in the identification of emotion; facial expression is central to studies of psychophysiology, the neural bases of emotion, and emotional disorders, to name a few. Of the various systems used to describe facial expressions, the Facial Action Coding System (FACS) has proven the most efficient and widely used. Coders using FACS can manually code facial expressions by viewing video-recorded facial behaviour at a specified frame rate and in slow motion, decomposing each expression into action units (AUs), the smallest visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean, and action units are the fundamental unit of its methodology. FACS is regarded as the standard measure for facial behaviour and finds application in fields well beyond emotion science, including facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of the discrete facial movements identified by the system, the system's psychometric evaluation, and the recommended training requirements for its software.

Keywords: facial action, action units, coding, machine learning

Procedia PDF Downloads 106
1446 Quantum Decision Making with Small Sample for Network Monitoring and Control

Authors: Tatsuya Otoshi, Masayuki Murata

Abstract:

With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also of the time required to grasp changes in network conditions, and the trade-off between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and those decisions seem to resolve trade-offs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention; however, decision-making from small samples has not been examined much so far. In this paper, we extend the quantum decision-making model to decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
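The amplitude amplification step builds on the Grover iteration (the keywords cite Grover's algorithm), under which a base success probability p₀ grows as sin²((2k+1)θ) with θ = asin(√p₀). A small illustrative sketch of why few rounds suffice (our own toy computation, not the authors' model):

```python
import math

def amplified_probability(p0, k):
    """Success probability after k rounds of amplitude amplification,
    starting from base success probability p0: sin^2((2k+1) asin(sqrt(p0)))."""
    theta = math.asin(math.sqrt(p0))
    return math.sin((2 * k + 1) * theta) ** 2

def rounds_needed(p0, target=0.9):
    """Smallest k whose amplified probability reaches `target`; for small
    p0 this grows like O(1/sqrt(p0)), the usual quadratic speedup."""
    k = 0
    while amplified_probability(p0, k) < target:
        k += 1
    return k
```

For example, a choice that is correct only a quarter of the time at baseline reaches certainty after a single amplification round, since 3·asin(0.5) = π/2.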

Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm

Procedia PDF Downloads 79
1445 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique for determining the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data-processing part is still under development. In this paper, we formulate and solve a data-selection problem that enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., data that barely reduce the confidence interval of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we both solve the problem of optimal data-space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, we prove a theorem showing that the integrated-data approach is less precise than the full-data case; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 278
1444 Intensification of Heat Transfer in Magnetically Assisted Reactor

Authors: Dawid Sołoducha, Tomasz Borowski, Marian Kordas, Rafał Rakoczy

Abstract:

The magnetic field has become an important subject of study in the past few years. A magnetic field (MF) may affect a process in many ways: it can be used to stabilize the system, to steer the operation, to activate or inhibit the process, or even to affect the vital activity of microorganisms. Using any type of magnetic field generator also delivers some heat to the system. Heat transfer is a very important phenomenon that can influence a process positively or negatively, so it is necessary to measure the heat stream transferred from the place of generation and prevent a negative influence on the operation. The aim of the presented work was to apply various types of magnetic fields and to measure the associated heat transfer. Results were obtained by continuous measurement with temperature probes at several measuring points and compiled in the form of temperature profiles. The study investigated undetermined heat transfer in a custom system equipped with a magnetic field generator. Experimental investigations are provided to explain the influence of the various types of magnetic field on the heat transfer process, and the tested processes are described by means of criteria that define heat transfer intensification under the action of a magnetic field.

Keywords: heat transfer, magnetic field, undetermined heat transfer, temperature profile

Procedia PDF Downloads 196
1443 On Estimating the Low Income Proportion with Several Auxiliary Variables

Authors: Juan F. Muñoz-Rosas, Rosa M. García-Fernández, Encarnación Álvarez-Verdejo, Pablo J. Moya-Fernández

Abstract:

Poverty measurement is a very important topic in many studies in the social sciences. One of the most important indicators when measuring poverty is the low income proportion, which gives the proportion of a population classified as poor. This indicator is generally unknown and is therefore estimated from survey data obtained by official surveys carried out by statistical agencies such as Eurostat. A key feature of such survey data is that they contain several variables. The variable used to estimate the low income proportion is called the variable of interest; the survey data may also contain additional variables, named auxiliary variables, related to the variable of interest, and when this is the case they can be used to improve the estimation of the low income proportion. In this paper, we use Monte Carlo simulation studies to analyze numerically the performance of estimators based on several auxiliary variables, considering real data sets from the 2011 European Union Survey on Income and Living Conditions. Results derived from this study indicate that the estimators based on auxiliary variables are more accurate than the naive estimator.
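For concreteness, the low income proportion (at-risk-of-poverty rate) counts individuals below a threshold, conventionally 60% of the median income in Eurostat practice. The sketch below shows a naive unweighted estimator together with a simple ratio adjustment using one auxiliary variable; both are illustrative forms, not the paper's estimators:

```python
from statistics import median

def low_income_proportion(incomes, threshold_share=0.6):
    """Naive estimator: share of individuals whose income falls below
    threshold_share * median income (60% is the Eurostat convention)."""
    line = threshold_share * median(incomes)
    return sum(1 for y in incomes if y < line) / len(incomes)

def ratio_adjusted(p_sample, aux_sample_mean, aux_pop_mean):
    """Illustrative ratio adjustment using one auxiliary variable whose
    population mean is known from an external source."""
    return p_sample * aux_pop_mean / aux_sample_mean
```

When the auxiliary variable is well correlated with the variable of interest, adjustments of this kind shrink the estimator's variance relative to the naive sample proportion, which is the effect the Monte Carlo study quantifies.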

Keywords: inclusion probability, poverty, poverty line, survey sampling

Procedia PDF Downloads 458
1442 Assessment of Exploitation Vulnerability of Quantum Communication Systems with Phase Encryption

Authors: Vladimir V. Nikulin, Bekmurza H. Aitchanov, Olimzhon A. Baimuratov

Abstract:

Quantum communication technology takes advantage of the intrinsic properties of laser carriers, such as very high data rates and low power requirements, to offer unprecedented data security. Quantum processes at the physical layer of encryption are used for signal encryption with very competitive performance characteristics. The ultimate range of applications for QC systems spans from fiber-based to free-space links, and from secure banking operations to mobile airborne and space-borne networking, where they are subjected to channel distortions. Under practical conditions, the channel can alter the optical wave front characteristics, including its phase. In addition, phase noise of the communication source and photo-detection noise alter the signal, bringing additional ambiguity into the measurement process. If quantized values of photons are used to encrypt the signal, exploitation of quantum communication links becomes extremely difficult. In this paper, we present the results of analysis and simulation studies of the effects of noise on phase estimation for quantum systems with different numbers of encryption bases and operating at different power levels.
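The effect of phase noise on basis discrimination can be illustrated with a toy model: for M equally spaced phase-encoded states, a Gaussian phase error causes a decision error whenever it exceeds the half-spacing π/M between states. This is our simplification for intuition only, not the paper's simulation:

```python
import math

def phase_error_probability(sigma, n_states):
    """Probability that zero-mean Gaussian phase noise (std dev `sigma`,
    radians) pushes the received phase past the nearest decision boundary,
    for `n_states` equally spaced phase states (boundaries at +/- pi/n_states).
    P(|N(0, sigma)| > d) = 1 - erf(d / (sigma * sqrt(2)))."""
    d = math.pi / n_states
    return 1.0 - math.erf(d / (sigma * math.sqrt(2.0)))
```

The model reproduces the qualitative trade-off in the abstract: more encryption bases (larger M) shrink the decision region, so the same channel and detector noise produces more measurement ambiguity.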

Keywords: encryption, phase distortion, quantum communication, quantum noise

Procedia PDF Downloads 553
1441 Improving the Frequency Response of a Circular Dual-Mode Resonator with a Reconfigurable Bandwidth

Authors: Muhammad Haitham Albahnassi, Adnan Malki, Shokri Almekdad

Abstract:

In this paper, a method for reconfiguring the bandwidth of a circular dual-mode resonator is presented. The method concerns the optimized geometry of a structure that may host the tuning elements, which are typically RF (radio frequency) switches; the tuning elements themselves, and their performance during tuning, are not the focus of this paper. The designed resonator can reconfigure its fractional bandwidth by adjusting the inter-coupling level between the degenerate modes, while at the same time improving its response by adjusting the external-coupling level and keeping the center frequency fixed. The inter-coupling level is adjusted by changing the dimensions of the perturbation element, and the external-coupling level by changing one of the feeder dimensions; the design was arrived at via optimization. Simulation and measurement results of the designed and implemented filters agree, showing good improvement in return-loss values and in the stability of the center frequency.
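For dual-mode filters of this kind, the inter-coupling level is commonly extracted from the split of the two degenerate-mode frequencies; a standard textbook relation (our illustration, not necessarily the authors' extraction method) is k = (f₂² − f₁²)/(f₂² + f₁²), with the fractional bandwidth taken about the centre frequency:

```python
def coupling_and_fbw(f1, f2):
    """Inter-coupling coefficient from the two split resonant frequencies
    (f1 <= f2) of the degenerate modes, plus the fractional bandwidth
    about their centre frequency."""
    k = (f2 ** 2 - f1 ** 2) / (f2 ** 2 + f1 ** 2)
    f0 = (f1 + f2) / 2.0
    fbw = (f2 - f1) / f0
    return k, fbw
```

Changing the perturbation element moves f₁ and f₂ apart or together, which is how the design varies k (and hence the fractional bandwidth) while the centre frequency f₀ is held fixed.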

Keywords: dual-mode resonators, perturbation theory, reconfigurable filters, software defined radio, cognitive radio

Procedia PDF Downloads 167
1440 Experimental Investigation of the Aeroacoustics Field for a Rectangular Jet Impinging on a Slotted Plate: Stereoscopic Particle Image Velocimetry Measurement before and after the Plate

Authors: Nour Eldin Afyouni, Hassan Assoum, Kamel Abed-Meraim, Anas Sakout

Abstract:

The acoustics of an impinging jet hold significant importance in engineering. In HVAC systems, jet impingement can generate noise that degrades acoustic comfort. This paper presents an experimental study of a rectangular air jet impinging on a slotted plate, investigating the correlation between sound emission and turbulence dynamics. The experiment was conducted with an impact ratio L/H = 4 and a Reynolds number Re = 4700. The study shows that coherent structures within the impinging jet are responsible for self-sustaining tone production. To demonstrate this, a specific experimental setup consisting of two simultaneous stereoscopic particle image velocimetry (S-PIV) measurements was developed to track vortical structures both before and after the plate, in addition to acoustic measurements. The results reveal a significant correlation between the acoustic waves and the passage of coherent structures, and variations in the arrangement of vortical structures between the upstream and downstream sides of the plate were observed. This analysis of the flow dynamics can enhance our understanding of slot noise.
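The correlation between the acoustic signal and the passage of coherent structures can be quantified by scanning a normalised cross-correlation over candidate lags between the two records; a minimal stdlib sketch (illustrative, not the authors' processing chain):

```python
def find_delay(a, b, max_lag):
    """Return the delay (in samples) by which series `b` lags series `a`,
    chosen as the non-negative lag maximising the normalised
    cross-correlation of the overlapping segments."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((p - mx) * (q - my) for p, q in zip(x, y))
        den = (sum((p - mx) ** 2 for p in x) *
               sum((q - my) ** 2 for q in y)) ** 0.5
        return num / den if den else 0.0
    return max(range(max_lag + 1), key=lambda d: corr(a[:len(a) - d], b[d:]))
```

Applied to a vortex-passage indicator from the S-PIV fields and the microphone signal, the lag with peak correlation gives the travel time linking the hydrodynamic event to the emitted tone.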

Keywords: impinging jet, coherent structures, SPIV, aeroacoustics

Procedia PDF Downloads 83
1439 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry Guinea

Authors: Arafan Traore, Teiji Watanabe

Abstract:

Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which has resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000 and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an important increase in the average LST, from 25.21°C in 1986 to 27.06°C in 2000 and 29.34°C in 2016, an average gain in surface temperature of 4.13°C over the 30-year study period. Additionally, Pearson correlation (r) analysis between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in NDVI can reduce LST intensity, whereas an increase in NDBI may strengthen LST intensity in the study area. Although Landsat data were found efficient for assessing the thermal environment in Conakry, the method needs to be refined with in situ LST measurements in future studies. The results of this study may assist urban planners, scientists and policy makers concerned with climate variability in making decisions that enhance sustainable environmental practices in Conakry.
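The Pearson correlation used to relate LST to NDVI and NDBI is the standard product-moment coefficient, computed here over paired per-pixel values; a small self-contained sketch (with made-up values, purely for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length series,
    e.g. per-pixel LST vs NDVI (expected negative) or LST vs NDBI
    (expected positive)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

A value near -1 for LST vs NDVI and near +1 for LST vs NDBI is the pattern the study reports: vegetated pixels run cooler, built-up pixels run hotter.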

Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change

Procedia PDF Downloads 247
1438 Physical Planning Strategies for Disaster Mitigation and Preparedness in Coastal Region of Andhra Pradesh, India

Authors: Thimma Reddy Pothireddy, Ramesh Srikonda

Abstract:

India is frequently prone to natural disasters such as floods, droughts, cyclones, earthquakes and landslides due to its geographical situation, a persistent phenomenon observed over the last ten decades. A recent survey indicates that about 60% of the landmass is prone to earthquakes of various intensities on the Richter scale, over 40 million hectares are prone to floods, about 8% of the total area is prone to cyclones, and 68% of the area is vulnerable to drought. Climate change is likely to be perceived through the experience of extreme weather events, and there is growing societal concern about it, given the potential impacts of associated natural hazards such as cyclones, flooding, earthquakes and landslides. Among recent natural calamities, Cyclone Hudhud made landfall on the northern coast of Andhra Pradesh at Visakhapatnam on 12 October 2014 with wind speeds of 175-200 km/h, and records show tidal waves reaching heights of 14 m and above; this alarms us to focus critically on planning issues and to find appropriate solutions. The existing institutional set-up and responsive management mechanism for disaster mitigation are effective, but considerations at the settlement-planning level that would allow mitigation operations are not adequate. This paper seeks to understand how the response to climate change can happen through adaptation to climate hazards, and to work out an appropriate mechanism and disaster-receptive settlement planning for responding to natural (and climate-related) calamities, particularly cyclones and floods. Statistics indicate 40 million hectares of flood-prone area (5% of the area) and 1,853 km of cyclone-prone coastline in India, so appropriate physical planning considerations are essential and crucial to improve preparedness, operate mitigation measures effectively and minimize loss and damage.
The Vijayawada capital region, which is susceptible to cyclones and floods, has been studied with respect to trajectory analysis to work out risk vulnerability and to integrate disaster mitigation into physical planning considerations.

Keywords: meta analysis, vulnerability index, physical planning, trajectories

Procedia PDF Downloads 249
1437 An Experimental Investigation of the Effect of Control Algorithm on the Energy Consumption and Temperature Distribution of a Household Refrigerator

Authors: G. Peker, Tolga N. Aynur, E. Tinar

Abstract:

In order to determine the energy consumption level and cooling characteristics of a domestic refrigerator controlled by different cooling-system algorithms, a side-by-side (SBS) refrigerator was tested in a temperature- and humidity-controlled chamber. Two control algorithms, a so-called drop-in algorithm and a frequency-controlled variable-capacity compressor algorithm, were tested on the same refrigerator; the cooling characteristics were investigated for both cases and the results compared with each other. The main comparison parameters between the two algorithms were temperature distribution, energy consumption, evaporation and condensation temperatures, and refrigerator run times. Standard energy consumption tests carried out on the same appliance resulted in almost the same energy consumption levels, with a difference of 1.5%, and the power-consumption profiles of the refrigerator under the two algorithms were found to be similar. Following the associated energy measurement standard, the temperature values of the test packages were measured to be slightly higher for the frequency-controlled algorithm than for the drop-in algorithm. This paper details this experimental study and compares the findings under the same standard conditions.

Keywords: control algorithm, cooling, energy consumption, refrigerator

Procedia PDF Downloads 373
1436 Multivariate Analysis of the Relationship between Professional Burnout, Emotional Intelligence and Health Level in Teachers University of Guayaquil

Authors: Viloria Marin Hermes, Paredes Santiago Maritza, Viloria Paredes Jonathan

Abstract:

The aim of this study is to assess the prevalence of burnout syndrome in a sample of 600 professors at the University of Guayaquil (Ecuador) using the Maslach Burnout Inventory (M.B.I.). In addition, the effects of professional burnout on health were assessed using the General Health Questionnaire (G.H.Q.-28), and the influence of emotional intelligence on the prevention of its symptoms was assessed using the Spanish version of the Trait Meta-Mood Scale (T.M.M.S.-24). After confirmation of the underlying factor structure, the three measurement tools showed high levels of internal consistency, and specific cut-off points were proposed for this group of Latin American academics on the M.B.I. Statistical analysis showed the syndrome is extensively present, particularly at medium levels, with notably low scores for professional self-esteem. Canonical correspondence analysis revealed that low levels of self-esteem are related to depression and that a lack of personal resources is related to anxiety and insomnia, whereas the ability to perceive and control emotions and feelings improves perceptions of professional effectiveness and performance.

Keywords: burnout, academics, emotional intelligence, general health, canonical correspondence analysis

Procedia PDF Downloads 370
1435 The Moderating Role of the Employees' Green Lifestyle to the Effect of Green Human Resource Management Practices to Job Performance: A Structural Equation Model (SEM)

Authors: Lorraine Joyce Chua, Sheena Fatima Ragas, Flora Mae Tantay, Carolyn Marie Sunio

Abstract:

The Philippines is one of the countries most affected by weather-related disasters. The occurrence of natural disasters in the country is increasing due to environmental degradation, making environmental preservation a growing trend in society, including the corporate world. Most organizations implemented green practices in order to lower expenses, unaware that some of these practices were already part of a new trend in human resource management known as Green Human Resource Management (GHRM). GHRM refers to business organizations implementing HR policies, programs, processes and techniques that bring environmental impact and sustainability practices to the organization. In relation to this, the study hypothesizes that implementing GHRM practices in the workplace will spill over into employees' lifestyles, and that such a lifestyle may moderate the impact of GHRM practices on job performance. Private industries located in the Philippines' National Capital Region (NCR) were purposively selected for this study; they had to be ISO 14001 certified or currently aiming for such certification. Employee respondents were randomly selected and asked to answer a reliable and valid researcher-made questionnaire. Structural equation modeling (SEM) supported the hypothesis that GHRM practices may spill over into employees' lifestyles, stimulating individuals to adopt a green lifestyle that moderates the impact of GHRM on job performance. It can also be implied that GHRM practices help shape employees to become environmentally aware and responsible, which may help them preserve the environment. The findings of this study may encourage human resource practitioners to implement GHRM practices in the workplace in order to take part in sustaining the environment while maintaining or improving employees' job performance and keeping them motivated.
This study can serve as a basis for future research on strengthening GHRM implementation in the Philippines. Future studies may focus on the impact of GHRM on other factors, such as job loyalty and job satisfaction of employees in specific industries, which would greatly contribute to the GHRM community in the Philippines.

Keywords: GHRM practices, Green Human Resource Management, Green Lifestyle, ISO14001, job performance, Philippines

Procedia PDF Downloads 266
1434 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis

Authors: Tawfik Thelaidjia, Salah Chenikher

Abstract:

Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification based on the extracted features. In this paper, features are extracted from faulty bearing vibration signals by combining the signal's kurtosis with features obtained by preprocessing the vibration signal samples with the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional feature vector for the vibration signal is obtained. After feature extraction, a support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
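One of the features mentioned, the kurtosis of the vibration record, can be computed directly; a sketch using the Fisher (excess) definition, where a healthy bearing's near-Gaussian signal sits near 0 and impulsive fault signatures push the value strongly positive (our illustration, not necessarily the authors' exact estimator):

```python
def kurtosis(signal):
    """Sample excess kurtosis (Fisher definition: Gaussian -> 0) of a
    vibration record. Repetitive impacts from a bearing defect make the
    distribution heavy-tailed, which is why kurtosis is a common
    fault-sensitive feature."""
    n = len(signal)
    m = sum(signal) / n
    m2 = sum((x - m) ** 2 for x in signal) / n   # second central moment
    m4 = sum((x - m) ** 4 for x in signal) / n   # fourth central moment
    return m4 / (m2 * m2) - 3.0
```

In the paper's pipeline this scalar is appended to the wavelet-derived features to form the 7-dimensional vector fed to the SVM.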

Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement

Procedia PDF Downloads 437
1433 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements

Authors: Sabiu Bala Muhammad, Rosli Saad

Abstract:

Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another data set to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared with plots of the observed data, using the same colour scale and blanking, for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), the standard error (SE) and the weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
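The workflow described, log-transforming the resistivities and then regressing log ρₜ on predictors such as log ρₐ, X and Z, can be sketched with ordinary least squares via the normal equations. The solver below is generic (illustrative code with synthetic coefficients, not the study's fitted models):

```python
def fit_mlr(rows, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting. `rows` holds
    predictor tuples (already log-transformed where needed), `y` the
    response (e.g. log true resistivity). Returns [intercept, b1, b2, ...]."""
    X = [[1.0, *r] for r in rows]          # prepend intercept column
    p = len(X[0])
    A = [[sum(xi[j] * xi[k] for xi in X) for k in range(p)] for j in range(p)]
    b = [sum(xi[j] * yi for xi, yi in zip(X, y)) for j in range(p)]
    for j in range(p):                     # forward elimination with pivoting
        piv = max(range(j, p), key=lambda r: abs(A[r][j]))
        A[j], A[piv] = A[piv], A[j]
        b[j], b[piv] = b[piv], b[j]
        for r in range(j + 1, p):
            f = A[r][j] / A[j][j]
            A[r] = [arv - f * ajv for arv, ajv in zip(A[r], A[j])]
            b[r] -= f * b[j]
    coef = [0.0] * p
    for j in reversed(range(p)):           # back substitution
        coef[j] = (b[j] - sum(A[j][k] * coef[k]
                              for k in range(j + 1, p))) / A[j][j]
    return coef
```

The fitted coefficients are then applied to a held-out data set, as in the study's validation step, and the predictions back-transformed by exponentiation.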

Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity

Procedia PDF Downloads 276
1432 Climate Change and Migration in the Semi-arid Tropic and Eastern Regions of India: Exploring Alternative Adaptation Strategies

Authors: Gauri Sreekumar, Sabuj Kumar Mandal

Abstract:

Contributing about 18% of India's Gross Domestic Product, the agricultural sector plays a significant role in the Indian rural economy. Although agriculture is the primary source of livelihood for more than half of India's population, most farmers are marginal and small holders facing several challenges due to agro-climatic shocks, and climate change is expected to increase the risk in regions that are highly agriculture-dependent. With systematic and scientific evidence of changes in rainfall, temperature and other extreme climate events, migration has started to emerge as a survival strategy for farm households. Against this backdrop, our study combines these two strands of literature and explores whether migration is the only adaptation strategy for farmers once they experience crop failures due to adverse climatic conditions. Combining temperature and rainfall information from the weather data provided by the Indian Meteorological Department with household-level panel data on Indian states in the Eastern and Semi-Arid Tropics regions from the Village Dynamics in South Asia (VDSA) database, collected by the International Crops Research Institute for the Semi-Arid Tropics, we form a rich panel data set for the years 2010-2014. A recursive econometric model is used to establish the three-way climate change-yield-migration nexus while addressing the role of irrigation and local non-farm income diversification. Using the three-stage least squares estimation method, we find that climate-change-induced yield loss is a major driver of farmers' migration, while irrigation and local non-farm income diversification are found to mitigate this adverse impact. Based on our empirical results, we suggest enhancing irrigation facilities and making local non-farm income diversification opportunities available to increase farm productivity and thereby reduce farmers' migration.

Keywords: climate change, migration, adaptation, mitigation

Procedia PDF Downloads 64
1431 The Effect of Information Technology on the Quality of Accounting Information

Authors: Mohammad Hadi Khorashadi Zadeh, Amin Karkon, Hamid Golnari

Abstract:

This study, carried out in 2014, investigated the impact of information technology on the quality of accounting information. From a population of 425 executives of companies listed on the Tehran Stock Exchange, a sample of 84 managers was drawn using the Cochran formula and simple random sampling. Data were collected with questionnaires: some of the information technology questions came from standardized questionnaires, and the rest were designed according to the components under study. After distribution and collection of the questionnaires, data analysis and hypothesis testing were conducted with structural equation modeling in the SmartPLS 2 software, in two parts covering the measurement model and the structural model. In the first part, the technical characteristics of the questionnaire, including reliability and convergent and divergent validity, were checked for PLS; in the second part, the significance coefficients were used to examine the research hypotheses. The results showed that information technology and its dimensions (timeliness, relevance, accuracy, adequacy, and actual transfer rate) affect the quality of accounting information of companies listed on the Tehran Stock Exchange.

Keywords: information technology, information quality, accounting, transfer speed

Procedia PDF Downloads 277
1430 Developing Fault Tolerance Metrics of Web and Mobile Applications

Authors: Ahmad Mohsin, Irfan Raza Naqvi, Syda Fatima Usamn

Abstract:

Applications with a higher fault tolerance index are considered more reliable and trustworthy, driving quality. In recent years, application development has shifted from traditional desktop and web software to native and hybrid applications for web and mobile platforms. With the emergence of the Internet of Things (IoT), cloud and big data trends, the need to measure fault tolerance for these complex applications has increased in order to evaluate their performance. There is a phenomenal gap between the development of fault tolerance metrics and their measurement. Classic quality metric models focused on metrics for traditional systems, ignoring the software, hardware and deployment characteristics of today's applications. In this paper, we propose simple metrics to measure fault tolerance that consider the general requirements of web and mobile applications. We align factors and sub-factors using the Goal-Question-Metric (GQM) approach, considering the nature of web and mobile apps. A systematic mathematical formulation is given to quantify the metrics. Three web and mobile applications are selected to measure the fault tolerance factors using the formulated metrics. The applications are then analysed on the basis of observations made in a controlled environment on different mobile devices. Quantitative results are presented depicting the fault tolerance of the respective applications.
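A GQM-derived fault tolerance index is often computed as a weighted aggregation of measured sub-factor scores. The sketch below shows one plausible formulation of such an index; the sub-factor names, weights and scores are illustrative assumptions, not the metrics formulated in the paper:

```python
def fault_tolerance_index(scores: dict, weights: dict) -> float:
    """Weighted aggregate of normalized sub-factor scores (each in 0..1)."""
    assert set(scores) == set(weights)
    total_w = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_w

# Hypothetical sub-factors for one mobile app, each normalized to 0..1.
scores = {"error_handling": 0.8, "recovery_time": 0.6,
          "failover": 0.9, "data_integrity": 0.7}
weights = {"error_handling": 3, "recovery_time": 2,
           "failover": 3, "data_integrity": 2}

fti = fault_tolerance_index(scores, weights)
print(round(fti, 2))  # 0.77
```

Computing the same index for several applications on different devices then allows the kind of side-by-side quantitative comparison the abstract describes.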

Keywords: web and mobile applications, reliability, fault tolerance metric, quality metrics, GQM based metrics

Procedia PDF Downloads 344
1429 Incident Management System: An Essential Tool for Oil Spill Response

Authors: Ali Heyder Alatas, D. Xin, L. Nai Ming

Abstract:

An oil spill emergency can vary in size and complexity, subject to factors such as the volume and characteristics of the spilled oil, the incident location, the impacted sensitivities and the resources required. A major incident typically involves numerous stakeholders, including the responsible party, response organisations, government authorities across multiple jurisdictions, local communities and a spectrum of technical experts. An incident management team will encounter numerous challenges. Factors such as limited access to the location, adverse weather, poor communication and a lack of pre-identified resources can impede a response, and the delays caused by an inefficient response can exacerbate the impacts on the wider environment and on socio-economic and cultural resources. It is essential that all parties work on the basis of defined roles, responsibilities and authority, and ensure the availability of sufficient resources. To promote steadfast coordination and overcome the challenges highlighted, an Incident Management System (IMS) offers an essential tool for oil spill response: it provides clarity in command and control, improves communication and coordination, facilitates cooperation between stakeholders, and integrates the resources committed. Accordingly, a comprehensive review of the existing literature serves to illustrate the application of IMS in oil spill response to overcome the common challenges faced in a major incident. With a primary audience of practitioners in mind, this study discusses the key principles of incident management that enable an effective response, along with pitfalls and challenges, particularly the tension between government and industry; case studies are used to frame learning and issues consolidated from previous research, and to provide the context linking practice with theory.
It also features the industry approach to incident management, which was further crystallized as part of a review by the Joint Industry Project (JIP) established in the wake of the Macondo well control incident. The authors posit that a common IMS that can be adopted across the industry not only enhances response capacity for a major oil spill incident but is essential to the global preparedness effort.

Keywords: command and control, incident management system, oil spill response, response organisation

Procedia PDF Downloads 156
1428 Analyzing Current Transformers Saturation Characteristics for Different Connected Burden Using LabVIEW Data Acquisition Tool

Authors: D. Subedi, S. Pradhan

Abstract:

Current transformers (CTs) are an integral part of the power system because they provide a proportionally scaled, safe current for protection and measurement applications. However, when the power system experiences an abnormal situation leading to a huge current flow, this current is proportionally injected into the protection and metering circuit. Since protection and metering equipment is designed to withstand only a certain amount of current for a certain time, these high currents pose a risk to personnel and equipment. During such instances, therefore, the CT saturation characteristics have a strong influence on the safety of both personnel and equipment, and on the reliability of the protection and metering system. This paper shows the effect of burden on the accuracy limiting factor/instrument security factor of current transformers, and also the change in the saturation characteristics of the CTs. The response of the CT to varying levels of overcurrent at different connected burdens is captured using the LabVIEW data acquisition software. Analysis is performed on the real-time data gathered with LabVIEW, and the variation of the current transformer saturation characteristics with changes in burden is discussed.
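The burden dependence at issue can be approximated with the standard rule of thumb that the saturation EMF of the core is fixed, so the effective accuracy limiting factor (ALF) scales with the total loop impedance. A minimal sketch of that approximation (the nameplate values are invented, and this simple formula ignores remanence and waveform effects):

```python
def effective_alf(alf_rated: float, r_ct: float,
                  z_burden_rated: float, z_burden_actual: float) -> float:
    """Approximate accuracy limiting factor at a non-rated burden.

    Assumes a constant saturation EMF, giving
    ALF_eff = ALF_rated * (R_ct + Zb_rated) / (R_ct + Zb_actual).
    """
    return alf_rated * (r_ct + z_burden_rated) / (r_ct + z_burden_actual)

# Hypothetical 5P20 protection CT: rated ALF 20, Rct 0.5 ohm, rated burden 2 ohm.
alf_half_burden = effective_alf(20.0, 0.5, 2.0, 1.0)   # lighter burden
alf_over_burden = effective_alf(20.0, 0.5, 2.0, 4.0)   # heavier burden
print(round(alf_half_burden, 2), round(alf_over_burden, 2))
```

A lighter burden raises the current level at which the CT saturates, while an over-burdened CT saturates well below its rated ALF, which is exactly the behaviour the measured saturation curves are expected to show.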

Keywords: accuracy limiting factor, burden, current transformer, instrument security factor, saturation characteristics

Procedia PDF Downloads 415
1427 Steel Dust as a Coating Agent for Iron Ore Pellets at Ironmaking

Authors: M. Bahgat, H. Hanafy, H. Al-Tassan

Abstract:

Cluster formation is an inherent phenomenon during direct reduction processes in shaft furnaces. Decreasing the reducing temperature to avoid this problem can cause a significant drop in throughput. In order to prevent sticking of the pellets, a coating material that is essentially inactive under the reducing conditions prevailing in the shaft furnace should be applied to cover the outer layer of the pellets. In the present work, steel dust is used as a coating material for iron ore pellets to explore the effectiveness of dust coating and to determine the best coating conditions. Steel dust coating was applied to iron ore pellets in various concentrations: dust slurry concentrations of 5.0-30% were used to obtain coated steel dust amounts of 1.0-5.0 kg per ton of iron ore. Pellets coated at the various concentrations were reduced isothermally by the weight-loss technique in a gas mixture simulating the composition of the reducing gases in shaft furnaces. The influence of the various coating conditions on the reduction behavior and the morphology was studied, and the optimally reduced samples were compared by sticking index measurement. It was found that the optimum steel dust coating condition, achieving higher reducibility with a lower sticking index, was a 30% steel dust slurry concentration with 3.0 kg of steel dust per ton of ore.
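In the weight-loss technique, the degree of reduction is commonly computed from the measured mass loss relative to the removable oxygen in the sample. A minimal sketch of that bookkeeping (the sample masses and oxygen content below are illustrative, not the paper's measurements):

```python
def reduction_degree(initial_mass_g: float, current_mass_g: float,
                     removable_oxygen_g: float) -> float:
    """Percent reduction = mass of oxygen removed / removable oxygen * 100."""
    oxygen_removed = initial_mass_g - current_mass_g
    return 100.0 * oxygen_removed / removable_oxygen_g

# Hypothetical pellet: 10.00 g initial mass, 2.87 g of removable oxygen
# (roughly the oxygen bound in Fe2O3 for this sample size).
initial = 10.00
removable_o = 2.87
mass_after = 9.14   # g, measured during isothermal reduction

deg = reduction_degree(initial, mass_after, removable_o)
print(round(deg, 1))  # 30.0
```

Logging this quantity against time for each coating concentration yields the reducibility curves from which the optimum coating condition is selected.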

Keywords: reduction, ironmaking, steel dust, coating

Procedia PDF Downloads 302
1426 Automotive Emotions: An Investigation of Their Natures, Frequencies of Occurrence and Causes

Authors: Marlene Weber, Joseph Giacomin, Alessio Malizia, Lee Skrypchuk, Voula Gkatzidou

Abstract:

Technological and sociological developments in the automotive sector are shifting the focus of design towards developing a better understanding of driver needs, desires and emotions. Human-centred design methods are being applied more frequently to automotive research, including the use of systems that detect human emotions in real time. One method for non-contact measurement of emotion with low intrusiveness is facial expression analysis (FEA). This paper describes a research study that investigated the emotional responses of 22 participants in a naturalistic driving environment using a multi-method approach. The research explored the possibility of investigating emotional responses and their frequencies during naturalistic driving through real-time FEA. Observational analysis was conducted to assign causes to the collected emotional responses. In total, 730 emotional responses were measured in the collective study time of 440 minutes, and causes were assigned to 92% of them. This research establishes and validates a methodology for studying emotions and their causes in the driving environment, through which the systems and factors causing positive and negative emotional effects can be identified.
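Once FEA detections are time-stamped and paired with observationally assigned causes, the frequency and cause statistics reported above reduce to simple tallying. A toy sketch of that bookkeeping (the event log, emotion labels and causes are invented for illustration):

```python
from collections import Counter

# Hypothetical real-time FEA log: (timestamp_s, emotion, assigned_cause).
events = [
    (12.0, "surprise", "pedestrian"),
    (75.5, "anger", "traffic"),
    (130.2, "joy", "music"),
    (200.8, "anger", "traffic"),
    (411.0, "surprise", None),   # no cause could be assigned
]

by_emotion = Counter(e for _, e, _ in events)
assigned = [c for _, _, c in events if c is not None]
assignment_rate = 100.0 * len(assigned) / len(events)

print(by_emotion.most_common(1), f"{assignment_rate:.0f}% of causes assigned")
```

Scaled to the study's 730 responses over 440 minutes, the same tallies yield per-emotion frequencies and the 92% cause-assignment rate.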

Keywords: affective computing, case study, emotion recognition, human computer interaction

Procedia PDF Downloads 203
1425 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System

Authors: Hassan Qandil

Abstract:

Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped and semi-cylindrical. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. Adopting an equal-groove-width assumption for the poly(methyl methacrylate) (PMMA) prisms, the main target is to maximize the ray intensity on the receiver's aperture and therefore achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated in AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, providing optical and thermal analysis at both the lens and receiver apertures, with conditions set from Dallas, TX weather data. Once the characterization of the lenses is finalized, receivers are designed based on the optimized lens aperture size. Several cavity shapes, including triangular, arc-shaped and trapezoidal, are tested in combination with a variety of receiver materials, working fluids, heat transfer mechanisms and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated in COMSOL coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, future research directions are pointed out, including the coupling of the collector-receiver system with an end-user power generator and the use of a multi-layered genetic algorithm for comparative studies.
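At the heart of any Fresnel ray-tracing step is Snell's law applied at each PMMA prism facet. The sketch below solves the single-wavelength, normal-incidence case for one groove: given a prism angle, find the ray deviation, then invert by bisection to get the groove angle that steers a ray toward the focal spot. The refractive index and target angles are illustrative; the actual optimization also handles chromatic aberration and varying sun angles:

```python
import math

N_PMMA = 1.49  # approximate refractive index of PMMA near 589 nm

def exit_deviation(prism_angle_deg: float, n: float = N_PMMA) -> float:
    """Deviation of a normally incident ray leaving an inclined exit facet.

    Inside the lens the ray meets the facet at the prism angle a;
    Snell's law n*sin(a) = sin(t) gives the exit angle t, and the
    deviation from the original direction is t - a (degrees).
    """
    a = math.radians(prism_angle_deg)
    s = n * math.sin(a)
    if s >= 1.0:                      # total internal reflection
        return float("nan")
    return math.degrees(math.asin(s)) - prism_angle_deg

def groove_angle_for(target_dev_deg: float) -> float:
    """Bisection for the prism angle that bends the ray by target_dev_deg."""
    lo, hi = 0.0, 40.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if exit_deviation(mid) < target_dev_deg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Groove that steers a ray 10 degrees toward the focal spot:
angle = groove_angle_for(10.0)
print(round(angle, 2), round(exit_deviation(angle), 2))
```

Repeating this per groove across the lens aperture, and per wavelength for chromatic effects, gives the prism-angle table the abstract says the algorithm outputs.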

Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar

Procedia PDF Downloads 155
1424 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes

Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun

Abstract:

The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving calibration accuracy and saving calibration time. To reduce the time consumed by calibration, proper calibration gauges should be chosen and a reasonable calibration method developed. In addition, the exact optimal solution should be obtained by accurately removing the gross errors within the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m-long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove the gross errors within the calibration data. Finally, we find the optimal solution identifying the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
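The final Levenberg-Marquardt step can be illustrated on a deliberately tiny problem: a generic damped Gauss-Newton loop fitting two parameters of an exponential model. This is a sketch of the algorithm itself, not of the paper's arm kinematics or its objective function:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=50, lam=1e-3):
    """Minimal LM loop: damped Gauss-Newton steps on sum-of-squares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J + lam * np.eye(len(p))   # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p - step
        if (residual(p_new) ** 2).sum() < (r ** 2).sum():
            p, lam = p_new, lam * 0.5        # accept step, reduce damping
        else:
            lam *= 10.0                      # reject step, damp harder
    return p

# Toy model y = a * exp(b * x) with ground truth (a, b) = (2, -1).
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.0 * x)

res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])

p_hat = levenberg_marquardt(res, jac, [1.0, 0.0])
print(np.round(p_hat, 3))
```

In the calibration proper, the residual vector would instead hold the deviations between measured and modelled cone-hole distances, and the parameter vector the kinematic parameter errors of the arm.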

Keywords: AACMM, kinematic model, parameter identify, measurement accuracy, calibration

Procedia PDF Downloads 83
1423 Prevalence of Metabolic Syndrome According to Different Criteria in Population over 20 Years Old in Ahvaz

Authors: Armaghan Moravej Aleali, Hajieh Shahbazian, Seyed Mahmoud Latifi, Leila Yazdanpanah

Abstract:

Objective: Metabolic syndrome (also called insulin resistance syndrome or syndrome X) is a cluster of abdominal obesity, hypertension, glucose intolerance and lipid abnormalities (elevated triglycerides, elevated LDL and decreased HDL) that increases the incidence of diabetes and the risk of cardiovascular disease. The aim of this study was to investigate the prevalence of metabolic syndrome in people over 20 years of age in Ahvaz according to the IDF, ATP III, Harmonized I and Harmonized II criteria. Material and Methods: A cross-sectional study with random cluster sampling was carried out in six health centers in Ahvaz. After obtaining informed consent, a questionnaire including demographic data was filled in for each person, and examinations were performed, including blood pressure in the sitting position and measurement of weight, height and waist circumference. Results: In total, 912 people (434 men (47.2%) and 478 women (52.2%)) were evaluated. The mean age was 42.27 ± 14 years (44.2 ± 14.26 for men and 40.5 ± 13.5 for women). The prevalence of metabolic syndrome was 22.8%, 28.4%, 30.9% and 16.9% according to the ATP III, IDF, Harmonized I and Harmonized II criteria, respectively, and increased with age in both sexes. IDF and Harmonized I showed the highest kappa agreement (0.94). Conclusion: The results show a high prevalence of metabolic syndrome in Ahvaz; therefore, identification of the risk factors should be attempted to prevent metabolic syndrome.
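The kappa agreement between two diagnostic criteria comes from their 2x2 cross-classification. A minimal sketch of Cohen's kappa on made-up counts (the cross-table below is illustrative, not the Ahvaz data):

```python
def cohens_kappa(both_pos: int, only_a: int, only_b: int, both_neg: int) -> float:
    """Agreement between two binary classifications beyond chance."""
    n = both_pos + only_a + only_b + both_neg
    p_obs = (both_pos + both_neg) / n          # observed agreement
    p_a = (both_pos + only_a) / n              # prevalence under criterion A
    p_b = (both_pos + only_b) / n              # prevalence under criterion B
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical cross-table of two metabolic syndrome criteria in 912 subjects:
kappa = cohens_kappa(both_pos=250, only_a=10, only_b=15, both_neg=637)
print(round(kappa, 2))
```

A kappa this close to 1 indicates near-perfect agreement, which is the situation reported for the IDF and Harmonized I criteria (0.94).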

Keywords: metabolic syndrome, IDF, ATP III, prevalence

Procedia PDF Downloads 579
1422 Electrochemical Studies of the Inhibition Effect of 2-Dimethylamine on the Corrosion of Austenitic Stainless Steel Type 304 in Dilute Hydrochloric Acid

Authors: Roland Tolulope Loto, Cleophas Akintoye Loto, Abimbola Patricia Popoola

Abstract:

The inhibiting action of 2-dimethylamine on the electrochemical behaviour of austenitic stainless steel (type 304) in dilute hydrochloric acid was evaluated through the weight-loss method, open circuit potential measurement and potentiodynamic polarization tests at specific concentrations of the organic compound. The results reveal that the compound performed effectively, giving a maximum inhibition efficiency of 79% at 12.5% concentration from the weight-loss analysis and 80.9% at 12.5% concentration from the polarization tests. An average corrosion potential of -321 mV was obtained at the same concentration from the other tests, which is well within the passivation potentials of the steel, thus providing good protection against corrosion in the acid solution. Thermodynamic calculations showed that 2-dimethylamine acts through physiochemical interaction at the steel/solution interface and obeys the Langmuir adsorption isotherm. The values of the inhibition efficiency determined by the three methods are in reasonably good agreement. The polarization studies showed that the compound behaved as a cathodic-type inhibitor.
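Langmuir adsorption is usually verified by plotting C/θ against C, where θ is the surface coverage (inhibition efficiency / 100): a straight line of unit slope with intercept 1/K_ads confirms the isotherm, and K_ads then gives the standard free energy of adsorption via ΔG = -RT ln(55.5 K_ads). A minimal sketch on invented coverage data (not the paper's measurements; the 55.5 factor is the molar concentration of water):

```python
import math

# Hypothetical inhibitor concentrations (mol/L) generated from an assumed
# Langmuir equilibrium constant, so the fit should recover it exactly.
C = [0.002, 0.005, 0.010, 0.020]
K_ADS_TRUE = 500.0
theta = [K_ADS_TRUE * c / (1 + K_ADS_TRUE * c) for c in C]

# Linearized Langmuir: C/theta = C + 1/K_ads -> slope 1, intercept 1/K_ads.
x, y = C, [c / t for c, t in zip(C, theta)]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

k_ads = 1.0 / intercept
dG = -8.314 * 298.0 * math.log(55.5 * k_ads)   # J/mol at 25 C

print(round(slope, 3), round(k_ads, 1), round(dG / 1000.0, 1))
```

A ΔG magnitude in the 20-40 kJ/mol range is conventionally read as mixed physical/chemical adsorption, consistent with the "physiochemical interaction" the abstract reports.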

Keywords: corrosion, 2-dimethylamine, inhibition, adsorption, hydrochloric acid, steel

Procedia PDF Downloads 319
1421 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represents the marine environment of Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, together with observed weather data (i.e., humidity, temperature and atmospheric pressure), as the database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. A backpropagation learning strategy was developed for training the deep learning model. The established method was tested against the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
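The CNN-then-ANN pipeline can be sketched in miniature with plain NumPy: a single convolution filter extracts a feature map from a three-band image patch, and a small dense layer maps the pooled features to a concentration estimate. Everything here (patch size, random weights, toy scale) is an illustrative stand-in for the trained GOCI model, not its architecture or weights:

```python
import numpy as np

def conv2d_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation of one image band."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

rng = np.random.default_rng(42)
patch = rng.random((3, 8, 8))        # 3 GOCI bands (443/490/660 nm), 8x8 px
kernel = rng.standard_normal((3, 3)) / 9.0

# "CNN" stage: convolve each band, apply ReLU, global-average-pool.
features = np.array([np.maximum(conv2d_valid(b, kernel), 0.0).mean()
                     for b in patch])

# "ANN" stage: one dense layer producing a non-negative concentration.
w, bias = rng.standard_normal(3), 0.1
concentration = max(0.0, float(features @ w + bias))   # mg/m^3 (toy scale)
print(features.shape, round(concentration, 3))
```

In the real system the convolution kernels and dense weights are learned by backpropagation against matched GOCI radiances, weather inputs and in-situ concentrations.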

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 183
1420 Reliability Analysis of Soil Liquefaction Based on Standard Penetration: A Case Study in Babol City

Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty

Abstract:

Various probabilistic and deterministic liquefaction evaluation procedures exist for judging whether liquefaction will occur. A review of these approaches reveals the need for a comprehensive procedure that accounts for the different sources of uncertainty in liquefaction evaluation. In fact, for the same set of input parameters, different methods provide different factors of safety and/or probabilities of liquefaction. To account for the different uncertainties, including both model and measurement uncertainties, reliability analysis is necessary. This paper uses information from Standard Penetration Tests (SPT) and compares several empirical approaches, namely the Seed et al. method, the Japanese highway bridge approach to soil liquefaction and that of the Overseas Coastal Area Development Institute of Japan (OCDI), together with a reliability method, to study the liquefaction potential of the soil of Babol city in the north of Iran. Evaluating the liquefaction potential of the soil of Babol is an important issue, since the soil in some areas contains sand, the region is seismic, and rising groundwater levels lead to saturation of the soil. Therefore, one of the most important goals of this paper is to gain a suitable understanding of the liquefaction potential and to find the most appropriate evaluation procedure in order to reduce the related damage.
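Deterministic SPT-based procedures share the same skeleton: compute the cyclic stress ratio (CSR) from the simplified Seed-Idriss equation and compare it with the cyclic resistance ratio (CRR) inferred from corrected blow counts. A sketch of the CSR side and the resulting factor of safety, using the common linear approximation for the stress reduction factor; the layer properties and the CRR value are placeholders, not Babol data or values from the cited charts:

```python
def cyclic_stress_ratio(a_max_g: float, sigma_v: float,
                        sigma_v_eff: float, depth_m: float) -> float:
    """Simplified Seed-Idriss CSR = 0.65 * (a_max/g) * (sv/sv') * rd."""
    # Liao-Whitman style linear approximation of the stress reduction factor.
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Hypothetical sandy layer with a shallow water table: 6 m depth,
# total stress 108 kPa, effective stress 55 kPa, PGA 0.30 g.
csr = cyclic_stress_ratio(a_max_g=0.30, sigma_v=108.0,
                          sigma_v_eff=55.0, depth_m=6.0)

crr = 0.22        # placeholder cyclic resistance ratio from an SPT chart
fs = crr / csr    # FS < 1 suggests liquefaction is expected
print(round(csr, 3), round(fs, 2))
```

Reliability analysis then treats inputs such as the blow count, stresses and peak acceleration as random variables, turning this single factor of safety into a probability of liquefaction.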

Keywords: reliability analysis, liquefaction, Babol, civil, construction and geological engineering

Procedia PDF Downloads 498