Search results for: mixed gaussian processes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8524

7804 Extracting the Atmospheric Carbon Dioxide and Convert It into Useful Minerals at the Room Conditions

Authors: Muthana A. M. Jamel Al-Gburi

Abstract:

Removing carbon dioxide (CO2) from the atmosphere is important but complicated. The amounts of CO2 and of the other greenhouse gases in the atmosphere keep increasing, driven by human activities and the burning of fossil fuels, and this leads to the global warming phenomenon, i.e., a rise in the Earth's temperature that promotes desertification, tornadoes, and storms. In the present research project, we constructed our own system to extract carbon dioxide directly from atmospheric air at room conditions and investigated how to convert the gas into a useful mineral or into nanoscale carbon fibers through several chemical processes and reactions, yielding a valuable building material and helping to mitigate negative environmental change. In the present water pool system (Carbon Dioxide Domestic Extractor), ocean-sea water was used to dissolve the CO2 gas from the room air, and the gas was converted into carbonate minerals using a number of additives such as shampoo, clay, and MgO. Atmospheric air containing CO2 was circulated through the sea water by an air pump connected to perforated tubes fixed at the base of the pool. These chemical agents were mixed with the ocean-sea water to convert the acid formed by the water-CO2 reaction into a useful mineral. After successfully constructing the system, we carried out extensive experiments and investigations on the level of CO2 reduction and identified the optimum active chemical agent for operation under atmospheric conditions.

Keywords: global warming, CO₂ gas, ocean-sea water, additives, solubility level

Procedia PDF Downloads 84
7803 Parametric Study of the Structures: Influence of the Shells

Authors: Serikma Mourad, Mezidi Amar

Abstract:

The design of an earthquake-resistant structure is a complex problem, given the need to meet the safety requirements imposed by regulations and the economy requirements imposed by the rising cost of structures. The resistance of a building to horizontal actions is mainly ensured by a mixed brace system; for a concrete building, this system consists of frames, shells, or both at the same time. After the earthquake of Boumerdes (May 23, 2003) in Algeria, studies made by experts led to modifications of the Algerian Earthquake-resistant Regulation (AER 99). One of these modifications was to widen the use of shells in the brace system, which created debate over the quantities, positions, and types of shells to adopt. In the present project, we examine the effect of varying the dimensions, the location, and the end-rigidity conditions of the shells. The study is carried out on an F+5 building located in a zone of average seismicity. To do so, we perform a classical dynamic analysis of the structure using four alternative shell layouts, varying their lengths and number, in order to compare the cost of the structure for the four shell dispositions through a technical-economic study of the brace system and to estimate the quantities of materials required (concrete and steel).

Keywords: reinforced concrete, mixed brace system, dynamic analysis, beams, shells

Procedia PDF Downloads 325
7802 Estimating 3D-Position of a Stationary Random Acoustic Source Using Bispectral Analysis of 4-Point Detected Signals

Authors: Katsumi Hirata

Abstract:

To develop a useful acoustic environment recognition system, a method is proposed for estimating the 3D position of a stationary random acoustic source using bispectral analysis of signals detected at four points. The method uses information about amplitude attenuation and propagation delay extracted from the amplitude ratios and phase angles of the auto- and cross-bispectra of the detected signals. Bispectral analysis is expected to be less affected by Gaussian noise than conventional power spectral analysis. In this paper, the basic principle of the method is described first, and its validity and features are then assessed from the results of fundamental experiments carried out under assumed ideal conditions.
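The estimator at the core of the method can be illustrated with a minimal NumPy sketch (an assumed illustration, not the author's implementation): the cross-bispectrum of three detected signals is averaged over segments, and because third-order cumulants of zero-mean Gaussian noise vanish, the estimate is less affected by Gaussian noise than a power-spectrum estimate. The source statistics, delays, noise level, and segment length below are arbitrary toy values.

```python
import numpy as np

def cross_bispectrum(x, y, z, seg_len=256):
    """Estimate the cross-bispectrum B(f1, f2) = E[X(f1) Y(f2) conj(Z(f1+f2))]
    by averaging FFT products over non-overlapping segments (direct method)."""
    f1, f2 = np.meshgrid(np.arange(seg_len), np.arange(seg_len), indexing="ij")
    n_seg = len(x) // seg_len
    B = np.zeros((seg_len, seg_len), dtype=complex)
    for k in range(n_seg):
        sl = slice(k * seg_len, (k + 1) * seg_len)
        X, Y, Z = (np.fft.fft(s[sl]) for s in (x, y, z))
        B += X[f1] * Y[f2] * np.conj(Z[(f1 + f2) % seg_len])
    return B / n_seg

# toy check: one non-Gaussian source seen by three sensors with integer delays
rng = np.random.default_rng(0)
s = rng.exponential(size=65536) - 1.0            # zero-mean, non-Gaussian source
x = s + 0.5 * rng.normal(size=s.size)            # sensor 1: source plus Gaussian noise
y, z = np.roll(s, 3), np.roll(s, 7)              # sensors 2 and 3: delayed copies
B = cross_bispectrum(x, y, z)
print(abs(B[5, 9]), np.angle(B[5, 9]))           # magnitude and phase at one bifrequency
```

The amplitude ratios and phase angles used by the position estimator would then be read off such bispectral estimates at selected bifrequencies.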

Keywords: 4-point detection, a stationary random acoustic source, auto- and cross-bispectra, estimation of 3D-position

Procedia PDF Downloads 360
7801 Green Supply Chain Design: A Mathematical Modeling Approach

Authors: Nusrat T. Chowdhury

Abstract:

Green Supply Chain Management (GSCM) is becoming a key to success for profitable businesses. The activities contributing to carbon emissions in a supply chain include transportation and the ordering and holding of inventory. This research work develops a mixed-integer nonlinear programming (MINLP) model for a supply chain with multiple periods, multiple products, and multiple suppliers. The model assumes that demand is deterministic, the buyer has limited storage space in each period, the buyer is responsible for the transportation cost, a supplier-dependent ordering cost applies in each period in which an order is placed with a supplier, and inventory shortage is permissible. The model provides an optimal decision regarding which products to order, in what quantities, from which suppliers, and in which periods in order to maximize profit. For the purpose of evaluating the carbon emissions, three different carbon regulating policies, i.e., carbon cap-and-trade, a strict cap on carbon emissions, and a carbon tax on emissions, have been considered. The proposed MINLP has been validated using a randomly generated data set.
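How the three regulating policies enter a profit objective can be pictured with a small, self-contained sketch; the cost parameters, the emission figure, and the linear way emissions are charged are assumptions for illustration only, not values or expressions taken from the model or data set above.

```python
# Illustrative accounting of the three carbon policies on top of a base
# operating profit; all numbers and the linear emission charge are assumptions.

def profit_under_policy(base_profit, emissions, policy,
                        cap=1000.0, trade_price=25.0, tax_rate=30.0):
    """base_profit: profit before carbon costs; emissions: tonnes of CO2."""
    if policy == "cap_and_trade":
        # buy allowances above the cap, sell surplus allowances below it
        return base_profit - trade_price * (emissions - cap)
    if policy == "strict_cap":
        # no trading: plans exceeding the cap are simply infeasible
        return base_profit if emissions <= cap else float("-inf")
    if policy == "carbon_tax":
        return base_profit - tax_rate * emissions
    raise ValueError(policy)

for p in ("cap_and_trade", "strict_cap", "carbon_tax"):
    print(p, profit_under_policy(base_profit=50000.0, emissions=1200.0, policy=p))
```

Under cap-and-trade the same expression also rewards emitting below the cap, since unused allowances can be sold, which is why the policies lead to different optimal ordering plans in the full MINLP.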

Keywords: green supply chain, carbon emission, mixed integer non-linear program, inventory shortage, carbon cap-and-trade

Procedia PDF Downloads 241
7800 Terrain Classification for Ground Robots Based on Acoustic Features

Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow

Abstract:

The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, namely Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system's performance by comparing our proposed techniques with other features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
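A minimal sketch of the Gaussian-mixture branch of such a classifier is given below; it uses synthetic 13-dimensional feature vectors standing in for MFCCs, so the class names, dimensions, and data are illustrative rather than taken from the robot recordings.

```python
# One Gaussian mixture model per terrain class, classification by maximum log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["asphalt", "grass", "gravel"]
# stand-in feature vectors; real features would come from MFCC extraction over
# short audio chunks of the robot-terrain interaction sound
train = {c: rng.normal(loc=i, scale=1.0, size=(200, 13)) for i, c in enumerate(classes)}
test_x = rng.normal(loc=1, scale=1.0, size=(5, 13))        # should look like "grass"

models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X) for c, X in train.items()}

# score_samples returns the per-sample log-likelihood under each class model
scores = np.stack([models[c].score_samples(test_x) for c in classes], axis=1)
print([classes[i] for i in scores.argmax(axis=1)])
```

The feed-forward neural network branch mentioned in the abstract would be trained on the same feature vectors with class labels instead of per-class likelihood models.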

Keywords: acoustic features, autonomous robots, feature extraction, terrain classification

Procedia PDF Downloads 370
7799 Influence of Maternal Factors on Growth Patterns of Schoolchildren in a Rural Health and Demographic Surveillance Site in South Africa: A Mixed Method Study

Authors: Perpetua Modjadji, Sphiwe Madiba

Abstract:

Background: The growth patterns of children are good indicators of their nutritional status, health, and socioeconomic level. However, maternal factors and the belief system of the society affect the growth of children and promote undernutrition. This study determined the influence of maternal factors on the growth patterns of schoolchildren in a rural site. Methods: A convergent mixed-method study was conducted among 508 schoolchildren and their mothers in the Dikgale Health and Demographic Surveillance System Site, South Africa. Multistage sampling was used to select schools (purposive) and learners (random), who were paired with their mothers. Anthropometry was measured, and socio-demographic, obstetric, and household information, maternal influence on children's nutrition, and growth were assessed using an interviewer-administered questionnaire (quantitative). The influence of the cultural beliefs and practices of mothers on the nutrition and growth of their children was explored using focus group discussions (qualitative). Narratives of mothers were used to better understand the growth patterns of schoolchildren (mixed method). Data were analyzed using STATA 14 (quantitative) and Nvivo 11 (qualitative). Quantitative and qualitative data were merged for an integrated mixed-method analysis using a joint display analysis. Results: The mean age of the children was 10 ± 2 years, ranging from 6 to 15 years. Substantial percentages of thinness (25%), underweight (24%), and stunting (22%) were observed among the children. Mothers had a mean age of 37 ± 7 years, and 75% were overweight or obese. A depressed socio-economic status was observed, indicated by a high rate of unemployment with no income (82.3%) and dependency on social grants (86.8%). Determinants of poor growth patterns were the child's age and gender, maternal age, height and BMI, access to water supply, and refrigerator use. The narratives of the mothers suggested that the children in most of their households were exposed to poverty and an inadequate intake of quality food. Conclusion: Poor growth patterns were observed among schoolchildren while their mothers were overweight or obese. The child's gender, school grade, maternal body mass index, and access to water were the main determinants. Congruence was observed between most qualitative themes and quantitative constructs. The need for a multisectoral approach that considers evidence-based and feasible nutrition programs for schoolchildren, especially those in rural settings, and educates mothers cannot be over-emphasized.

Keywords: growth patterns, maternal factors, rural context, schoolchildren, South Africa

Procedia PDF Downloads 182
7798 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems

Authors: L. Kiefer, C. Richter, G. Reinhart

Abstract:

The rising complexity of production systems, caused by an increasing number of variants up to customer-innovated products, leads to requirements that hierarchical control systems are not able to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. This approach focuses on attribute-based identification in order to avoid dose-dependent identification costs. Instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is identified directly by its attributes. To facilitate this, it is recommended to include the identification and the corresponding sensors within the handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort needed to integrate identification processes into handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as the handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether a combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. By analysing a list of more than a thousand possible attributes, first investigations have shown that five main characteristics (weight, form, colour, amount, and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools are presented, for example a sensor pool. These tools include the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
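The matching step against the compatibility matrix can be pictured with a small sketch; the handling systems, sensors, compatibility entries, and timing figures below are invented for illustration and are not the matrix or the identification-time requirements of the methodology itself.

```python
# A compatibility matrix between handling systems and sensor systems, filtered
# by a maximum identification time so identification can run in parallel with handling.
handling = ["conveyor", "6-axis robot", "gantry"]
sensors = ["scale", "camera", "laser profiler"]

# True where the sensor can be integrated into that handling system (assumed entries)
compatible = {
    ("conveyor", "scale"): True, ("conveyor", "camera"): True,
    ("conveyor", "laser profiler"): True,
    ("6-axis robot", "scale"): False, ("6-axis robot", "camera"): True,
    ("6-axis robot", "laser profiler"): True,
    ("gantry", "scale"): True, ("gantry", "camera"): True,
    ("gantry", "laser profiler"): False,
}
ident_time = {"scale": 0.8, "camera": 0.3, "laser profiler": 1.5}      # seconds
handling_time = {"conveyor": 1.0, "6-axis robot": 2.0, "gantry": 1.2}  # seconds

suitable = [(h, s) for h in handling for s in sensors
            if compatible[(h, s)] and ident_time[s] <= handling_time[h]]
print(suitable)
```

In the methodology, the matrix entries would come from the categorised sensor and handling systems, and the timing check would reflect the cycle time of the specific handling operation.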

Keywords: agent systems, autonomous control, handling systems, identification

Procedia PDF Downloads 177
7797 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation, and the model simulations were validated in Matlab and COMSOL. It should be noted that the water balance is dominated by rain and evaporation. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of the expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
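The bathymetry preprocessing step mentioned above, Delaunay triangulation followed by Gaussian smoothing so that the depth model has continuous gradients, can be sketched as follows; the sounding positions and depths are synthetic placeholders, not the Lake Victoria survey data.

```python
# Scattered depth soundings -> Delaunay-based interpolation onto a grid -> Gaussian smoothing.
import numpy as np
from scipy.interpolate import LinearNDInterpolator   # builds a Delaunay triangulation internally
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 100.0, size=(500, 2))          # sounding positions (arbitrary km grid)
depth = 40.0 + 40.0 * rng.random(500)                 # depths in the 40-80 m range

interp = LinearNDInterpolator(pts, depth, fill_value=np.nan)
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
grid = interp(gx, gy)

grid = np.where(np.isnan(grid), np.nanmean(grid), grid)   # crude fill outside the convex hull
smooth = gaussian_filter(grid, sigma=3.0)                 # continuous-gradient depth model
print(smooth.min(), smooth.max())
```

The smoothed grid plays the role of the topography input that the shallow water solver differentiates, which is why continuous gradients matter.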

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
7796 Comprehensive Analysis of Power Allocation Algorithms for OFDM Based Communication Systems

Authors: Rakesh Dubey, Vaishali Bahl, Dalveer Kaur

Abstract:

The growing demand for high-rate data transmission over wireless media requires intelligent use of electromagnetic resources, considering restrictions such as power consumption, spectral efficiency, robustness against multipath propagation, and implementation complexity. Orthogonal frequency division multiplexing (OFDM) is a promising technique for next-generation wireless communication systems. Such high-rate data transfer requires proper allocation of resources such as power and capacity among the subchannels. This paper reviews the available methods for allocating power and meeting the capacity requirement under the constraint of the Shannon limit.
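The classical water-filling solution, which most power-allocation methods for OFDM build on, can be sketched in a few lines; the gain-to-noise ratios and power budget below are arbitrary example values, not figures from the paper.

```python
# Allocate a total power budget across subchannels with different noise levels so
# that the Shannon capacity sum(log2(1 + p_i * g_i)) is maximized (water-filling).
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """gains: channel gain-to-noise ratios g_i; returns per-subchannel powers."""
    lo, hi = 0.0, total_power + 1.0 / gains.min()      # bracket the water level
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        p = np.maximum(level - 1.0 / gains, 0.0)       # pour power above the 1/g_i floor
        lo, hi = (level, hi) if p.sum() < total_power else (lo, level)
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

g = np.array([2.0, 1.0, 0.5, 0.1])                     # gain-to-noise ratios
p = water_filling(g, total_power=4.0)
print(p, np.log2(1.0 + p * g).sum())                   # allocation and total capacity
```

Subchannels whose inverse gain lies above the water level receive no power, and the printed sum is the Shannon capacity achieved by the allocation.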

Keywords: Additive White Gaussian Noise, Multi-Carrier Modulation, Orthogonal Frequency Division Multiplexing (OFDM), Signal to Noise Ratio (SNR), Water Filling

Procedia PDF Downloads 555
7795 Resistive Switching Characteristics of Resistive Random Access Memory Devices after Furnace Annealing Processes

Authors: Chi-Yan Chu, Kai-Chi Chuang, Huang-Chung Cheng

Abstract:

In this study, RRAM devices with the TiN/Ti/HfOx/TiN structure were fabricated, and the electrical characteristics of devices without annealing were compared with those of devices subjected to furnace annealing (FA) at 400 °C and 500 °C. The devices annealed at 400 °C showed lower forming, set, and reset voltages than the devices without annealing. However, the devices annealed at 500 °C did not show any electrical characteristics because the TiN/Ti/HfOx/TiN device was oxidized, as shown by the XPS analysis. From these results, the RRAM devices annealed at 400 °C showed the best electrical characteristics.

Keywords: RRAM, furnace annealing (FA), forming, set and reset voltages, XPS

Procedia PDF Downloads 372
7794 An Adaptive CFAR Algorithm Based on Automatic Censoring in Heterogeneous Environments

Authors: Naime Boudemagh

Abstract:

In this work, we aim to improve the detection performance of radar systems. To this end, we propose and analyze a novel technique for censoring undesirable samples, at a priori unknown positions, that may be present in the environment under investigation. We therefore consider heterogeneous backgrounds characterized by the presence of irregularities such as clutter edge transitions and/or interfering targets. The proposed detector, termed the automatic censoring constant false alarm rate (AC-CFAR) detector, operates exclusively in a Gaussian background. It is built to allow segmentation of the environment into regions and to switch automatically to the appropriate detector; namely, the cell averaging CFAR (CA-CFAR), the censored mean level detector CFAR (CMLD-CFAR), or the order statistic CFAR (OS-CFAR). Monte Carlo simulations show that the AC-CFAR detector performs like the CA-CFAR in a homogeneous background. Moreover, the proposed processor exhibits considerable robustness in a heterogeneous background.
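For reference, the CA-CFAR baseline that the proposed detector falls back to in homogeneous regions can be sketched as follows; the window sizes, false-alarm probability, and simulated clutter are illustrative assumptions rather than the paper's simulation settings.

```python
# Cell-averaging CFAR: the noise level at the cell under test is estimated by
# averaging reference cells on both sides, excluding guard cells.
import numpy as np

def ca_cfar(power, n_ref=8, n_guard=2, pfa=1e-3):
    """power: squared-magnitude samples; returns a boolean detection mask."""
    n = len(power)
    # scaling factor for exponentially distributed (Gaussian I/Q) clutter power
    alpha = 2 * n_ref * (pfa ** (-1.0 / (2 * n_ref)) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(n_ref + n_guard, n - n_ref - n_guard):
        lead = power[i - n_guard - n_ref : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + n_ref + 1]
        noise = (lead.sum() + lag.sum()) / (2 * n_ref)
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)       # homogeneous Gaussian clutter (power domain)
x[250] += 40.0                                  # one strong target
print(np.flatnonzero(ca_cfar(x)))
```

The automatic censoring stage described above would then decide, per region, whether this plain averaging or a censored/ordered variant of it is applied.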

Keywords: CFAR, automatic censoring, heterogeneous environments, radar systems

Procedia PDF Downloads 602
7793 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and medium enterprises (SMEs) have a specific mission, characteristics, and behavior in globally competitive business environments. They must respect policies, rules, requirements, and standards in all their internal and external processes of supplier-customer chains and networks. The aim of this paper is to introduce computational assistance that enables the use of the prevailing office environment, MS Office (SmartArt, etc.), for mathematical models based on the DYVELOP (Dynamic Vector Logistics of Processes) method. For the SME's global environment, it provides the capability to meet commitments regarding the effectiveness of the quality management system in satisfying customer requirements, to continually improve the overall performance and efficiency of the organization's and the SME's processes, and to strengthen societal security through continual improvement of planning. The maps of the DYVELOP model, the blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cyclic cases and their phases. The blazons need a live PowerPoint presentation for better comprehension of this paper's mission of added-value analysis. Crisis management in SMEs must use such cycles to cope successfully with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling factor of SME continuity and of its possibilities for advanced sustainable development.

Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises

Procedia PDF Downloads 342
7792 Experimental Study of the Electrical Conductivity and Thermal Conductivity Property of Micro-based Al-Cu-Nb-Mo Alloy

Authors: Uwa C. A., Jamiru T.

Abstract:

Aluminum-based alloys with certain compositional blends and manufacturing methods have been reported to be excellent electrical conductors. In the current investigation, metal powders of aluminum (Al), copper (Cu), niobium (Nb), and molybdenum (Mo) were weighed according to set ratios and distributed evenly by combining the powder particles. The metal particles were mixed using a tube mixer for 12 hours. Before the mixture was poured into a 30 mm diameter graphite mold, pre-pressed, and placed into an SPS furnace, the thermal conductivity of the mixed metal powders was evaluated using a portable Thermtest device. An axial pressure of 50 MPa was applied at a heating rate of 50 °C/min, and a multi-stage heating procedure with a holding period of 10 min was used to sinter at temperatures between 300 °C and 480 °C. After being cooled to room temperature, the specimens were unmolded to produce the aluminum-copper-niobium-molybdenum alloy material. The HPS 2662 Precision Four-Point Probe Meter was used to determine the electrical resistivity, and the values were used to calculate the electrical conductivity of the sintered alloy samples. Finally, the alloy with the highest electrical and thermal conductivity was the one with the composition Al93.5Cu4Nb1.5Mo1, which also had a density of 3.23 g/cm³. It could be recommended for use in automobile radiators and electric transmission line components.

Keywords: Al-Cu-Nb-Mo, electrical conductivity, alloy, sintering, thermal conductivity

Procedia PDF Downloads 92
7791 Effect the Use of Steel Fibers (Dramix) on Reinforced Concrete Slab

Authors: Faisal Ananda, Junaidi Al-Husein, Oni Febriani, Juli Ardita, N. Indra, Syaari Al-Husein, A. Bukri

Abstract:

Concrete technology continues to develop and to innovate, one direction being the use of fibers. Fiber concrete has advantages over non-fiber concrete, among them resistance to the effects of shrinkage, the ability to reduce cracking, and fire resistance. In this study, the concrete mix was designed using the procedures listed in SNI 03-2834-2000. The samples used for compression and tensile testing were cylinders 30 cm high and 15 cm in diameter, while the slab measured 400 cm x 100 cm x 15 cm. The fiber used was steel fiber (Dramix), added over 2/3 of the slab thickness. Loading was applied as two-point loading. The results show that for the non-fiber slab (0%), the initial crack was already the maximum crack and, at a width of 1.3 mm under a load of 1160 kg, exceeded the allowable maximum crack width. The initial crack occurring at the largest load was found in the 1% fiber slab; this initial crack was also the maximum crack, 0.5 mm wide, and likewise exceeded the allowable maximum. In the 4% slab, the initial crack of 0.1 mm was the smallest initial crack, occurring at a load of 1200 kg, which is greater than the load for the non-fiber (0%) slab. The maximum load at a maximum crack still satisfying the applicable crack-width conditions was obtained for the 5% fiber slab, with a crack width of 0.32 mm under a load of 1250 kg.

Keywords: crack, dramix, fiber, load, slab

Procedia PDF Downloads 515
7790 Alumina Supported Cu-Mn-Cr Catalysts for CO and VOCs oxidation

Authors: Krasimir Ivanov, Elitsa Kolentsova, Dimitar Dimitrov, Petya Petrova, Tatyana Tabakova

Abstract:

This work studies the effect of chemical composition on the activity and selectivity of γ-alumina-supported CuO/MnO2/Cr2O3 catalysts toward the deep oxidation of CO, dimethyl ether (DME), and methanol. The catalysts were prepared by impregnation of the support with an aqueous solution of copper nitrate, manganese nitrate, and CrO3 under different conditions. Thermal, XRD, and TPR analyses were performed. The catalytic measurements of single-compound oxidation were carried out on continuous-flow equipment with a four-channel isothermal stainless steel reactor. Flow-line equipment with an adiabatic reactor was used for the simultaneous oxidation of all compounds under conditions that closely mimic industrial ones. The reactant and product gases were analyzed by means of on-line gas chromatographs. On the basis of the XRD analysis, it can be concluded that the active component of the mixed Cu-Mn-Cr/γ-alumina catalysts consists of at least six compounds – CuO, Cr2O3, MnO2, Cu1.5Mn1.5O4, Cu1.5Cr1.5O4 and CuCr2O4 – depending on the Cu/Mn/Cr molar ratio. The chemical composition strongly influences the catalytic properties, and this influence varies considerably between the different processes. The rate of CO oxidation decreases rapidly with increasing chromium content in the active component, while the reverse trend was observed for DME. It was concluded that the best compromise is offered by the catalysts with a Cu/(Mn + Cr) molar ratio of 1:5 and a Mn/Cr molar ratio from 1:3 to 1:4.

Keywords: Cu-Mn-Cr oxide catalysts, volatile organic compounds, deep oxidation, dimethyl ether (DME)

Procedia PDF Downloads 370
7789 Mid-Temperature Methane-Based Chemical Looping Reforming for Hydrogen Production via Iron-Based Oxygen Carrier Particles

Authors: Yang Li, Mingkai Liu, Qiong Rao, Zhongrui Gai, Ying Pan, Hongguang Jin

Abstract:

Hydrogen is an ideal and promising energy carrier due to its high energy efficiency and low pollution. An alternative and promising approach to hydrogen generation is the chemical looping steam reforming of methane (CL-SRM) over iron-based oxygen carriers. However, the process faces challenges such as high reaction temperatures (>850 ℃) and low methane conversion. We demonstrate that Ni-mixed Fe-based oxygen carrier particles significantly improve the methane conversion and hydrogen production rate in the range of 450-600 ℃ under atmospheric pressure. The effect of mixing different mass ratios of Ni-based particles on the reactivity of the oxygen carrier particles has been determined in the continuous unit. More than 85% methane conversion was achieved at 600 ℃, and hydrogen can be produced in both the reduction and oxidation steps. Moreover, the iron-based oxygen carrier particles exhibited good cyclic performance during 150 consecutive redox cycles at 600 ℃. The mid-temperature iron-based oxygen carrier particles, integrated with a moving-bed chemical looping system, might provide a powerful approach toward more efficient and scalable hydrogen production.

Keywords: chemical looping, hydrogen production, mid-temperature, oxygen carrier particles

Procedia PDF Downloads 145
7788 The Perception and Use of Vocabulary Learning Strategies Among Non-English Major at Ho Chi Minh City University of Technology (Hutech)

Authors: T. T. K. Nguyen, T. H. Doan

Abstract:

The study investigates students' perceptions and use of vocabulary learning strategies (VLS) among non-English majors at Ho Chi Minh City University of Technology (HUTECH). Three main issues are addressed: (1) students' perception in terms of their awareness of vocabulary learning strategies and of the level of their importance; (2) students' use in terms of frequency and preference; and (3) the correlation between students' perception of the importance of vocabulary learning strategies and their frequency of use. A mixed method is applied in this investigation: questionnaires covering social, memory, cognitive, and metacognitive strategy groups were administered to 350 sophomores from four different majors, and 10 sophomores were invited to structured interviews. The results showed that students were well aware of the vocabulary learning strategies examined in the current study. All of those strategies were perceived as important in learning vocabulary, and the four groups of strategies were used frequently. Students' responses in terms of preference also confirmed their use in terms of frequency. On the other hand, students' perception correlated with students' use only for the cognitive group of vocabulary learning strategies, and not for the other three.

Keywords: vocabulary learning strategies, students' perceptions, students' use, mixed methods, non-English majors

Procedia PDF Downloads 50
7787 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in energy beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), needed to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, different technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems, and the total removed material is calculated by summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour becomes similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply setting the dwell time proportional to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
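The linear baseline that the adjoint approach improves on can be pictured with a 1D toy model; all quantities below, including the Gaussian beam footprint, are assumptions for illustration, not the authors' process model or code. Etched depth is the convolution of the dwell-time profile with the footprint, the naive inverse sets dwell time proportional to the target depth, and a least-squares solve of the same linear model already does noticeably better on non-shallow features.

```python
import numpy as np

n = 200
x = np.arange(n)
footprint = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)        # assumed beam energy distribution
footprint /= footprint.sum()

def etch(dwell):
    # circular convolution of the dwell-time profile with the footprint (linear, shallow-etch model)
    return np.real(np.fft.ifft(np.fft.fft(dwell) * np.fft.fft(np.fft.ifftshift(footprint))))

target = np.where((x > 60) & (x < 140), 1.0, 0.2)           # prescribed depth profile

naive = target / etch(np.ones(n)).mean()                    # dwell time proportional to depth
A = np.array([etch(np.eye(n)[i]) for i in range(n)]).T      # full linear forward operator
lsq, *_ = np.linalg.lstsq(A, target, rcond=None)            # least-squares dwell-time profile

print(np.abs(etch(naive) - target).max(), np.abs(etch(lsq) - target).max())
```

A practical solver would additionally constrain the dwell times to be non-negative and account for the machine dynamics along the path; gradients for such constrained, nonlinear models are what the discrete adjoint method supplies cheaply.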

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 277
7786 Incorporating Priority Round-Robin Scheduler to Sustain Indefinite Blocking Issue and Prioritized Processes in Operating System

Authors: Heng Chia Ying, Charmaine Tan Chai Nie, Burra Venkata Durga Kumar

Abstract:

Process scheduling is the part of process management that determines which process the CPU will execute next and for how long. Some issues have been found in process management, particularly for Priority Scheduling (PS) and Round Robin Scheduling (RR). The recommendation behind IPRRS is to combine the strengths of both into a single algorithm, with each drawing on the other to compensate for its weaknesses. A significant improvement on combined scheduling techniques, the Incorporating Priority Round-Robin Scheduler (IPRRS) provides an algorithm for both high- and low-priority tasks that addresses the indefinite blocking issue faced by the priority scheduling algorithm and minimizes the average turnaround time (ATT) and average waiting time (AWT) of the RR scheduling algorithm. This paper delves into the simple rules introduced by IPRRS and the enhancements that both PS and RR bring to the execution of processes in the operating system. Furthermore, it incorporates the best aspects of each algorithm to build the optimum algorithm for a given case in terms of prioritized processes, ATT, and AWT.
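One common way to combine the two schedulers is sketched below: round-robin service within priority levels plus an aging rule that promotes long-waiting processes so low-priority work is never blocked indefinitely. This is a generic illustration under assumed rules, quantum, and aging threshold, not the exact IPRRS algorithm from the paper.

```python
from collections import deque

def priority_round_robin(procs, quantum=4, aging_after=20):
    """procs: list of (name, priority, burst); lower priority value = higher priority."""
    queues = {}                                                # priority level -> FIFO queue
    for name, prio, burst in procs:
        queues.setdefault(prio, deque()).append([name, prio, burst, 0])  # last field: waiting time
    finished, clock = [], 0
    while any(queues.values()):
        level = min(p for p, q in queues.items() if q)         # highest non-empty priority
        proc = queues[level].popleft()
        run = min(quantum, proc[2])                            # round-robin time slice
        clock += run
        proc[2] -= run
        for q in queues.values():                              # every other process waits
            for other in q:
                other[3] += run
                if other[3] >= aging_after and other[1] > 0:   # aging: promote starved processes
                    other[1] -= 1
        waiting = [p for q in queues.values() for p in q]      # rebuild queues at new levels
        queues = {}
        for p in waiting:
            queues.setdefault(p[1], deque()).append(p)
        if proc[2] > 0:
            queues.setdefault(proc[1], deque()).append(proc)
        else:
            finished.append((proc[0], clock))                  # completion time
    return finished

print(priority_round_robin([("A", 0, 10), ("B", 0, 6), ("C", 2, 8)]))
```

Turnaround times can be read from the returned completion times, so the same harness can be used to compare ATT and AWT against plain PS or plain RR.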

Keywords: round Robin scheduling, priority scheduling, indefinite blocking, process management, sustain, turnaround time

Procedia PDF Downloads 152
7785 Investigating Jacket-Type Offshore Structures Failure Probability by Applying the Reliability Analyses Methods

Authors: Majid Samiee Zonoozian

Abstract:

For constructions as important as jacket-type platforms, scrupulous attention is needed in the analysis, design, and calculation processes. Reliability assessment has become an extensively used method for the safety evaluation of jacket platforms. In the present study, a methodology for the reliability calculation of an offshore jacket platform against the extreme wave loading state is presented. Sensitivity analyses are applied to obtain the nonlinear response of jacket-type platforms against extreme waves. The jacket structure is modeled with a nonlinear finite-element model that accounts for the behaviour of the tubular members. The probability of a member's failure under extreme wave loading is computed by a finite-element reliability code. The FORM and SORM approaches are applied for the calculation of the safety measures, and the reliability indexes have been determined. A case study of a fixed jacket-type structure located in the Persian Gulf is analyzed by means of the proposed method. Furthermore, to define the failure criteria, the equations suggested by the 21st edition of API RP 2A-WSD for the design of tubular members of jacket-type structures under combined axial compression and bending were applied. Consequently, the effect of wave loads on the reliability index was considered.
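The reliability index at the heart of FORM can be illustrated with the simplest possible limit state; this is a generic textbook sketch with assumed numbers, not the finite-element reliability code or the platform data used in the study. For g = R − S with independent normal resistance R and wave-load effect S, the Hasofer-Lind index and the failure probability follow in closed form.

```python
# Reliability index and failure probability for a linear limit state g = R - S.
from math import sqrt
from scipy.stats import norm

mu_R, sigma_R = 5000.0, 400.0       # member resistance (assumed units and values)
mu_S, sigma_S = 3200.0, 600.0       # extreme wave load effect (assumed)

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # Hasofer-Lind reliability index
pf = norm.cdf(-beta)                                   # probability of failure
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```

For the nonlinear, member-level limit states of an actual jacket, the same index is found iteratively in standard-normal space, and SORM adds a curvature correction.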

Keywords: Jacket-Type structure, reliability, failure probability, tubular members

Procedia PDF Downloads 173
7784 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. In total, 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, the Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
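The best-performing configuration reported above, an SVM with a Gaussian RBF kernel over the four normalized behavioral features, can be sketched as follows; the synthetic feature distributions and labels are placeholders, not the collected gesture data.

```python
# RBF-kernel SVM on four normalized gesture features: score, length, speed, size.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: password score, password length, password speed, password size (assumed scales)
genuine = rng.normal([0.8, 12.0, 1.0, 300.0], [0.05, 1.0, 0.1, 20.0], size=(200, 4))
imposter = rng.normal([0.6, 10.0, 1.4, 340.0], [0.15, 2.5, 0.4, 60.0], size=(200, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 200 + [0] * 200)            # 1 = genuine user, 0 = imposter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

With the real data, the same pipeline would be trained once per classifier (A, B, and C) on the corresponding creation and replication sessions.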

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 107
7783 Implementation and Use of Person-Centered Care in Europe: A Literature Review

Authors: Kristina Rosengren, Petra Brannefors, Eric Carlstrom

Abstract:

Background: The implementation and use of person-centred care (PCC) is increasing worldwide, which is why the current study intends to increase knowledge regarding how PCC is used in different European countries. Purpose: To describe the extent of person-centred care in 23 European countries in relation to each country's healthcare system (Beveridge, Bismarck, Mixed/OOP). Methods: The study was conducted as a literature review inspired by SPICE; both scientific empirical studies (Cinahl, Medline, Scopus) and grey literature (Google) were used. In total, 1194 documents were included, divided into Cinahl (n=139), Medline (n=245), Scopus (n=493), and Google (n=317). Data were analysed descriptively (percentage, range) regarding the content and scope of PCC in each country, grouped by healthcare system (Beveridge, Bismarck, Mixed/OOP) and geographic placement. Results: PCC was most common in the UK (England, Scotland, Wales, Northern Ireland; n=481, 40.3%), Sweden (n=231, 19.3%), the Netherlands (n=80, 6.7%), Ireland (n=79, 6.6%), and Norway (n=61, 5.1%), and less common in Poland (0.6%), Hungary (0.5%), Greece (0.4%), Latvia (0.4%), and Serbia (0%). Countries with a Beveridge healthcare system (12/23, 52.2%) accounted for 85 percent of the documents, with a predominance of scientific literature, valued at 2.92 (n=999, range 0.55-4.07), compared with a predominance of grey literature in countries with a Bismarck system (10/23, 43.5%), accounting for 15 percent of the documents, valued at 2.35 (n=190, range 0-3.27), followed by Mixed/OOP (1/23, 4%) with 0.4 percent, valued at 2.25. Conclusions: Seven out of ten countries with a Beveridge health system used PCC, compared with less use of PCC in countries with the Bismarck system. Research, as well as national regulations regarding PCC, are important tools for promoting the advantages of PCC in clinical practice. Moreover, the implementation of PCC needs a systematic approach, from the national (politicians), regional (guidelines), and local (specific healthcare settings) levels, made visible through decision-making in the form of laws, missions, policies, and routines in clinical practice, in order to establish PCC as a well-integrated phenomenon in Europe.

Keywords: Beveridge, Bismarck, Europe, evidence-based, literature review, person-centered care

Procedia PDF Downloads 112
7782 Groundwater Pollution Models for Hebron/Palestine

Authors: Hassan Jebreen

Abstract:

These models of a conservative pollutant in groundwater do not include representation of processes in soils and in the unsaturated zone, or of biogeochemical processes in groundwater. The demonstration models can be used as the basis for more detailed simulations of the impacts of pollution sources at a local scale, but such studies should address processes related to specific pollutant species and should consider the local hydrogeology in more detail, particularly in relation to possible impacts on shallow systems, which are likely to respond more quickly to changes in pollutant inputs. The results have demonstrated the interaction between groundwater flow fields and pollution sources in abstraction areas, and help to emphasise that wadi development is one of the key elements of water resources planning. The quality of groundwater in the Hebron area shows a gradual increase in chloride and nitrate with time. Since the aquifers in the Hebron districts are highly vulnerable due to their karstic nature, continued disposal of untreated domestic and industrial wastewater into the wadi will lead to unacceptably poor quality of drinking water, which may ultimately require expensive treatment if significant health problems are to be avoided. Improvements are required in wastewater treatment at the municipal and domestic levels, the latter requiring increased public awareness of the issues as well as improved understanding of the hydrogeological behaviour of the aquifers.

Keywords: groundwater, models, pollutants, wadis, Hebron

Procedia PDF Downloads 440
7781 Aggregation of Butanediyl-1,4-Bis(Tetradecyldimethylammonium Bromide) (14–4–14) Gemini Surfactants in Presence of Ethylene Glycol and Propylene Glycol

Authors: P. Ajmal Koya, Tariq Ahmad Wagay, K. Ismail

Abstract:

One of the fundamental properties of surfactant molecules is their ability to aggregate in water or in binary mixtures of water and organic solvents in an effort to minimize their unfavourable interactions with the medium. In this work, the influence of two co-solvents, ethylene glycol (EG) and propylene glycol (PG), on the aggregation properties of a cationic gemini surfactant, butanediyl-1,4-bis(tetradecyldimethylammonium bromide) (14–4–14), has been studied by conductance and steady-state fluorescence measurements at 298 K. The weight percentage of the two co-solvents was varied between 0 and 50%, at intervals of 5% up to 20% and then of 10% up to 50%. It was found that the micellization process is delayed by the inclusion of both co-solvents; consequently, a progressive increase was observed in the critical micelle concentration (cmc) and in the Gibbs free energy of micellization (∆G0m), a roughly increasing trend was observed in the degree of counter-ion dissociation (α), and a decrease was obtained in the average aggregation number (Nagg) and the Stern-Volmer constant (KSV). At low weight percentages (up to 15%) of co-solvents, the 14–4–14 geminis were found to be almost equally prone to micellization in EG–water (EG–WR) and in PG–water (PG–WR) mixed media, while at high weight percentages they are more prone to micellization in EG–WR than in PG–WR mixed media.

Keywords: aggregation number, gemini surfactant, micellization, non aqueous solvent

Procedia PDF Downloads 325
7780 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to dispose of excess waste effectively. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line, and balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) considering the robotic efficiency differences is established to minimize cycle time, energy consumption, and hazard index and to calculate their global optima. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case involving two types of computers. The comparison of the results obtained by GUROBI and by HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.
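The Pareto screening underlying the reported solution set can be illustrated with a small sketch; the candidate plans and their objective values (cycle time, energy consumption, hazard index) are invented for illustration and are not results from the case study.

```python
# Keep only the candidate balancing plans not dominated on all three minimized objectives.
def non_dominated(solutions):
    """solutions: list of (label, (cycle_time, energy, hazard)) tuples."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(o[1], s[1]) for o in solutions if o is not s)]

candidates = [
    ("plan A", (42.0, 310.0, 5.1)),
    ("plan B", (45.0, 280.0, 4.8)),
    ("plan C", (44.0, 320.0, 5.5)),   # dominated by plan A
    ("plan D", (48.0, 275.0, 5.0)),
]
print([label for label, _ in non_dominated(candidates)])
```

NSGA-II applies this non-dominated sorting repeatedly within an evolving population, which is how the final Pareto set offered to decision-makers is built up.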

Keywords: waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II

Procedia PDF Downloads 239
7779 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative

Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.

Abstract:

Mild steel has been widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines, and transmission pipelines, and in the chemical and allied industries for handling acids, alkalis, and salt solutions, due to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling, and acidization in oil processing. Mild steel exhibits poor corrosion resistance in the presence of hydrochloric acid. The high reactivity of mild steel in hydrochloric acid is due to the soluble nature of the ferrous chloride formed, and the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel in the presence of high concentrations of acids, causing complete destruction of the metal. Hydrogen from the acid reacts with the metal surface, makes it brittle and causes cracks, which leads to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion is considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature review that there is ample scope for the development of new organic inhibitors, which can be conveniently synthesized from relatively cheap raw materials and provide good inhibition efficiency with the least risk of environmental pollution. The aim of the present work is to evaluate the electrochemical parameters for the corrosion inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indole-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that the inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10-4 M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid solution interface is of a mixed type, with predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
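The Langmuir analysis named above can be illustrated with a short worked sketch; the concentration and surface-coverage values are synthetic placeholders, not the measured data for HIBH, and the standard relation ΔG°ads = −RT ln(55.5 K_ads) is used to convert the adsorption constant into a standard free energy.

```python
# Fit the linearized Langmuir isotherm C/theta = 1/K_ads + C and derive dG_ads.
import numpy as np

R, T = 8.314, 303.15                               # J/(mol K), 30 degrees C
C = np.array([1e-4, 2e-4, 4e-4, 6e-4, 8e-4])       # inhibitor concentration, mol/L (assumed)
theta = np.array([0.62, 0.76, 0.87, 0.91, 0.95])   # surface coverage, e.g. IE/100 (assumed)

slope, intercept = np.polyfit(C, C / theta, 1)     # slope ~ 1 indicates Langmuir behavior
K_ads = 1.0 / intercept                            # adsorption equilibrium constant, L/mol
dG_ads = -R * T * np.log(55.5 * K_ads)             # 55.5 mol/L is the molar concentration of water
print(f"K_ads = {K_ads:.3g} L/mol, dG_ads = {dG_ads / 1000:.1f} kJ/mol")
```

A ΔG°ads between roughly −20 and −40 kJ/mol is conventionally read as mixed physisorption/chemisorption, which is how such fits support conclusions of the kind drawn above.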

Keywords: electrochemical parameters, EIS, mild steel, tafel polarization

Procedia PDF Downloads 337
7778 Examining Reading Comprehension Skills Based on Different Reading Comprehension Frameworks and Taxonomies

Authors: Seval Kula-Kartal

Abstract:

Developing students' reading comprehension skills is an aim that is difficult to accomplish and requires long-term and systematic teaching and assessment processes. In these processes, teachers need tools that provide guidance on what reading comprehension is and which comprehension skills they should develop. Due to a lack of clear and evidence-based frameworks defining reading comprehension skills, especially in Turkiye, teachers and students mostly follow various classroom processes without having a clear idea of what their comprehension goals are and what those goals mean. Since teachers and students do not have a clear view of the comprehension targets or of the strengths and weaknesses in students' comprehension skills, formative feedback processes cannot be managed effectively. It is believed that detecting and defining influential comprehension skills may provide guidance to both teachers and students during the feedback process. Therefore, in the current study, some of the reading comprehension frameworks that define comprehension skills operationally were examined. The aim of the study is to develop a simple and clear framework that can be used by teachers and students during their teaching, learning, assessment, and feedback processes. The current study is qualitative research in which documents related to reading comprehension skills were analyzed. The study group therefore consisted of resources and frameworks that have made major contributions to the theoretical and operational definitions of reading comprehension. A content analysis was conducted on the resources included in the study group. To determine the validity of the themes and sub-categories revealed by the content analysis, three educational assessment experts were asked to examine the results. The Fleiss' kappa coefficient revealed consistency among the themes and categories defined by the three experts. The content analysis of the reading comprehension frameworks revealed that comprehension skills can be examined under four different themes. The first and second themes focus on understanding information given explicitly or implicitly within a text. The third theme includes skills used by readers to make connections between their personal knowledge and the information given in the text. Lastly, the fourth theme focuses on skills used by readers to examine the text with a critical view. The results suggest that fundamental reading comprehension skills can be examined under four themes. Teachers are recommended to use these themes in their reading comprehension teaching and assessment processes. Acknowledgment: This research is supported by the Pamukkale University Scientific Research Unit within the project titled Developing A Reading Comprehension Rubric.

Keywords: reading comprehension, assessing reading comprehension, comprehension taxonomies, educational assessment

Procedia PDF Downloads 82
7777 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, and this therefore increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders the long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors, so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices for the manufacturing sector have been extensively investigated to address similar challenges, investigations in the construction sector are far less evident. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development. A self-reporting survey with five-point Likert scale items and open-ended questions was administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective than practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from the present research strengthen the theoretical foundations supporting future research toward more integrated construction supply chains. In the long term, this would lead to increased performance, profits, and client satisfaction.

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 111
7776 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes

Authors: J. J. Vargas, N. Prieto, L. A. Toro

Abstract:

Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve the processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, a Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique makes it possible to analyse the data set without relying on the hypothetical large sample implied in the problem, and the result can be treated as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision-making unit). A control limit was calculated with the model obtained, and if a DMU presents a condition of low efficiency, the system efficiency is declared out of control. A global optimum was reached in the efficiency calculation, which ensures model reliability.
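The DEA ingredient can be sketched as a standard input-oriented CCR linear program solved per DMU; the input/output data and the percentile used as an illustrative lower control limit are assumptions, not the Bayesian posterior sampling procedure described above.

```python
# Input-oriented CCR DEA efficiency per DMU, solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: inputs (n_dmu, n_in), Y: outputs (n_dmu, n_out); returns theta for DMU o."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta; variables [theta, lambdas]
    # inputs:  sum_j lambda_j x_j <= theta * x_o
    A_in = np.hstack([-X[o][:, None], X.T])
    # outputs: sum_j lambda_j y_j >= y_o
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

rng = np.random.default_rng(0)
X = rng.uniform(5, 10, size=(12, 2))                  # two inputs, 12 DMUs (assumed data)
Y = rng.uniform(2, 6, size=(12, 1))                   # one output
scores = np.array([ccr_efficiency(X, Y, o) for o in range(12)])
lcl = np.percentile(scores, 5)                        # illustrative lower control limit
print(np.round(scores, 3), "LCL:", round(lcl, 3))
```

In the paper's scheme, the efficiency distribution per DMU would instead be built by repeatedly feeding posterior parameter draws from the rejection simulation to the stochastic DEA model.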

Keywords: data envelopment analysis, DEA, Multivariate control chart, rejection simulation method

Procedia PDF Downloads 377
7775 Bioflocculation Using the Purified Wild Strain of P. aeruginosa Culture in Wastewater Treatment

Authors: Mohammad Hajjartabar, Tahereh Kermani Ranjbar

Abstract:

P. aeruginosa EF2 had been isolated and identified from human infection sources in our previous study. The present study was performed to determine the characteristics of the bioflocculant produced by the bacterium and its role in the flocculation of activated sludge during wastewater treatment. The bacterium was inoculated and grown in an orbital shaker at 250 rpm for 5 days at 35 °C in TSB and peptone water media. After the incubation period, the culture broths of the bacterial strain were collected and washed. The concentration of the bacteria was adjusted. For the extraction of the bacterial bioflocculant, the culture was centrifuged at 6000 rpm for 20 min at 4 °C to remove the bacterial cells. The supernatant was decanted, and the pellet containing the bioflocculant was dried at 105 °C to constant weight according to APHA (2005). The chemical composition of the bioflocculant extracted from the bacterial sample was then analyzed. An activated sludge sample, obtained from the aeration tank of one of the wastewater treatment plants in Tehran, was first mixed thoroughly. After addition of the bioflocculant, improvements in floc density were observed with increasing bioflocculant dose. The results of this study strongly suggest that the extracted bioflocculant played a significant role in the flocculation of the wastewater sample. The use of wild bacteria and nutrient regulation techniques instead of genetic manipulation opens a wide area for future investigation to improve wastewater treatment processes. This may also open a new path toward obtaining more effective bioflocculants from purified microbial cultures for wastewater treatment.

Keywords: wastewater treatment, P. aeruginosa, sludge treatment

Procedia PDF Downloads 156