Search results for: The linear quantum hydrodynamic model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8693


3683 An Investigation into Ozone Concentration at Urban and Rural Monitoring Stations in Malaysia

Authors: Negar Banan, Mohd Talib Latif

Abstract:

This study investigated the relationship between urban and rural ozone concentrations and quantified the extent to which ambient rural conditions and the concentrations of other pollutants can be used to predict urban ozone concentrations. The study describes the variations of ozone on weekdays and weekends as well as the daily maxima recorded at selected monitoring stations. The results showed that the Putrajaya station had the highest concentrations of O3 on weekends, due to the titration of NO during weekdays. Additionally, Jerantut had the lowest average concentration, with its highest readings on Wednesdays. The comparison of average and maximum ozone concentrations for the three stations showed that the strongest significant correlation was recorded at the Jerantut station, with R2 = 0.769. Ozone concentrations originating from a neighbouring urban site are a better predictor of urban ozone concentrations than widespread rural ozone at some levels of temporal averaging. It was found that in urban and rural areas of peninsular Malaysia, the ozone concentration depends on the NOx concentration and seasonal meteorological factors. The HYSPLIT model (applied for the northeast monsoon) showed that wind direction can also influence the ozone concentration in the atmosphere of the studied areas.

Keywords: Ozone, HYSPLIT model, Weekend effect, Daily Average and Daily maximum, Malaysia

3682 Using HMM-based Classifier Adapted to Background Noises with Improved Sounds Features for Audio Surveillance Application

Authors: Asma Rabaoui, Zied Lachiri, Noureddine Ellouze

Abstract:

Discrimination between different classes of environmental sounds is the goal of our work. The use of a sound recognition system can offer concrete potentialities for surveillance and security applications. The first contribution of this paper to the research field is a thorough investigation of the applicability of state-of-the-art audio features in the domain of environmental sound recognition. Additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated by an HMM-based classifier, to which particular attention is devoted. In fact, we propose to use a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the system robustness by reducing the environmental variability, we explore different adaptation algorithms, including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP), and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed with the appropriate set of features.
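
For reference, the two adaptation updates named above have the following standard textbook forms (generic notation, not necessarily the exact variants used by the authors): MAP adaptation interpolates each Gaussian mean with the mean of the adaptation data, while MLLR applies a shared affine transform to the means,

$$\hat{\mu}_{jm}^{\mathrm{MAP}} = \frac{N_{jm}\,\bar{x}_{jm} + \tau\,\mu_{jm}}{N_{jm} + \tau}, \qquad \hat{\mu}_{jm}^{\mathrm{MLLR}} = A\,\mu_{jm} + b = W\,\xi_{jm},$$

where $N_{jm}$ is the occupation count of Gaussian $m$ of state $j$ on the adaptation data, $\bar{x}_{jm}$ the corresponding data mean, $\tau$ the prior weight, and $\xi_{jm} = [1, \mu_{jm}^{\top}]^{\top}$ the extended mean vector.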

Keywords: Sound recognition, HMM classifier, Multi-style training, Environmental adaptation, Feature combinations.

3681 Experiment and Simulation of Laser Effect on Thermal Field of Porcine Liver

Authors: K. Ting, K. T. Chen, Y. L. Su, C. J. Chang

Abstract:

In medical therapy, lasers have been widely used for cosmetic, tumor and other treatments. During laser irradiation, thermal damage may be caused by excessive laser exposure. Thus, the establishment of a complete thermal analysis model provides clinically helpful reference data for physicians. In this study, porcine liver, used in place of human tissue, was subjected to laser irradiation to obtain experimental data on the surface thermal field and the thermal damage region under different conditions of power, laser irradiation time, and distance between the laser and the porcine liver. During the experiments, the surface temperature distribution of the porcine liver was measured with an infrared thermal imager. In the simulation part, the Pennes bio-heat transfer equation was solved with the software SYSWELD, which is normally applied to welding processes. The double-ellipsoid function as a laser source term is considered for the first time in the prediction of the surface thermal field and internal tissue damage. The simulation results are compared with the experimental data to validate the mathematical model established herein.
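
For orientation, the two model ingredients named above have the following standard textbook forms (generic notation; the parameter values used in the study are not reproduced here). The Pennes equation balances conduction, blood perfusion, metabolic heat and the external laser source, and the double-ellipsoid (Goldak-type) function distributes that source over a front ellipsoidal quadrant (an analogous expression with $c_r$ and $f_r$ holds for the rear):

$$\rho c \frac{\partial T}{\partial t} = \nabla\!\cdot\!\left(k \nabla T\right) + \rho_b c_b \omega_b \left(T_a - T\right) + Q_m + Q_{\mathrm{laser}},$$

$$q_f(x,y,z) = \frac{6\sqrt{3}\, f_f\, Q}{a\, b\, c_f\, \pi\sqrt{\pi}} \exp\!\left(-\frac{3x^{2}}{a^{2}} - \frac{3y^{2}}{b^{2}} - \frac{3z^{2}}{c_f^{2}}\right), \qquad f_f + f_r = 2,$$

where $\omega_b$ is the blood perfusion rate, $T_a$ the arterial temperature, $Q$ the absorbed laser power, and $a$, $b$, $c_f$ ($c_r$) the semi-axes of the front (rear) ellipsoid quadrant.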

Keywords: Laser, infrared thermal imager, bio-heat transfer, double ellipsoid function.

3680 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions, and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km2, 5 km2 and 1 km2 horizontal resolution). In the 33-class LC approach, particular emphasis was given to Portugal, given the greater detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and few research works relate this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes, and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
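
As an illustration of the validation step described above, the following sketch computes the reported metrics (BIAS, RMSE, correlation) for paired model/observation ozone series grouped by season and station typology; the column names and demo data are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd

def bias(mod, obs):
    return np.mean(mod - obs)

def rmse(mod, obs):
    return np.sqrt(np.mean((mod - obs) ** 2))

def evaluate(df):
    """df columns (hypothetical): 'season', 'typology', 'o3_mod', 'o3_obs' in ug/m3."""
    rows = []
    for (season, typology), g in df.groupby(["season", "typology"]):
        rows.append({
            "season": season,
            "typology": typology,
            "BIAS": bias(g["o3_mod"].values, g["o3_obs"].values),
            "RMSE": rmse(g["o3_mod"].values, g["o3_obs"].values),
            "r": np.corrcoef(g["o3_mod"], g["o3_obs"])[0, 1],
        })
    return pd.DataFrame(rows)

# synthetic demo data
df = pd.DataFrame({
    "season": ["DJF", "DJF", "JJA", "JJA"] * 25,
    "typology": ["rural", "urban"] * 50,
    "o3_obs": np.random.uniform(40, 120, 100),
})
df["o3_mod"] = df["o3_obs"] + np.random.normal(5, 15, 100)
print(evaluate(df))
```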

Keywords: Land cover, tropospheric ozone, WRF-Chem, air quality assessment.

3679 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018

Authors: M. Sitoe, O. Zacarias

Abstract:

University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the students’ own academic performance. This work uses data mining techniques to develop a predictive model to identify students with a tendency towards evasion or retention. To this end, a database of real students’ data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely: K-nearest neighbor, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of bagging and stacking were used. After comparing the results obtained by the three classifiers, logistic regression using bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, rate of true positives, and precision. Retention was found to be the most common tendency.
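
A minimal scikit-learn analogue of the best-performing setup described above (the paper itself used Weka) might look as follows; the synthetic data merely stand in for the 388 student records, and the `estimator` parameter name assumes a recent scikit-learn version.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# stand-in for the 388 students: admission/performance features and an evasion/retention label
X, y = make_classification(n_samples=388, n_features=10, random_state=0)

model = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                          n_estimators=10, random_state=0)
scores = cross_validate(model, X, y, cv=7, scoring=["accuracy", "recall", "precision"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```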

Keywords: Evasion and retention, cross validation, bagging, stacking.

3678 Glorification Trap in Combating Human Trafficking in Indonesia: An Application of Three-Dimensional Model of Anti-Trafficking Policy

Authors: M. Kosandi, V. Susanti, N. I. Subono, E. Kartini

Abstract:

This paper discusses the risk of a glorification trap in combating human trafficking, as shown in the case of Indonesia. Based on research into the Indonesian combat against trafficking in 2017-2018, this paper shows a tendency towards misinterpretation and misapplication of the Indonesian anti-trafficking law, to the point of misusing the law for glorification, that is, to create an image of a certain degree of achievement in combating human trafficking. The objective of this paper is to explain the persistent occurrence of human trafficking crimes despite the significant progress of the anti-trafficking efforts of the Indonesian government. The research was conducted in 2017-2018 using a qualitative approach through observation, in-depth interviews, discourse analysis, and document study, applying the three-dimensional model for analyzing human trafficking in the source country. This paper argues that the drive for glorification of achievement in the combat against trafficking has trapped the Indonesian government in a loop of misinterpretation, misapplication, and misuse of the anti-trafficking law. As a result, this crime against humanity remains high and tends to increase in Indonesia.

Keywords: Human trafficking, anti-trafficking policy, transnational crime, source country, glorification trap.

3677 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require simplification and acceleration of the quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼ 10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size and the small number of trainable parameters allow the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
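
A minimal numpy sketch of the central idea (one trainable clipping threshold per filter defining the quantization scale) is given below; it is illustrative only, and the authors' actual training procedure and released code may differ.

```python
import numpy as np

def quantize_per_filter(w, thresholds, bits=8):
    """w: conv weights (out_ch, in_ch, kH, kW); thresholds: one positive clip value per filter."""
    qmax = 2 ** (bits - 1) - 1
    t = thresholds.reshape(-1, 1, 1, 1)
    scale = t / qmax                         # one scale per output filter
    q = np.round(np.clip(w, -t, t) / scale)  # integer levels in [-qmax, qmax]
    return q * scale                         # de-quantized ("fake-quantized") weights

w = np.random.randn(16, 3, 3, 3).astype(np.float32)
thresholds = np.abs(w).reshape(16, -1).max(axis=1)   # simple per-filter initialization
w_q = quantize_per_filter(w, thresholds)
print("max quantization error:", np.abs(w - w_q).max())
```

During fine-tuning, the thresholds (and hence the scales) would be treated as trainable parameters and updated together with the weights.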

Keywords: Distillation, machine learning, neural networks, quantization.

3676 Effect of Bentonite on the Rheological Behavior of Cement Grout in Presence of Superplasticizer

Authors: K. Benyounes, A. Benmounah

Abstract:

Cement-based grouts have been used successfully to repair cracks in many concrete structures such as bridges, tunnels and buildings, and to consolidate soils or rock foundations. In the present study, the rheological characterization of a cement grout with a water/binder ratio (W/B) fixed at 0.5 was carried out. The effect of replacing cement by bentonite (2 to 10% wt) in the presence of a superplasticizer (0.5% wt) was investigated. Several rheological tests were carried out using a controlled-stress rheometer equipped with a vane geometry at a temperature of 20°C. To highlight the influence of bentonite and superplasticizer on the rheological behavior of the cement grout, various flow tests over a shear-rate range from 0 to 200 s-1 were performed. The cement grout showed non-Newtonian viscosity behavior at all bentonite concentrations. The three-parameter Herschel-Bulkley model was chosen for fitting the experimental data. Based on the correlation coefficients of the estimated parameters, the Herschel-Bulkley model described the rheological behavior of the grouts well. Test results showed that the dosage of bentonite increases the viscosity and yield stress of the system and introduces more thixotropy, while the addition of both bentonite and superplasticizer to the cement grout significantly improves the fluidity and reduces the yield stress due to the dispersing action of the SP.
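
For illustration, the Herschel-Bulkley fit mentioned above, $\tau = \tau_0 + K\dot{\gamma}^{n}$, can be reproduced on a flow curve as in the following sketch (the shear-stress data below are synthetic placeholders, not the measured grout data).

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot ** n

gamma_dot = np.linspace(1, 200, 50)                                               # shear rate, s^-1
tau = herschel_bulkley(gamma_dot, 5.0, 0.8, 0.6) + np.random.normal(0, 0.2, 50)   # shear stress, Pa

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0, 1.0])
tau0, K, n = popt
residuals = tau - herschel_bulkley(gamma_dot, *popt)
r2 = 1 - np.sum(residuals ** 2) / np.sum((tau - tau.mean()) ** 2)
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}, R^2 = {r2:.4f}")
```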

Keywords: Cement grout, bentonite, superplasticizer, viscosity, yield stress.

3675 Comparison between the Efficiency of Heterojunction Thin Film InGaP/GaAs/Ge and InGaP/GaAs Solar Cell

Authors: F. Djaafar, B. Hadri, G. Bachir

Abstract:

This paper presents the design parameters for a thin film triple-junction (3J) InGaP/GaAs/Ge solar cell with a simulated maximum efficiency of 32.11%, obtained using Tcad Silvaco. Design parameters include the doping concentrations, molar fractions, layer thicknesses and tunnel junction characteristics. An initial dual-junction InGaP/GaAs model of a previously published heterojunction cell was simulated in Tcad Silvaco to accurately predict solar cell performance. To improve the solar cell’s performance, we fixed the meshing, material properties, models and numerical methods, while the layer thicknesses and doping concentrations were taken as variables. We first simulated the InGaP/GaAs dual-junction cell by changing the doping concentrations and thicknesses, which showed an increase in efficiency. Next, a triple-junction InGaP/GaAs/Ge cell was modeled by adding a Ge layer to the previous dual-junction InGaP/GaAs model with an InGaP/GaAs tunnel junction.

Keywords: Heterojunction, modeling, simulation, thin film, Tcad Silvaco.

3674 A New Framework and a Model for Product Development with an Application in the Telecommunications Services Sector

Authors: Ghada A. El Khayat

Abstract:

This paper argues that a product development exercise involves, in addition to the conventional stages, several decisions regarding other aspects. These aspects should be addressed simultaneously in order to develop a product that responds to customer needs and helps realize the objectives of the stakeholders in terms of profitability, market share and the like. We present a framework that encompasses these different development dimensions. The framework shows that a product development methodology such as Quality Function Deployment (QFD) is the basic tool that allows definition of the target specifications of a new product. Creativity is the first dimension that enables the development exercise to live and end successfully. A number of group processes need to be followed by the development team in order to ensure enough creativity and innovation. Secondly, packaging is considered to be an important extension of the product. Branding strategies, quality and standardization requirements, identification technologies, design technologies, production technologies, and costing and pricing are also integral parts of the development exercise. These dimensions constitute the proposed framework. The paper also presents a mathematical model used to calculate the design targets based on the target costing principle. The framework is used to study a case of new product development in the telecommunications services sector.

Keywords: Product Development Framework, Quality Function Deployment, Mathematical Models, Telecommunications.

3673 Study on Plasma Creation and Propagation in a Pulsed Magnetoplasmadynamic Thruster

Authors: Tony Schönherr, Kimiya Komurasaki, Georg Herdrich

Abstract:

The performance and the plasma created by a pulsed magnetoplasmadynamic thruster for small satellite applications are studied to better understand the ablation and plasma propagation processes occurring during the short-time discharge. The results can be applied to improve the quality of the thruster in terms of efficiency, and to tune the propulsion system to the needs of the satellite mission. Therefore, plasma measurements with a high-speed camera and induction probes, and performance measurements of mass bit and impulse bit, were conducted. Values for the current sheet propagation speed, mean exhaust velocity and thrust efficiency were derived from these experimental data. A maximum in current sheet propagation speed was found by the high-speed camera measurements for a medium energy input and confirmed by the induction probes. A quasi-linear tendency between the mass bit and the energy input (the current action integral, respectively) was found, as well as a linear tendency between the created impulse and the discharge energy. The highest mean exhaust velocity and thrust efficiency were found for the highest energy input.
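
For reference, the derived quantities named above are commonly obtained from the measured impulse bit $I_{bit}$, mass bit $m_{bit}$ and discharge energy $E$ through the generic relations (standard for pulsed thrusters, not specific to this device):

$$\bar{c} = \frac{I_{bit}}{m_{bit}}, \qquad \eta_t = \frac{I_{bit}^{2}}{2\, m_{bit}\, E},$$

where $\bar{c}$ is the mean exhaust velocity and $\eta_t$ the thrust efficiency.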

Keywords: Electric propulsion, low-density plasma, pulsed magnetoplasmadynamic thruster, space engineering.

3672 Determinants of Brand Equity: Offering a Model to Chocolate Industry

Authors: Emari Hossien

Abstract:

This study examined the underlying dimensions of brand equity in the chocolate industry. For this purpose, the researchers developed a model to identify which factors are influential in building brand equity. The second purpose was to assess the mediating effect of brand loyalty and brand image between brand attitude, brand personality and brand association on the one hand, and brand equity on the other. The study employed structural equation modeling to investigate the causal relationships between the dimensions of brand equity and brand equity itself. It specifically measured the way in which consumers’ perceptions of the dimensions of brand equity affected the overall brand equity evaluations. Data were collected from a sample of consumers in the chocolate industry in Iran. The results of this empirical study indicate that brand loyalty and brand image are important components of brand equity in this industry. Moreover, the role of brand loyalty and brand image as mediating factors in the formation of brand equity is supported. The principal contribution of the present research is that it provides empirical evidence of the multidimensionality of consumer-based brand equity, supporting Aaker's and Keller's conceptualizations of brand equity. The present research also enriches brand equity building by incorporating brand personality and brand image, as recommended by previous researchers. Moreover, creating the brand equity index, particularly for the chocolate industry of Iran, is novel.

Keywords: Brand equity, brand personality, structural equation modeling, Iran.

3671 Development of Coronal Field and Solar Wind Components for MHD Interplanetary Simulations

Authors: Ljubomir Nikolic, Larisa Trichtchenko

Abstract:

The connection between solar activity and adverse phenomena in the Earth’s environment that can affect space- and ground-based technologies has spurred interest in Space Weather (SW) research. A great effort has been put into the development of suitable models that can provide advanced forecasts of SW events. With the progress in computational technology, it is becoming possible to develop operational, large-scale, physics-based models which can incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities, we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular simulation framework for the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report and discuss the development of the coronal field and solar wind components for a large-scale MHD code. The model for these components is based on a potential field source surface model and an empirical Wang-Sheeley-Arge solar wind relation.

Keywords: Space weather, numerical modeling, coronal field, solar wind.

3670 A Finite Volume Procedure on Unstructured Meshes for Fluid-Structure Interaction Problems

Authors: P I Jagad, B P Puranik, A W Date

Abstract:

Flow through micro- and mini-channels requires a relatively high driving pressure due to the large fluid pressure drop through these channels. Consequently, the forces acting on the walls of the channel due to the fluid pressure are also large. Due to these forces, displacement fields are set up in the solid substrate containing the channels. If the movement of the substrate is constrained at some points, then stress fields are established in the substrate. On the other hand, if the deformation of the channel shape is sufficiently large, then its effect on the fluid flow becomes important to calculate. Such coupled fluid-solid systems form a class of problems known as fluid-structure interactions. In the present work, a co-located finite volume discretization procedure on unstructured meshes is described for solving fluid-structure interaction problems. A linear elastic solid is assumed, for which the effect of the channel deformation on the flow is neglected. Thus the governing equations for the fluid and the solid are decoupled and are solved separately. The procedure is validated by solving two benchmark problems, one from fluid mechanics and another from solid mechanics. A fluid-structure interaction problem of flow through a U-shaped channel embedded in a plate is then solved.

Keywords: Finite volume method, flow-induced stresses, fluid-structure interaction, unstructured meshes.

3669 Evaluation of Microleakage of a New Generation Nano-Ionomer in Class II Restoration of Primary Molars

Authors: Ghada Salem, Nihal Kabel

Abstract:

Objective: This in vitro study was carried out to assess the microleakage properties of a nano-filled glass ionomer in comparison to a resin-reinforced glass ionomer. Material and Methods: 40 deciduous molar teeth were included in this study. A Class-II cavity was prepared in a standardized form for all the specimens. The teeth were randomly distributed into two groups (20 per group) according to the restorative material used: either the nano-glass ionomer or the Photac Fill glass ionomer restoration. All specimens were thermocycled for 1000 cycles between 5 and 55 °C. After that, the teeth were immersed in 2% methylene blue dye, then sectioned and evaluated under a stereomicroscope. Microleakage was assessed by linear dye penetration on a scale from zero to five. Results: A two-way ANOVA test revealed a statistically significantly lower degree of microleakage in both occlusal and gingival restorations (0.4±0.2 and 0.9±0.1, respectively) for the nano-filled glass ionomer group in comparison to the resin-modified glass ionomer (2.3±0.7 and 2.4±0.5). No statistical difference was found between gingival and occlusal leakage regarding the effect of the measured site. Conclusion: The nano-filled glass ionomer shows superior sealing ability, which enables this type of restoration to be used in minimally invasive treatment.

Keywords: Microleakage, nano-ionomer, resin-reinforced glass ionomer, proximal cavity preparation.

3668 FEM Simulation of HE Blast-Fragmentation Warhead and the Calculation of Lethal Range

Authors: G. Tanapornraweekit, W. Kulsirikasem

Abstract:

This paper presents the simulation of a fragmentation warhead using a hydrocode, Autodyn. The goal of this research is to determine the lethal range of such a warhead. This study investigates the lethal range of warheads with and without steel balls as preformed fragments. The results from the FE simulation, i.e. the initial velocities and ejected spray angles of the fragments, are further processed using an analytical approach so as to determine the fragment hit density and probability of kill of the modelled warhead. Simulating a large number of preformed fragments inside a warhead requires expensive computational resources. Therefore, this study models the problem with an alternative approach, by considering a mass of preformed fragments equivalent to the mass of the warhead casing. This approach yields approximately 7% and 20% differences in fragment velocities from the analytical results for one and two layers of preformed fragments, respectively. The lethal ranges of the simulated warheads are 42.6 m and 56.5 m for warheads with one and two layers of preformed fragments, respectively, compared to 13.85 m for a warhead without preformed fragments. These lethal ranges are based on the requirement of fragment hit density. The lethal ranges based on the probability of kill are 27.5 m, 61 m and 70 m for warheads with no preformed fragments, and with one and two layers of preformed fragments, respectively.

Keywords: Lethal Range, Natural Fragment, Preformed Fragment, Warhead.

3667 Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network

Authors: M. Jazouli, S. Elhoufi, A. Majda, A. Zarghili, R. Aalouane

Abstract:

Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior. Our study provides a clinical tool which facilitates the diagnosis of ASD for doctors. We focus on the automatic identification, in real time, of five repetitive gestures among autistic children: body rocking, hand flapping, fingers flapping, hand on the face and hands behind the back. In this paper, we present a gesture recognition system for children with autism which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first module uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last proposes a neural connectionist model to perform the supervised classification of the data. The experimental results show that our system can achieve a recognition rate above 93.3%.
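
A minimal sketch of the classification module (the third module above) is given below using a small feed-forward network; the feature vectors are random placeholders for the windowed 3D-skeleton descriptors, and the layer sizes are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

GESTURES = ["body_rocking", "hand_flapping", "fingers_flapping",
            "hand_on_face", "hands_behind_back"]

# placeholder features, e.g. flattened joint positions/velocities over a short time window
X = np.random.randn(500, 60)
y = np.random.randint(0, len(GESTURES), 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("recognition rate:", clf.score(X_te, y_te))
```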

Keywords: ASD, stereotypical motor movements, repetitive gesture, Kinect, artificial neural network.

3666 The Effects of Consumer Inertia and Emotions on New Technology Acceptance

Authors: Chyi Jaw

Abstract:

Prior literature on innovation diffusion or acceptance has almost exclusively concentrated on consumers’ positive attitudes and behaviors towards new products/services. Consumers’ negative attitudes or behaviors towards innovations have received relatively little marketing attention, although they occur frequently in practice. This study discusses the consumer psychological factors at play when consumers try to learn or use new technologies. According to recent research, technological innovation acceptance has been considered a dynamic or mediated process. This research argues that consumers can experience inertia and emotions in the initial use of new technologies. Given such consumer psychology, the argument can be made as to whether the inclusion of consumer inertia (routine seeking and cognitive rigidity) and emotions increases the predictive power of the new technology acceptance model. As the data from the empirical study show, the process potentially changes consumer emotions (independently of performance benefits) because of technology complexity and consumer inertia, and these factors impact innovative technology use significantly. Finally, the study presents the superior predictability of the hypothesized model, which lets managers better predict and influence the successful diffusion of complex technological innovations.

Keywords: Cognitive rigidity, consumer emotions, new technology acceptance, routine seeking, technology complexity.

3665 Food Security in Nigeria: An Examination of Food Availability and Accessibility in Nigeria

Authors: Chimaobi Valentine Okolo, Chizoba Obidigbo

Abstract:

As food is a basic physiological need, a threat to sufficient food production is a threat to human survival. Food security is an issue that has gained global concern. This paper looks at food security in Nigeria by assessing the availability of food and the accessibility of the available food. The paper employed the multiple linear regression technique and graphical trends of the growth rates of relevant variables to show the food security situation in Nigeria. The results of the tests revealed that the population growth rate was higher than the growth rate of food availability in Nigeria for the earlier period of the study. Commercial bank credit to the agricultural sector, foreign exchange utilization for food, and the Agricultural Credit Guarantee Scheme Fund (ACGSF) contributed significantly to food availability in Nigeria. Food prices grew at a faster rate than the average income level, making it difficult to access sufficient food. This implies that, prior to the year 2012, there was insufficient food to feed the Nigerian populace. However, continued credit to the food and agricultural sector will ensure sustained and sufficient production of food in Nigeria. Microfinance banks should make sufficient credit available to smallholder farmers. The government should further control and subsidize the rising price of food to make it more accessible to the people.
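
A minimal sketch of the multiple linear regression step described above is shown below; the series are synthetic placeholders and the variable names are assumptions, not the paper's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                   # e.g. annual observations
bank_credit = rng.normal(size=n)         # commercial bank credit to agriculture
fx_for_food = rng.normal(size=n)         # foreign exchange utilization for food
acgsf = rng.normal(size=n)               # ACGSF disbursements
food_avail = (1.0 + 0.5 * bank_credit + 0.3 * fx_for_food + 0.4 * acgsf
              + rng.normal(scale=0.2, size=n))

X = sm.add_constant(np.column_stack([bank_credit, fx_for_food, acgsf]))
model = sm.OLS(food_avail, X).fit()
print(model.summary())
```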

Keywords: Food security, food availability and food accessibility.

3664 Validation and Application of a New Optimized RP-HPLC-Fluorescent Detection Method for Norfloxacin

Authors: Mahmood Ahmad, Ghulam Murtaza, Sonia Khiljee, Muhammad Asadullah Madni

Abstract:

A new reverse phase-high performance liquid chromatography (RP-HPLC) method with a fluorescence detector (FLD) was developed and optimized for norfloxacin determination in human plasma. The mobile phase specifications, the extraction method, and the excitation and emission wavelengths were varied for optimization. The HPLC system contained a reverse phase C18 (5 μm, 4.6 mm × 150 mm) column with the FLD operated at an excitation of 330 nm and an emission of 440 nm. The optimized mobile phase consisted of 14% acetonitrile in buffer solution. The aqueous phase was prepared by mixing 2 g of citric acid, 2 g of sodium acetate and 1 mL of triethylamine in 1 L of Milli-Q water; the mobile phase was run at a flow rate of 1.2 mL/min. The standard curve was linear over the range tested (0.156–20 μg/mL) and the coefficient of determination was 0.9978. Aceclofenac sodium was used as the internal standard. A detection limit of 0.078 μg/mL was achieved. The run time was set at 10 minutes; the retention time of norfloxacin was 0.99 min, which shows the rapidity of this method of analysis. The present assay showed good accuracy, precision and sensitivity for norfloxacin determination in human plasma with a new internal standard and can be applied to the pharmacokinetic evaluation of norfloxacin tablets after oral administration in humans.
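
As an illustration of the calibration step described above, the sketch below fits a linear calibration curve over the reported range and estimates a detection limit; the peak-area ratios are synthetic placeholders, not the authors' data.

```python
import numpy as np

conc = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])        # standards, ug/mL
peak_ratio = 0.52 * conc + 0.01 + np.random.normal(0, 0.02, conc.size)     # analyte/IS area ratio

slope, intercept = np.polyfit(conc, peak_ratio, 1)
fitted = slope * conc + intercept
r2 = 1 - np.sum((peak_ratio - fitted) ** 2) / np.sum((peak_ratio - peak_ratio.mean()) ** 2)
sigma = np.std(peak_ratio - fitted, ddof=2)
lod = 3.3 * sigma / slope          # common ICH-style estimate of the detection limit
print(f"slope = {slope:.3f}, R^2 = {r2:.4f}, LOD ~ {lod:.3f} ug/mL")
```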

Keywords: Norfloxacin, Aceclofenac sodium, method optimization, RP-HPLC method, fluorescent detection, calibration curve.

3663 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing process of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to effectively dispose of excessive wastes. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line. Balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with the multi-robotic station (PDLBMRS), a mixed-integer programming model (MIPM) considering the robotic efficiency differences is established to minimize cycle time, energy consumption and hazard index, and to calculate their optimal global values. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case of two types of computers. The comparison of the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.

Keywords: Waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II.

3662 Effects of Knitting Variables for Pressure Controlling of Tubular Compression Fabrics

Authors: Yu Shi, Rong Liu, Jingyun Lv

Abstract:

Compression textiles with an ergonomic fit and controllable pressure performance have demonstrated a positive effect on the prevention and treatment of chronic venous insufficiency (CVI). Well-designed compression textile products contribute to improving user compliance in daily application. This study explored the effects of multiple knitting variables (yarn-machinery settings) on the physical-mechanical properties and the produced pressure magnitudes of tubular compression fabrics (TCFs) through experimental testing and multiple regression modeling. The results indicated that the fabric physical (stitch densities and circumference) and mechanical (tensile) properties were affected by the linear density of the inlay yarns, which, to some extent, influenced the pressure magnitudes of the TCFs. Knitting variables (e.g., feeding velocity of inlay yarns and loop size settings) can alter the circumferences and tensile properties of the tubular fabrics, respectively, and significantly varied the pressure values of the TCFs. This study enhanced the understanding of the effects of knitting factors on the pressure control of TCFs, thus facilitating the dimension and pressure design of compression textiles in future development.

Keywords: Laid-in knitted fabric, yarn-machinery settings, pressure magnitudes, quantitative analysis, compression textiles.

3661 Growth and Characterization of L-Asparagine (LAS) Crystal Admixture of Paranitrophenol (PNP): A NLO Material

Authors: Grace Sahaya Sheba, P. Omegala Priyakumari, M. Gunasekaran

Abstract:

L-asparagine admixtured with paranitrophenol (LAPNP) single crystals were grown successfully by the solution method with the slow evaporation technique at room temperature. Crystals of size 12 mm × 5 mm × 3 mm were obtained in 15 days. The grown crystals were brown in color and transparent. The solubility of the grown samples was determined at various temperatures. The lattice parameters of the grown crystals were determined by the X-ray diffraction technique. The reflection planes of the sample were confirmed by the powder X-ray diffraction study and the diffraction peaks were indexed. Fourier transform infrared (FTIR) studies were used to confirm the presence of various functional groups in the crystals. The UV–visible absorption spectrum was recorded to study the optical transparency of the grown crystal. The nonlinear optical (NLO) property of the grown crystal was confirmed by the Kurtz–Perry powder technique, and a study of its second harmonic generation efficiency in comparison with potassium dihydrogen phosphate (KDP) was made. The mechanical strength of the crystal was estimated by the Vickers hardness test. The grown crystals were subjected to thermogravimetric and differential thermal analysis (TG/DTA). The dielectric behavior of the sample was also studied.

Keywords: Characterization, Microhardness, Non-linear optical materials, Solution growth, Spectroscopy, XRD.

3660 Matching Pursuit based Removal of Cardiac Pulse-Related Artifacts in EEG/fMRI

Authors: Rainer Schneider, Stephan Lau, Levin Kuhlmann, Simon Vogrin, Maciej Gratkowski, Mark Cook, Jens Haueisen

Abstract:

Cardiac pulse-related artifacts in EEG recorded simultaneously with fMRI are complex and highly variable. Their effective removal is an unsolved problem. Our aim is to develop an adaptive removal algorithm based on the matching pursuit (MP) technique and to compare it to established methods using a visual evoked potential (VEP). We recorded the VEP inside the static magnetic field of an MR scanner (with artifacts) as well as in an electrically shielded room (artifact free). The MP-based artifact removal outperformed average artifact subtraction (AAS) and optimal basis set removal (OBS) in terms of restoring the EEG field map topography of the VEP. Subsequently, a dipole model was fitted to the VEP under each condition using a realistic boundary element head model. The source location of the VEP recorded inside the MR scanner was closest to that of the artifact-free VEP after cleaning with the MP-based algorithm as well as with AAS. While none of the tested algorithms offered complete removal, MP showed promising results due to its ability to adapt to variations in latency, frequency and amplitude of individual artifact occurrences while still utilizing a common template.
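
The core matching-pursuit step referred to above can be sketched in a few lines: at each iteration the dictionary atom most correlated with the residual is selected and its contribution subtracted. The sine dictionary below is a toy placeholder, not the artifact-specific template dictionary used in the study.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=20):
    """dictionary: (n_atoms, n_samples), each atom normalized to unit energy."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary @ residual          # correlation with every atom
        k = np.argmax(np.abs(corr))           # best-matching atom
        approx += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return approx, residual                   # approx ~ modelled artifact, residual ~ cleaned signal

# toy example
t = np.linspace(0, 1, 256)
atoms = np.array([np.sin(2 * np.pi * f * t) for f in range(1, 31)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
x = 3 * atoms[4] + 0.5 * np.random.randn(256)
approx, cleaned = matching_pursuit(x, atoms, n_iter=5)
```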

Keywords: Matching pursuit, ballistocardiogram, artifact removal, EEG/fMRI.

3659 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method

Authors: Sina Fadaie, Seyed Abolhassan Naeini

Abstract:

Coastal embankments play an important role among coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimation of the settlement of coastal embankments during construction is highly important for the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complexity is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas with considerable soft soils. Marine engineering studies are important in Iran due to the existence of two important coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi’s consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted using special methods in the PLAXIS software with the help of appropriate boundary conditions and soil layers. The results indicate that, for the existing soil conditions at the site, some parameters are important to consider in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions.
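
For context, the classical one-dimensional form of Terzaghi's consolidation theory investigated above is (standard notation, not the specific parameter set of the study):

$$\frac{\partial u}{\partial t} = c_v \frac{\partial^{2} u}{\partial z^{2}}, \qquad c_v = \frac{k}{m_v \gamma_w}, \qquad T_v = \frac{c_v t}{H_{dr}^{2}},$$

where $u$ is the excess pore pressure, $c_v$ the coefficient of consolidation, $m_v$ the coefficient of volume compressibility, $\gamma_w$ the unit weight of water and $H_{dr}$ the drainage path length; the settlement at time $t$ is the final consolidation settlement $S_{\infty} = m_v\,\Delta\sigma'\,H$ multiplied by the average degree of consolidation $U(T_v)$.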

Keywords: Consolidation, coastal embankments, settlement, numerical methods, finite elements method.

3658 Simulation Model for Optimizing Energy in Supply Chain Management

Authors: Nazli Akhlaghinia, Ali Rajabzadeh Ghatari

Abstract:

In today's world, with increasing environmental awareness, firms are facing severe pressure from various stakeholders, including the government and customers, to reduce their harmful effects on the environment. Over the past few decades, the increasing effects of global warming, climate change, waste, and air pollution have drawn the attention of experts worldwide to the issue of the green supply chain and led them to search for optimal solutions for greening. Green supply chain management (GSCM) plays an important role in motivating the sustainability of the organization. With increasing environmental concerns, the main objective of the research is to use the system-thinking methodology and the Vensim software to design a dynamic system model of a green supply chain and to observe its behaviors. Using this methodology, we look for the effects of a green supply chain structure on the behavioral dynamics of output variables. We simulate the complexity of GSCM over a period of 30 months and observe the behavior of variables including sustainability, the provision of green products, and the reduction of energy consumption and, consequently, of pollution.

Keywords: Supply chain management, green supply chain management, system dynamics, energy consumption.

3657 Enhancing the Performance of Wireless Sensor Networks Using Low Power Design

Authors: N. Mahendran, R. Madhuranthi

Abstract:

Wireless sensor networks (WSNs) are constantly in demand to process information more rapidly with lower energy and area cost. Presently, processor-based solutions find it difficult to achieve high processing speed with low power consumption. This paper presents a simple and accurate data processing scheme for a low-power wireless sensor node, based on a reduced number of processing elements (PEs). The presented model provides a simple recursive structure (SRS) to process the sampled data in the wireless sensor environment and to reduce the power consumption of the wireless sensor node. This model is used to process the incoming samples and produce a smaller amount of data sufficient to reconstruct the original signal. The ModelSim simulator was used to simulate the SRS structure. Functional simulation was carried out to validate the presented architecture. The Xilinx Power Estimator (XPE) tool was used to measure the power consumption. The experimental results show an average power consumption of 91 mW; this is a 42% improvement compared to the folded tree architecture.

Keywords: Power consumption, energy efficiency, low power WSN node, recursive structure, sleep/wake scheduling.

3656 A Theoretical Model for a Humidification Dehumidification (HD) Solar Desalination Unit

Authors: Yasser Elhenawy, M. Abd Elkader, Gamal H. Moustafa

Abstract:

A theoretical study of a humidification-dehumidification solar desalination unit has been carried out to better understand the effect of weather conditions on the unit’s productivity. A humidification-dehumidification (HD) solar desalination unit has been designed to provide fresh water for the population in remote arid areas. It consists of a solar water collector and an air collector, which provide the hot water and air to the desalination chamber. The desalination chamber is divided into humidification and dehumidification towers. The circulation of air between the two towers is maintained by forced convection. A mathematical model has been formulated, in which thermodynamic relations were used to study the flow and the heat and mass transfer inside the humidifier and dehumidifier. The present technique is applied in order to increase the unit performance. Heat and mass balances have been carried out, and the set of governing equations has been solved using the finite difference technique. The unit productivity has been calculated over the working day during the summer and winter seasons and compared with the available experimental results. The average accumulative productivity of the system in winter ranged between 2.5 and 4 (kg/m2)/day, while the average summer productivity was found to be between 8 and 12 (kg/m2)/day.

Keywords: Finite difference, Dehumidification, Humidification, Solar desalination, Solar collector, Simulation, Water productivity.

3655 Simulation Modeling for Analysis and Evaluation of the Internal Handling Fleet System at Shahid Rajaee Container Port

Authors: Parham Azimi, Mohammad Reza Ghanbari

Abstract:

The dramatic increase in sea-freight container transportation and the developing trends for using containers in multimodal handling systems through sea, rail, road and land in today’s market cause general managers of container terminals to face challenges such as increasing demand, a competitive situation, new investments and the expansion of new activities, and the need to use new methods to fulfil effective operations both along the quayside and within the yard. Among these issues, minimizing the turnaround time of vessels is considered to be the first aim of every container port system. Regarding the complex structure of container ports, this paper presents a simulation model that calculates the number of trucks needed in the Iranian Shahid Rajaee Container Port for handling containers between the berth and the yard. In this research, important criteria such as vessel turnaround time, gantry crane utilization and truck utilization have been considered. By analyzing the results of the model, it has been shown that increasing the number of trucks to 66 units has a significant effect on the performance indices of the port and can increase the loading and unloading capacity by up to 10.8%.

Keywords: Container Terminal, Gantry Crane Utilization, Simulation, Vessel Turnaround Time

3654 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data

Authors: N. Borjalilu, P. Rabiei, A. Enjoo

Abstract:

A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess and address operational safety risks, in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator’s Safety Management System (SMS), allowing it to detect, confirm, and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data in different aspects of event focus based on fuzzy set values. It permits the evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research gathers the opinions of aviation experts through a number of questionnaires related to flight data in four categories of occurrence that can take place during an accident or an incident: Runway Excursions (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
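
A minimal sketch of the ranking step is given below using the classical (crisp) TOPSIS closeness coefficient; the paper applies a fuzzy variant (F-TOPSIS), but the underlying idea of distance to the ideal and anti-ideal solutions is the same, and the criteria, weights and scores shown are hypothetical.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """decision_matrix: alternatives x criteria; benefit[j] is True if larger is better for criterion j."""
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)       # closeness coefficient per alternative

# four occurrence categories (RE, CFIT, MAC, LOC-I) scored against three hypothetical criteria
scores = np.array([[7, 3, 5], [9, 2, 6], [4, 6, 3], [8, 5, 7]], dtype=float)
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([False, True, False])
print(topsis(scores, weights, benefit))
```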

Keywords: F-TOPSIS, fuzzy set, FDM, flight safety.
