Search results for: real time model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32120


20000 Stomach Specific Delivery of Andrographolide from Floating in Situ Gelling System

Authors: Pravina Gurjar, Bothiraja Pour, Vijay Kumbhar, Ganesh Dama

Abstract:

Andrographolide (AG), a bioactive phytoconstituent, has a wide range of pharmacological actions. However, due to intestinal degradation, it shows low oral bioavailability. The aim of the present work was to develop a Floating In-situ gelling Gastro retentive System (FISGS) for AG in order to enhance its site-specific absorption, minimize pH-dependent hydrolysis in the alkaline environment, and further increase its therapeutic efficacy against peptic ulcer disease caused by H. pylori. Gellan-based floating in situ gelling systems of AG were prepared using sodium citrate and calcium carbonate. A 3² factorial design was used to study the effect of gellan and calcium carbonate concentration (independent variables) on dependent variables such as viscosity, floating lag time, and drug release. The developed system was evaluated for drug content, floating lag time, viscosity, and drug release. Drug content, viscosity, and floating lag time were found to be 81-99%, 67-117 cP, and 3-5 sec, respectively. The obtained system showed good in vitro floating ability for more than 12 h using 0.1 N HCl as dissolution medium, with an initial burst release followed by controlled zero-order drug release for up to 24 h. In vivo testing of the FISGS of AG in rats demonstrated significant antiulcer activity, which was evaluated by various parameters such as pH, volume, total acidity, millimole equivalents of H+ ions/30 min, and protein content of the gastric content. The densities of all formulation batches were approximately 0.9 and the floating duration was above 12 h. It was observed that viscosity increased with increasing gellan concentration, but all formulations remained within the optimum range. The drug content of the optimized batch was found to be 99.23%. In the histopathology study of the stomach, the villi at the mucosal surface, the intercellular junctions, the intestinal lumen, the epithelium, and the submucosal glands were intact in the formulation-treated and control group animals, as compared to pure drug AG and standard ranitidine. A gellan-based in situ gastroretentive floating system could thus be advantageous in terms of increased bioavailability of AG, maintaining an effective drug concentration in gastric fluid as well as in serum for a longer period of time.

Keywords: andrographolide, floating drug delivery, in situ gelling system, gastroretentive system

Procedia PDF Downloads 356
19999 Visual Overloaded on User-Generated Content by the Net Generation: Participatory Cultural Viewpoint

Authors: Hasanah Md. Amin

Abstract:

The existence of cyberspace and its growing content is real and overwhelming. Visuals, as one of the properties of cyber content, are becoming increasingly significant and popular among creators and users. The visual and aesthetic qualities of this content share many similarities. Aesthetics, although universal, show slight differences across the world. Aesthetic power can impress, influence, and bias users. A content creator who knows how to manipulate these visual and aesthetic expressions can dominate the scenario, and a user who is 'expressive literate' will gain much from the scenes. A user who understands aesthetics will be rewarded with competence, confidence, and certainly a personality-enhancing experience when carrying out a task while participating in this chaotic but promising cyberworld. The aim of this article is to gain knowledge from related literature and research regarding User-Generated Content (UGC), focusing on aesthetic expression by the Net generation. The objective of this preliminary study is to analyze aesthetic expression linked to visuals from a participatory cultural viewpoint, looking for meaning, value, patterns, and characteristics.

Keywords: visual overloaded, user-generated content, net generation, visual arts

Procedia PDF Downloads 432
19998 A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation

Authors: Yoshifumi Hino

Abstract:

An infinitely repeated prisoner's dilemma is a typical model of a teamwork situation. If both players choose costly actions and contribute to the team, then both players are better off. However, each player has an incentive to choose a selfish action. We analyze the game under costly observation: each player can observe the action of the opponent only if he pays an observation cost in that period. In reality, observation in teamwork situations is often costly. Members of some teams work in distinct rooms, areas, or countries; in those cases, they have to spend time and money if they want to observe other team members. The costly-observation assumption makes cooperation substantially more difficult because the equilibrium must satisfy incentive constraints not only on actions but also on observational decisions. Cooperation is hardest when the stage game is a prisoner's dilemma, because players can communicate through only two actions. We examine whether players can cooperate with each other in the prisoner's dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors in the repeated prisoner's dilemma can be approximated by sequential equilibria (efficiency result). We show the efficiency result without any randomization device under certain circumstances. This means that players can cooperate with each other without any randomization device even if observation is costly. Next, we assume that a public randomization device is available, and we show that any feasible and individually rational payoffs in the prisoner's dilemma can be approximated by sequential equilibria under a specific situation (folk theorem). This implies that players can achieve asymmetric teamwork, such as a leadership situation, when a public randomization device is available.

Keywords: costly observation, efficiency, folk theorem, prisoner's dilemma, private monitoring, repeated games

Procedia PDF Downloads 233
19997 The Effects of Consumer Inertia and Emotions on New Technology Acceptance

Authors: Chyi Jaw

Abstract:

Prior literature on innovation diffusion or acceptance has almost exclusively concentrated on consumers' positive attitudes and behaviors toward new products/services. Consumers' negative attitudes or behaviors toward innovations have received relatively little marketing attention, yet they occur frequently in practice. This study discusses the psychological factors consumers experience when they try to learn or use new technologies. According to recent research, technological innovation acceptance has been considered a dynamic or mediated process. This research argues that consumers can experience inertia and emotions in the initial use of new technologies. Given such consumer psychology, the question arises whether including consumer inertia (routine seeking and cognitive rigidity) and emotions increases the predictive power of the new technology acceptance model. The data from the empirical study show that technology complexity and consumer inertia can change consumer emotions (independent of performance benefits) and significantly affect the use of innovative technology. Finally, the study demonstrates the superior predictability of the hypothesized model, which lets managers better predict and influence the successful diffusion of complex technological innovations.

Keywords: cognitive rigidity, consumer emotions, new technology acceptance, routine seeking, technology complexity

Procedia PDF Downloads 288
19996 Gender, Age, and Race Differences in Self-Reported Reading Attitudes of College Students

Authors: Jill Villarreal, Kristalyn Cooksey, Kai Lloyd, Daniel Ha

Abstract:

Little research has been conducted to examine college students' reading attitudes, including students' perceptions of reading behaviors and reading abilities. This is problematic, as reading assigned course material is a critical component of an undergraduate student's academic success. For this study, flyers were electronically disseminated to instructors at 24 public and 10 private U.S. institutions in "reading-intensive departments" including Psychology, Sociology, Education, Business, and Communications. We requested that the online survey be completed as an in-class activity during the fall 2019 and spring 2020 semesters. All participants voluntarily completed the questionnaire anonymously. Of the participants, 280 self-identified their race as Black and 280 self-identified their race as White; 177 self-identified their gender as male and 383 as female. Participants ranged in age from 18 to 24. Factor analysis found four dimensions in the questions regarding reading. The first, which we interpret as "Reading Proficiency", accounted for 19% of the variability. The second dimension was "Reading Anxiety" (15%), the third was "Textbook Reading Ability" (9%), and the fourth was "Reading Enjoyment" (8%). Linear models on each of these dimensions revealed no effect of age, gender, race, or income on "Reading Proficiency". The linear model of "Reading Anxiety" showed a significant effect of race (p = 0.02), with higher anxiety in White students, as well as higher reading anxiety in female students (p < 0.001). The model of "Textbook Reading Ability" found a significant effect of race (p < 0.001), with more textbook problems reported by White students. The model of "Reading Enjoyment" showed significant effects of race (p = 0.013), with more enjoyment for White students; gender (p = 0.001), with higher enjoyment for female students; and age (p = 0.033), with older students showing higher enjoyment. These findings suggest that gender, age, and race are important factors in many aspects of college students' reading attitudes. Further research will investigate possible causes for these differences. In addition, the effectiveness of college-level programs to reduce reading anxiety, promote the reading of textbooks, and foster a love of reading will be assessed.
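
As a rough illustration of the analysis pipeline described above (factor extraction followed by linear models on each dimension), the sketch below assumes the survey items and demographics sit in a CSV file; the file name, item column prefix, and four factor labels are hypothetical, not the authors' actual instrument.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
import statsmodels.formula.api as smf

# One row per student: Likert-type reading items (item_1..item_k) plus
# 'race', 'gender', 'age', 'income' columns (hypothetical names).
survey = pd.read_csv("reading_attitudes.csv")
items = survey.filter(like="item_")

# Extract four latent dimensions, mirroring the four factors reported above.
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(items)
for i, name in enumerate(["proficiency", "anxiety", "textbook", "enjoyment"]):
    survey[name] = scores[:, i]

# Linear model on one dimension with the demographic predictors.
model = smf.ols("anxiety ~ C(race) + C(gender) + age + income", data=survey).fit()
print(model.summary())
```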

Keywords: age, college, gender, race, reading

Procedia PDF Downloads 144
19995 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoue Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoue Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoue Lake up to a forecast horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), the coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoue Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate forecast horizon of water level in the Nokoue Lake basin, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
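
A minimal sketch of an LSTM forecaster of the kind described, assuming the daily water levels are available as a one-dimensional array; the file name, look-back window, and layer sizes are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=30, horizon=3):
    """Build (X, y) pairs: `lookback` past days -> level `horizon` days ahead."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X)[..., np.newaxis], np.array(y)

levels = np.loadtxt("nokoue_daily_levels.txt")       # hypothetical input file
X, y = make_windows(levels, lookback=30, horizon=3)  # t+3 day horizon, as chosen above

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(30, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2)
```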

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 58
19994 Performance of Coded Multi-Line Copper Wire for G.fast Communications in the Presence of Impulsive Noise

Authors: Israa Al-Neami, Ali J. Al-Askery, Martin Johnston, Charalampos Tsimenidis

Abstract:

In this paper, we focus on the design of a multi-line copper wire (MLCW) communication system. First, we construct our proposed MLCW channel and verify its characteristics based on the Kolmogorov-Smirnov test. In addition, we apply Middleton Class A impulsive noise (IN) to the copper channel for further investigation. Second, the MIMO G.fast system is adopted utilizing the proposed MLCW channel model and is compared to a single-line G.fast system. Finally, the performance of the coded system is obtained utilizing a concatenated interleaved Reed-Solomon (RS) code with four-dimensional trellis-coded modulation (4D TCM) and compared to the single-line G.fast system. Simulations are obtained for the high quadrature amplitude modulation (QAM) constellations that are commonly used in G.fast communications; the results demonstrate that the bit error rate (BER) performance of the coded MLCW system shows an improvement compared to the single-line G.fast system.
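
For context, the Middleton Class A noise applied to the channel here is commonly generated as a Poisson-weighted Gaussian mixture; the sketch below follows that textbook construction with arbitrary parameter values, not the settings used in the paper.

```python
import numpy as np

def middleton_class_a(n, A=0.1, gamma=0.01, sigma2=1.0, rng=None):
    """Draw n samples of Middleton Class A impulsive noise.

    A      : impulsive index (mean number of active interferers)
    gamma  : ratio of background Gaussian power to impulsive power
    sigma2 : total noise power
    """
    rng = np.random.default_rng() if rng is None else rng
    m = rng.poisson(A, size=n)                          # interferers active per sample
    var_m = sigma2 * (m / A + gamma) / (1.0 + gamma)    # conditional Gaussian variance
    return rng.normal(0.0, np.sqrt(var_m))

noise = middleton_class_a(100_000, A=0.1, gamma=0.01)   # added to the channel output
```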

Keywords: G.fast, Middleton Class A impulsive noise, mitigation techniques, Copper channel model

Procedia PDF Downloads 129
19993 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving an accuracy score of 0.93. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
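
A minimal sketch of the comparison workflow described above, assuming the 189 descriptors and solubility labels have already been extracted into arrays; the file names are hypothetical, and because the abstract reports accuracy scores, plain linear regression is replaced here by its classification counterpart (logistic regression).

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X = np.load("aqsoldb_features.npy")   # (9981, 189): MACCS + RDKit + structural features
y = np.load("aqsoldb_classes.npy")    # binned solubility class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "gradient boosting": GradientBoostingClassifier(random_state=42),
    "neural network": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
}

for name, model in models.items():
    start = time.time()
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name:20s} accuracy={acc:.3f}  time={time.time() - start:.1f}s")
```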

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 34
19992 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with the least latency. This paper proposes a multi-agent based distributed voltage level control. In this method, a flat architecture of agents is used, and the agents involved in the controlling procedure are the On Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, DG curtailment costs, and a voltage damage cost (based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream from its controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At each time step, a token is generated by the OLTCA and transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC) and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is less significant. Another observation is that the new stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25%, 45%, and 65% penetration, the calculated limits are ±5%, ±6.25%, and ±8.75%, respectively. The observed results conclude that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses, whereas case 2 gradually reduces network losses over time through the proposed iterative loss cost optimization by the OLTCA.
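
A highly simplified, single-feeder sketch of the token-passing logic described above; the class names, the spare-capacity check, and the voltage band are illustrative placeholders rather than the authors' implementation.

```python
V_MIN, V_MAX = 0.95, 1.05                   # per-unit voltage band (illustrative)

class Controller:                           # stands in for a DG or SVC agent
    def __init__(self, capacity):
        self.capacity = capacity
    def has_spare_capacity(self):
        return self.capacity > 0
    def adjust_setpoint(self, node):
        self.capacity -= 1
        node.voltage = min(max(node.voltage, V_MIN), V_MAX)

class Node:
    def __init__(self, index, voltage, controllers):
        self.index, self.voltage = index, voltage
        self.controllers = controllers      # nearby controllers, ordered by vicinity

def oltca_step(nodes, change_tap):
    """One time step of the supervisory OLTC agent: pass a token node to node."""
    for node in nodes:                      # token travels downstream
        if not (V_MIN <= node.voltage <= V_MAX):
            # Token reaches a violated node: its load agent tries local devices first.
            for ctrl in node.controllers:
                if ctrl.has_spare_capacity():
                    ctrl.adjust_setpoint(node)
                    break
            else:
                change_tap(node)            # OLTC acts only as the last resort
            break

nodes = [Node(0, 1.00, [Controller(1)]), Node(1, 1.08, [Controller(0)])]
oltca_step(nodes, change_tap=lambda n: print(f"tap change requested for node {n.index}"))
```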

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 306
19991 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may pose great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, and their study can be difficult due to the complexity of their fracture networks. This can lead to geological uncertainties, which are important for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves the utilization of a hybrid fracture modeling approach, including the discrete fracture network (DFN) method and an implicit fracture network, which offer enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, the method has significant disadvantages, since the number of fractures in the field can be very high; due to limitations in the amount of main memory, it is very difficult to represent all of these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a DFN model can be constructed to represent the fracture characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of the simulation results enables the identification of zones of increased fracturing and of optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual porosity and dual permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 72
19990 Applying Genetic Algorithm in Exchange Rate Models Determination

Authors: Mehdi Rostamzadeh

Abstract:

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the foreign exchange market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), equilibrium exchange rate, and portfolio balance models as fundamental models, and Auto-Regressive (AR), Moving Average (MA), Auto-Regressive Moving Average (ARMA), and Mean Reversion (MR) as technical models for the Iranian Rial against the European Union's Euro, using monthly data from January 1992 to December 2014. Then, we fed these models into the genetic algorithm system to measure the optimal weight for each model. These optimal weights were measured according to four criteria, i.e., R-squared (R²), mean square error (MSE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Based on the obtained results, it seems that fundamental models explain the behavior of the Iranian Rial against the EU Euro exchange rate better than technical models.
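
A minimal sketch of how such combination weights could be searched with a simple genetic algorithm, assuming each candidate model's monthly forecasts are stacked column-wise in a matrix; only the RMSE criterion is shown, and the GA operators and settings are generic rather than the study's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(actual, combined):
    return np.sqrt(np.mean((actual - combined) ** 2))

def ga_weights(forecasts, actual, pop_size=100, generations=200, mut_sigma=0.05):
    """Search combination weights over the candidate exchange rate models.

    forecasts : (n_months, n_models) matrix, one column per model
    actual    : (n_months,) observed exchange rate
    """
    n_models = forecasts.shape[1]
    pop = rng.random((pop_size, n_models))
    pop /= pop.sum(axis=1, keepdims=True)                    # weights sum to one
    for _ in range(generations):
        fitness = np.array([rmse(actual, forecasts @ w) for w in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]  # selection: keep best half
        # Crossover: average two random parents; mutation: small Gaussian perturbation.
        n_children = pop_size - len(parents)
        idx = rng.integers(0, len(parents), (n_children, 2))
        children = parents[idx].mean(axis=1) + rng.normal(0.0, mut_sigma, (n_children, n_models))
        children = np.clip(children, 0.0, None)
        children /= children.sum(axis=1, keepdims=True)
        pop = np.vstack([parents, children])
    fitness = np.array([rmse(actual, forecasts @ w) for w in pop])
    return pop[np.argmin(fitness)]
```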

Keywords: exchange rate, genetic algorithm, fundamental models, technical models

Procedia PDF Downloads 268
19989 Development of a Forecast-Supported Approach for the Continuous Pre-Planning of Mandatory Transportation Capacity for the Design of Sustainable Transport Chains: A Literature Review

Authors: Georg Brunnthaller, Sandra Stein, Wilfried Sihn

Abstract:

Transportation service providers are facing increasing volatility concerning future transport demand. Short planning horizons and planning uncertainties lead to reduced capacity utilization and increasing empty mileage. To overcome these challenges, a model is proposed to continuously pre-plan future transportation capacity in order to redesign and adjust the intermodal fleet accordingly. It is expected that the model will enable logistics service providers to organize more economically and ecologically sustainable transport chains in a more flexible way. To further describe these planning aspects, this paper gives a structured overview of transportation planning problems. The focus is on the strategic and tactical planning levels, comprising the relevant fleet-sizing, service-network-design, and choice-of-carrier problems. Models and their solution techniques are presented, and the literature review is concluded with an outlook on our future research directions.

Keywords: freight transportation planning, multimodal, fleet-sizing, service network design, choice of transportation mode, review

Procedia PDF Downloads 311
19988 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is microstructure independent, i.e., it depends only on the bulk properties of the disordered granular sample and on the sound wave velocity of the granular sample, and hence on the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of the particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and remove the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
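
A minimal sketch of a one-dimensional polydisperse granular chain with Hertzian contacts, integrated with velocity Verlet; the particle properties, stiffness prefactor, and driving pulse are illustrative values, not the simulation parameters of this study.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
radii = rng.normal(1.0, 0.05, N)               # polydispersity: Gaussian particle radii
mass = radii ** 3                               # mass proportional to volume (arbitrary units)
x = np.cumsum(2 * radii) - radii                # particles initially just touching
v = np.zeros(N)
v[0] = 0.1                                      # pulse imparted at the left end
k = 1.0                                         # Hertzian stiffness prefactor

def forces(x):
    overlap = (x[:-1] + radii[:-1] + radii[1:]) - x[1:]     # positive when compressed
    fc = k * np.where(overlap > 0.0, overlap, 0.0) ** 1.5   # Hertz: F ~ delta^(3/2)
    f = np.zeros(N)
    f[:-1] -= fc                                # contact pushes the left particle back
    f[1:] += fc                                 # and the right particle forward
    return f

dt, f = 1e-3, forces(x)
for _ in range(20000):                          # velocity Verlet integration
    v += 0.5 * dt * f / mass
    x += dt * v
    f = forces(x)
    v += 0.5 * dt * f / mass
```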

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 197
19987 How Addictive Are They: Effects of E-Cigarette Vapor on Intracranial Self-Stimulation Compared to Nicotine Alone

Authors: Annika Skansberg

Abstract:

Electronic cigarettes (e-cigarettes), which use vapor to deliver nicotine, have recently become popular, especially amongst adolescents. Because of this, the FDA has decided to regulate e-cigarettes and therefore would like to determine the abuse liability of these products compared to traditional nicotine products. This will allow the agency to determine the public health impact of regulation and shape the decisions it makes when creating new laws. This study assessed the abuse liability of Aroma E-juice Dark Honey Tobacco compared to nicotine alone using an animal model. This e-liquid contains minor alkaloids that may increase abuse liability compared to nicotine alone. The abuse liability of nicotine alone and of the e-juice liquid was compared in rats using intracranial self-stimulation (ICSS) thresholds. The e-liquid had weaker aversive effects at high nicotine doses in the ICSS model, suggesting that the minor alkaloids in the e-liquid allow users to take higher doses without experiencing the negative effects felt when using high doses of nicotine alone. This finding could mean that e-cigarettes have a higher abuse liability than nicotine alone, but more research is needed before this can be concluded. These findings are useful in assessing the abuse liability of e-cigarettes and will help inform the FDA while regulating these products.

Keywords: electronic cigarettes, intra-cranial self stimulation, abuse liability, anhedonia

Procedia PDF Downloads 307
19986 The Association between Facebook Emotional Dependency with Psychological Well-Being in Eudaimonic Approach among Adolescents 13-16 Years Old

Authors: Somayyeh Naeemi, Ezhar Tamam

Abstract:

In most countries, Facebook ranks high in usage among social network sites. Several studies have examined the effect of Facebook intensity on individuals' psychological well-being; however, few studies have investigated its effect on eudaimonic well-being. The current study explored how emotional dependency on Facebook relates to psychological well-being in terms of eudaimonic well-being. A total of 402 adolescents aged 13-16 years studying in upper secondary schools in Malaysia participated in this study. A negative association was expected between emotional dependency on Facebook, time spent on Facebook, and psychological well-being. The moderating effect of self-efficacy on psychological well-being was also examined. The results from Structural Equation Modeling revealed that emotional dependency on Facebook has a negative effect on adolescents' psychological well-being. Surprisingly, self-efficacy did not moderate the relationship between emotional dependency on Facebook and psychological well-being. Lastly, it is the emotional dependency on Facebook, and not the time spent on Facebook, that lessens adolescents' psychological well-being, suggesting the value of investigating Facebook usage among college students in future studies.

Keywords: emotional dependency to facebook, psychological well-being, eudaimonic well-being, self-efficacy, adolescent

Procedia PDF Downloads 511
19985 A Unique Multi-Class Support Vector Machine Algorithm Using MapReduce

Authors: Aditi Viswanathan, Shree Ranjani, Aruna Govada

Abstract:

With data sizes constantly expanding, and with classical machine learning algorithms that analyze such data requiring larger and larger amounts of computation time and storage space, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done in developing distributed binary SVM algorithms and multi-class SVM algorithms individually, the field of multi-class distributed SVMs remains largely unexplored. This research seeks to develop an algorithm that implements the Support Vector Machine over a multi-class data set and is efficient in a distributed environment. For this, we recursively choose the best binary split of a set of classes using a greedy technique, much like a divide-and-conquer approach. Our algorithm has shown better computation time during the testing phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the data set grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
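
A minimal single-machine sketch of the recursive greedy binary-split idea, without the MapReduce distribution layer; the split heuristic used here (balancing class counts) is only an illustrative stand-in for the authors' choice of best binary split.

```python
import numpy as np
from sklearn.svm import SVC

class BinarySplitSVM:
    """Tree of binary SVMs: each node splits the remaining classes into two groups."""

    def fit(self, X, y):
        classes = np.unique(y)
        if len(classes) == 1:
            self.label, self.clf = classes[0], None      # leaf: a single class remains
            return self
        # Greedy split placeholder: assign classes (largest first) to the lighter group.
        counts = sorted(((np.sum(y == c), c) for c in classes), reverse=True)
        left, right, nl, nr = [], [], 0, 0
        for n, c in counts:
            if nl <= nr:
                left.append(c); nl += n
            else:
                right.append(c); nr += n
        target = np.isin(y, left).astype(int)
        self.clf = SVC(kernel="rbf").fit(X, target)      # binary SVM for this split
        self.left = BinarySplitSVM().fit(X[target == 1], y[target == 1])
        self.right = BinarySplitSVM().fit(X[target == 0], y[target == 0])
        return self

    def predict_one(self, x):
        node = self
        while node.clf is not None:                      # walk down the split tree
            go_left = node.clf.predict(x.reshape(1, -1))[0] == 1
            node = node.left if go_left else node.right
        return node.label
```

For example, `BinarySplitSVM().fit(X_train, y_train)` trains the tree of binary classifiers, after which `predict_one` routes a single sample down the splits to its class.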

Keywords: distributed algorithm, MapReduce, multi-class, support vector machine

Procedia PDF Downloads 396
19984 Study of Ultrasonic Waves in Unidirectional Fiber-Reinforced Composite Plates for the Aerospace Applications

Authors: DucTho Le, Duy Kien Dao, Quoc Tinh Bui, Haidang Phan

Abstract:

The article is concerned with the motion of ultrasonic guided waves in a unidirectional fiber-reinforced composite plate under acoustic sources. Such a unidirectional composite material has orthotropic elastic properties, as it is very stiff along the fibers and rather compliant across the fibers. The dispersion equations of free Lamb waves propagating in an orthotropic layer are derived, resulting in the dispersion curves. The connection of these equations to the Rayleigh-Lamb frequency relations of isotropic plates is discussed. By the use of reciprocity in elastodynamics, closed-form solutions of the elastic wave motions subjected to time-harmonic loads in the layer are computed in a simple manner. We also consider the problem of Lamb waves generated by a set of time-harmonic sources. The obtained computations can be very useful for developing ultrasound-based methods for the nondestructive evaluation of composite structures.
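
For context, the Rayleigh-Lamb frequency relations for an isotropic plate of half-thickness h, to which the orthotropic dispersion equations are related in the discussion above, take the standard textbook form (quoted here for reference, not reproduced from the paper):

```latex
\frac{\tan(qh)}{\tan(ph)} = -\frac{4k^{2}pq}{\left(q^{2}-k^{2}\right)^{2}}
\quad \text{(symmetric modes)}, \qquad
\frac{\tan(qh)}{\tan(ph)} = -\frac{\left(q^{2}-k^{2}\right)^{2}}{4k^{2}pq}
\quad \text{(antisymmetric modes)},
```

where p² = ω²/c_L² - k², q² = ω²/c_T² - k², k is the wavenumber, ω the angular frequency, and c_L, c_T are the longitudinal and transverse bulk wave speeds.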

Keywords: Lamb waves, fiber-reinforced composite plates, dispersion equations, nondestructive evaluation, reciprocity theorems

Procedia PDF Downloads 143
19983 Design and Development of the Force Plate for the Study of Driving-Point Biodynamic Responses

Authors: Vikas Kumar, V. H. Saran, Arpit Mathur, Avik Kathuria

Abstract:

The evaluation of the biodynamic responses of the human body to whole body vibration exposure is necessary to quantify the exposure effects. A force plate model was designed with the help of CAD software and investigated by performing modal, stress, and strain analyses using a finite element approach in the software. The results of the modal, stress, and strain analyses were within the limits required for the measurement of biodynamic responses to whole body vibration. The physical force plate was then manufactured, fixed to the vibration simulator, and used in the experiments for the evaluation of the apparent mass responses of ten recruited subjects standing in an erect posture and exposed to vertical whole body vibration. The platform was excited with sinusoidal vibration at vibration magnitudes of 1.0 and 1.5 m/s² rms at frequencies of 2, 3, 4, 5, 6, 8, 10, 12.5, 16, and 20 Hz. The magnitude of the normalised apparent mass shows the trend observed in many past studies, with a peak in the normalised apparent mass at 4 and 5 Hz for vertical whole body vibration. Nonlinearity with respect to vibration magnitude was also observed in the normalised apparent mass responses.

Keywords: whole body vibration, apparent mass, modeling, force plate

Procedia PDF Downloads 408
19982 Evaluating Robustness of Conceptual Rainfall-runoff Models under Climate Variability in Northern Tunisia

Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui

Abstract:

To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study aims at assessing the robustness of three conceptual rainfall-runoff models (GR4J, HBV and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated with a differential split-sample test based on a climate classification of the observation period regarding both precipitation and temperature conditions. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarcity of water resources. They cover the main hydrographical basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural, since these basins are located upstream of storage dams and in areas where withdrawals are negligible. A 30-year common period (1970‒2000) was considered to capture a large spread of hydro-climatic conditions. Calibration was based on the Kling-Gupta Efficiency (KGE) criterion, while the evaluation of model transferability was performed according to the Nash-Sutcliffe Efficiency criterion and the volume error. The three hydrological models were shown to behave similarly under climate variability. The models prove better able to simulate the runoff pattern when transferred toward wetter periods than when transferred toward drier periods. The limits of transferability are beyond -20% of precipitation and +1.5 °C of temperature in comparison with the calibration period. The deterioration of model robustness could in part be explained by the climate dependency of some parameters.
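
For reference, the evaluation criteria named above (NSE, KGE and volume error) have standard definitions that can be computed directly from observed and simulated discharge series, as in the sketch below.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; below 0 is worse than the mean of obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency combining correlation, bias ratio and variability ratio."""
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()       # bias ratio
    alpha = sim.std() / obs.std()        # variability ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (beta - 1.0) ** 2 + (alpha - 1.0) ** 2)

def volume_error(obs, sim):
    """Relative error on the total simulated volume over the evaluation period."""
    return (sim.sum() - obs.sum()) / obs.sum()
```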

Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia

Procedia PDF Downloads 289
19981 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence

Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and in the research community as well. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis cannot capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad interest in aerodynamic research and industrial applications. The PANS equations, being derived from base RANS, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance the capabilities of PANS for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the behavior of PANS in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of a separated non-homogeneous flow field (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM will be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise the study of the influence of the PANS filter-width control parameter on the turbulent stresses, the homogeneous analysis performed over typical velocity fields, and an asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM will be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.

Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows

Procedia PDF Downloads 143
19980 An Enhanced Floor Estimation Algorithm for Indoor Wireless Localization Systems Using Confidence Interval Approach

Authors: Kriangkrai Maneerat, Chutima Prommak

Abstract:

Indoor wireless localization systems have played an important role in enhancing context-aware services. Determining the position of mobile objects in complex indoor environments, such as multi-floor buildings, is a very challenging problem. This paper presents an effective floor estimation algorithm, which can accurately determine the floor where a mobile object is located. The proposed algorithm is based on the confidence interval of the summation of the online Received Signal Strength (RSS) obtained from IEEE 802.15.4 Wireless Sensor Networks (WSN). We compare the performance of the proposed algorithm with that of other floor estimation algorithms in the literature by conducting a real WSN implementation in our facility. The experimental results and analysis show that the proposed floor estimation algorithm outperformed the other algorithms and provided the highest floor estimation accuracy, up to 100%, with a 95% confidence interval.
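
A minimal sketch of the confidence-interval idea described above: build a 95% interval for the expected sum of RSS readings on each floor from training data, then assign an online scan to the floor whose interval contains its summed RSS; the data layout and the fallback rule are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def floor_intervals(training, confidence=0.95):
    """training: dict mapping floor id -> array of summed-RSS samples from that floor."""
    intervals = {}
    for floor, sums in training.items():
        mean, sem = np.mean(sums), stats.sem(sums)
        intervals[floor] = stats.t.interval(confidence, len(sums) - 1, loc=mean, scale=sem)
    return intervals

def estimate_floor(online_rss, intervals):
    """online_rss: RSS values received from the anchor nodes in one online scan."""
    total = np.sum(online_rss)
    candidates = [f for f, (lo, hi) in intervals.items() if lo <= total <= hi]
    if candidates:
        return candidates[0]
    # Fallback: nearest interval midpoint when no interval contains the online sum.
    return min(intervals, key=lambda f: abs(total - np.mean(intervals[f])))
```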

Keywords: floor estimation algorithm, floor determination, multi-floor building, indoor wireless systems

Procedia PDF Downloads 411
19979 Efficient Feature Fusion for Noise Iris in Unconstrained Environment

Authors: Yao-Hong Tsai

Abstract:

This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in our daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied for texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of the verification system through iris recognition.
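
A minimal sketch of the fusion-then-LBP pipeline described, assuming the iris images have already been detected, aligned and converted to grayscale arrays; the wavelet, fusion rule and LBP settings here are generic choices, not the paper's exact scheme.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def fuse_pair(img1, img2, wavelet="db2"):
    """Wavelet-based fusion: average the approximations, keep the stronger details."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)
    cA = (cA1 + cA2) / 2.0
    details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                    for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
    return pywt.idwt2((cA, details), wavelet)

def lbp_histogram(img, points=8, radius=1):
    """Uniform LBP histogram used as the texture feature for verification."""
    codes = local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Usage (iris_a, iris_b: aligned grayscale images of the same iris as 2-D arrays):
# fused = fuse_pair(iris_a, iris_b)
# feature = lbp_histogram(fused)
```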

Keywords: image fusion, iris recognition, local binary pattern, wavelet

Procedia PDF Downloads 364
19978 A Numerical Study of the Tidal Currents in the Persian Gulf and Oman Sea

Authors: Fatemeh Sadat Sharifi, A. A. Bidokhti, M. Ezam, F. Ahmadi Givi

Abstract:

This study focuses on tidal oscillations and their speeds in order to derive a general pattern in these seas. The purpose of the analysis is to find the amplitude and phase of several important tidal components. The Regional Ocean Modeling System (ROMS) was therefore used to examine the correlation and accuracy of this pattern. Finding the tidal harmonic components allows us to predict the tide in this region. Better prediction of these tides supports the design of offshore platforms and suitable wave breakers, coastal construction, navigation, fisheries, port management, and tsunami research. The results show fair accuracy in the sea surface height (SSH) and reveal that tidal currents are highest in the Hormuz Strait and in the narrow and shallow region near Kish Island. To investigate the flow patterns of the region, the results of a limited-area FVCOM model were utilized. Many features of the present-day view of ocean circulation have some precedent in tidal and long-wave studies. Tides are categorized among the long waves, so studies of tidal currents indeed inform subsequent studies of sea and ocean circulation.

Keywords: barotropic tide, FVCOM, numerical model, OTPS, ROMS

Procedia PDF Downloads 223
19977 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach

Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde

Abstract:

Thermo-mechanical treatment (TMT) of rebars is a critical process to impart sufficient strength and ductility to the rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot rolled bar (at a temperature of around 1080°C) is passed through water boxes where it is quenched under high-pressure water jets (at a temperature of around 25°C). The quenching rate dictates a composite structure consisting of four non-homogeneously distributed phases of the rebar microstructure: pearlite-ferrite, bainite, and tempered martensite from core to rim. The ferrite and pearlite phases present at the core impart ductility to the rebar, while the martensitic rim imparts the appropriate strength. The TMT process is difficult to model, as it brings together a multitude of complex physics such as heat transfer, highly turbulent fluid flow, and multicomponent and multiphase flow within the control volume. Additionally, the presence of a film boiling regime (above the Leidenfrost point) due to steam formation adds complexity to the domain. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and the percentage martensite rim thickness of the rebar during the quenching process. The model has been validated against 16 mm rolling at the New Bar Mill (NBM) plant of Tata Steel Limited, India. Furthermore, based on scenario analyses, an optimal configuration of nozzles was found, which helped subsequently increase the rolling speed.

Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment

Procedia PDF Downloads 206
19976 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists of intensifying the use of an asset, while the latter is based on the use of waste, fatal energy (and utilities) as alternatives to standard inputs. Both modes, in fact, rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure regarding their resources and requirements. For infrastructure sharing (following the economic engineering literature), the cost function of capacity induces economies of scale, so that demand pooling reduces global expenses. Grassroots investment sizing decisions and ex-post pricing strongly depend on the design optimization phase for capacity sizing, whereas ex-post operational cost sharing to minimize budgets is less dependent upon production rates. Value is then mainly design driven. For resource substitution, the value of synergies stems from availability and is at risk regarding both supplier and user load profiles and market prices of the standard input. The reduction of baseline input purchasing cost is thus more driven by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow it to produce non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.

Keywords: business model, capacity, sourcing, synergies

Procedia PDF Downloads 170
19975 Effect of Cellulase Pretreatment for n-Hexane Extraction of Oil from Garden Cress Seeds

Authors: Boutemak Khalida, Dahmani Siham

Abstract:

Garden cress (Lepidium sativum L.), belonging to the family Brassicaceae, is an edible annual herb. Its various parts (roots, leaves and seeds) have been used to treat various human ailments. Its seed extracts have been screened for various biological activities, such as hypotensive, antimicrobial, bronchodilator, hypoglycaemic and antianemic activities. The aim of the present study is to optimize the process parameters (cellulase concentration and incubation time) of the enzymatic pre-treatment of garden cress seeds and to evaluate the effect of cellulase pre-treatment of the crushed seeds on the oil yield, physico-chemical properties and antibacterial activity, compared to the non-enzymatic method. The optimum parameters of the cellulase pre-treatment were as follows: cellulase concentration of 0.1% w/w and incubation time of 2 h. After enzymatic pre-treatment, the oil was extracted with n-hexane for 1.5 h; the oil yield was 4.01% for the cellulase pre-treatment as against 10.99% in the control sample. The decrease in yield might be a result of mucilage: garden cress seeds are covered with a layer of mucilage which gels on contact with water. At the same time, the antibacterial activity was assessed using the agar diffusion method against 4 food-borne pathogens (Escherichia coli, Salmonella typhi, Staphylococcus aureus, Bacillus subtilis). The results showed that the bacterial strains are very sensitive to the oil obtained with cellulase pre-treatment. Staphylococcus aureus was extremely sensitive, with the largest zone of inhibition (40 mm); Escherichia coli and Salmonella typhi were very sensitive to the oil, with a zone of inhibition of 26 mm; and Bacillus subtilis was moderately sensitive, with an inhibition zone of 16 mm. However, the strains did not exhibit sensitivity to the oil obtained without enzymatic pre-treatment, with zones of inhibition below 8 mm. Enzymatic pre-treatment could be useful for the antibacterial activity of the oil and holds good potential for use in the food and pharmaceutical industries.

Keywords: Lepidium sativum L., cellulase, enzymatic pretreatment, antibacterial activity

Procedia PDF Downloads 455
19974 The Nonlinear Research on Rotational Stiffness of Cuplock Joint

Authors: Liuyu Zhang, Di Mo, Qiang Yan, Min Liu

Abstract:

As important equipment in the construction field, the cuplock scaffold plays an important role in the construction process, and as a scaffold connecting member, the cuplock joint is of great importance. In order to explore how the nonlinear rotational stiffness characteristics of different structural forms of the cuplock joint change with tightening torque under different load conditions, ANSYS is used to establish four kinds of cuplock joint models with different forces to simulate the real loading situation. By setting different load conditions, in which the cuplock is loaded at a certain distance from the cuplock joint in a certain direction until the cuplock is damaged, and by considering the gap between the crossbar joint and the vertical bar, the differences in the influence of the structural form and tightening torque on the rotational stiffness of the cuplock under different load conditions are compared. This is significant for improving the accuracy of calculating the bearing capacity and stability of the cuplock steel pipe scaffold.

Keywords: cuplock joint, highway tunnel, non-linear characteristics, rotational stiffness, scaffold stability, theoretical analysis

Procedia PDF Downloads 118
19973 Material Characterization of Medical Grade Woven Bio-Fabric for Use in ABAQUS *FABRIC Material Model

Authors: Lewis Wallace, William Dempster, David Nash, Alexandros Boukis, Craig Maclean

Abstract:

This paper, through traditional test methods and close adherence to international standards, presents a characterization study of a woven polyethylene terephthalate (PET) fabric. Testing is carried out in the axial, shear, and out-of-plane (bend) directions, and the results are fitted to the *FABRIC material model within ABAQUS FEA. The non-linear behaviors of the fabric in the axial and shear directions, and its behaviors at the macro scale, are explored at the meso-scale level. The medical-grade bio-fabric is tested in untreated and heat-treated forms, and deviations are closely analyzed at the micro, meso, and macro scales to determine the effects of the process. The heat-treatment process was found to increase the stiffness of the fabric in the axial and bending stiffness tests but had a negligible effect on the shear response. The ability of *FABRIC to capture behaviors unique to fabric deformation is discussed, whereby its phenomenological input can accurately represent the experimentally derived inputs.

Keywords: experimental techniques, FEA modelling, materials characterization, post-processing techniques

Procedia PDF Downloads 90
19972 Relation between Tourism and Health: Case Study AIDS in Lebanon

Authors: Viana Hassan

Abstract:

Each year, 600 million tourists travel abroad to practice several types of tourism. Nowadays, whatever the type of tourism practiced, it is considered a real public health issue that can contribute to the spread of several diseases, such as AIDS, H1N1, and NDM-1. With regard to HIV/AIDS, Lebanon is still considered a low HIV-prevalence country; however, potential risks are associated with the mobility of the population, migration and tourism. The total number of cases reported by the Ministry of Health from 1989 until the end of 2011 is 1455, with an average of 85 new cases per year over the last three years. The main reason for this increase is travel and migration, which represent 50% of the risks reported by cumulative cases. Given the significance of this kind of epidemic, it would be interesting to study the evolution of HIV/AIDS and its relation with travel and tourism. The main aim of this research is to study the relation between tourism and health in general and, more specifically, to understand the relation between tourism and AIDS, the problem of HIV transmission in Lebanon, the ways of contamination, and the countries in which these people are contaminated.

Keywords: AIDS, tourism, health, Lebanon

Procedia PDF Downloads 331
19971 Neural Network Based Approach of Software Maintenance Prediction for Laboratory Information System

Authors: Vuk M. Popovic, Dunja D. Popovic

Abstract:

The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it corresponds to maintenance. Software maintenance involves modifications to keep a software project usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance and the management of software maintenance are recognized as the two most important and most expensive processes in the life of a software product. This research bases the prediction of maintenance on risk and time evaluation and uses them as data sets for working with neural networks. The aim of this paper is to provide support to project maintenance managers. They will be able to pass the issues planned for the next software-service-patch to the experts for risk and working time evaluation, and afterwards to feed all the data to neural networks in order to obtain a software maintenance prediction. This process will lead to a more accurate prediction of the working hours needed for the software-service-patch, which will eventually lead to better budget planning for software maintenance projects.
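
A minimal sketch of the prediction step described above, assuming the experts' risk and working-time evaluations for past service patches are stored as tabular features with the actual hours as the target; the column names and network size are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

# One row per past software-service-patch issue (hypothetical column names).
data = pd.read_csv("maintenance_history.csv")
X = data[["risk_score", "estimated_hours", "issue_count", "component_complexity"]]
y = data["actual_hours"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("MAE (hours):", mean_absolute_error(y_te, model.predict(X_te)))
```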

Keywords: laboratory information system, maintenance engineering, neural networks, software maintenance, software maintenance costs

Procedia PDF Downloads 351