Search results for: reduced order aeroelastic model (ROAM)
30516 Minimalism in Product Packaging: Alternatives to Bubble Wrap
Authors: Anusha Chanda, Reenu Singh
Abstract:
Packaging is one of the major contributors to global waste. While efforts are being made to switch to more sustainable types of packaging, such as replacing single-use plastics with paper, not all polluting materials have been rethought in terms of recycling. Minimalism in packaging design can greatly reduce the amount of waste produced. While online companies have shifted to using cardboard boxes for packages, a large amount of waste is still generated from other materials associated with cardboard packaging, such as tape, bubble wrap, and plastic wrap, among others. Minimalism also works by reducing extra packaging and increasing the reusability of the material. This paper reviews research related to minimalism in packaging design, minimalism, and sustainability. A survey was conducted to find out the different ways in which minimalism can be implemented in packaging design. Information gathered from the research and responses from the survey was used to ideate product design alternatives as sustainable substitutes for bubble wrap in packaging. This would greatly reduce the amount of packaging waste and improve environmental quality.
Keywords: environment, minimalism, packaging, product design, sustainable
Procedia PDF Downloads 256
30515 A Model of Knowledge Management Culture Change
Authors: Reza Davoodi, Hamid Abbasi, Heidar Norouzi, Gholamabbas Alipourian
Abstract:
A dynamic model shaping a process of knowledge management (KM) culture change is suggested. It is aimed at providing effective KM of employees for obtaining desired results in an organization. The essential requirements for obtaining KM culture change are determined. The proposed model realizes these requirements. Dynamics of the model are expressed by a change of its parameters. It is adjusted to the dynamic process of KM culture change. Building the model includes elaboration and integration of interconnected components. The “Result” is a central component of the model. This component determines a desired organizational goal and possible directions of its attainment. The “Confront” component engenders constructive confrontation in an organization. For this reason, the employees are prompted toward KM culture change with the purpose of attaining the desired result. The “Assess” component realizes complex assessments of employee proposals by management and peers. The proposals are directed towards attaining the desired result in an organization. The “Reward” component sets the order of assigning rewards to employees based on the assessments of their proposals.
Keywords: knowledge management, organizational culture change, employee, result
Procedia PDF Downloads 409
30514 Cationic Solid Lipid Nanoparticles Conjugated with Anti-Melanotransferrin and Apolipoprotein E for Delivering Doxorubicin to U87MG Cells
Authors: Yung-Chih Kuo, Yung-I Lou
Abstract:
Cationic solid lipid nanoparticles (CSLNs) with anti-melanotransferrin (AMT) and apolipoprotein E (ApoE) were used to carry antimitotic doxorubicin (Dox) across the blood–brain barrier (BBB) for glioblastoma multiforme (GBM) treatment. Dox-loaded CSLNs were prepared in microemulsion, grafted covalently with AMT and ApoE, and applied to human brain microvascular endothelial cells (HBMECs), human astrocytes, and U87MG cells. Experimental results revealed that an increase in the weight percentage of stearyl amine (SA) from 0% to 20% increased the size of AMT-ApoE-Dox-CSLNs. In addition, an increase in the stirring rate from 150 rpm to 450 rpm decreased the size of AMT-ApoE-Dox-CSLNs. An increase in the weight percentage of SA from 0% to 20% enhanced the zeta potential of AMT-ApoE-Dox-CSLNs. Moreover, an increase in the stirring rate from 150 rpm to 450 rpm reduced the zeta potential of AMT-ApoE-Dox-CSLNs. AMT-ApoE-Dox-CSLNs exhibited a spheroid-like geometry, a minor irregular boundary deviating from spheroid, and a somewhat distorted surface with a few zigzags and sharp angles. The encapsulation efficiency of Dox in CSLNs decreased with increasing weight percentage of Dox and the order in the encapsulation efficiency of Dox was 10% SA > 20% SA > 0% SA. However, the reverse order was true for the release rate of Dox, suggesting that AMT-ApoE-Dox-CSLNs containing 10% SA had better-sustained release characteristics. An increase in the concentration of AMT from 2.5 to 7.5 μg/mL slightly decreased the grafting efficiency of AMT and an increase in that from 7.5 to 10 μg/mL significantly decreased the grafting efficiency. Furthermore, an increase in the concentration of ApoE from 2.5 to 5 μg/mL slightly reduced the grafting efficiency of ApoE and an increase in that from 5 to 10 μg/mL significantly reduced the grafting efficiency. 
Also, incorporation of 10 μg/mL of ApoE in AMT-ApoE-Dox-CSLNs could slightly reduce the transendothelial electrical resistance (TEER) and increase the permeability of propidium iodide (PI). AMT-ApoE-Dox-CSLNs at 10 μg/mL of AMT and 5-10 μg/mL of ApoE could significantly enhance the permeability of Dox across the BBB. AMT-ApoE-Dox-CSLNs did not induce serious cytotoxicity to HBMECs. The viability of HBMECs was in the following order: AMT-ApoE-Dox-CSLNs = AMT-Dox-CSLNs = Dox-CSLNs > Dox. The order in the efficacy of inhibiting U87MG cells was AMT-ApoE-Dox-CSLNs > AMT-Dox-CSLNs > Dox-CSLNs > Dox. Surface modification with AMT and ApoE could promote the delivery of AMT-ApoE-Dox-CSLNs across the BBB via the melanotransferrin and low-density lipoprotein receptors. Thus, AMT-ApoE-Dox-CSLNs have appropriate physicochemical properties and can be a potential colloidal delivery system for brain tumor chemotherapy.
Keywords: anti-melanotransferrin, apolipoprotein E, cationic catanionic solid lipid nanoparticle, doxorubicin, U87MG cells
Procedia PDF Downloads 285
30513 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows
Authors: J. P. Panda, K. Sasmal, H. V. Warrior
Abstract:
Eddy viscosity models in turbulence modeling can be broadly classified as linear and nonlinear models. Linear formulations are simple and require less computational resources, but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all the possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-epsilon model. The proposed model can be easily incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for prediction of flow in estuaries and lakes, where depth is comparatively small. The model predictions of marine turbulence and other related data (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.
Keywords: eddy viscosity, turbulence modeling, GOTM, CFD
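The abstract does not reproduce the constitutive relation itself; a generic quadratic (nonlinear) eddy viscosity closure of the kind described, retaining all terms quadratic in the mean velocity gradients (notation and coefficients assumed here for illustration, not taken from the paper), can be sketched as:

```latex
\overline{u_i'u_j'} - \tfrac{2}{3}k\,\delta_{ij}
  = -\,C_\mu \frac{k^2}{\varepsilon} S_{ij}
  + C_1 \frac{k^3}{\varepsilon^2}\Big(S_{ik}S_{kj} - \tfrac{1}{3}S_{kl}S_{kl}\,\delta_{ij}\Big)
  + C_2 \frac{k^3}{\varepsilon^2}\big(W_{ik}S_{kj} + W_{jk}S_{ki}\big)
  + C_3 \frac{k^3}{\varepsilon^2}\Big(W_{ik}W_{jk} - \tfrac{1}{3}W_{kl}W_{kl}\,\delta_{ij}\Big)
```

where $S_{ij} = \tfrac{1}{2}(\partial u_i/\partial x_j + \partial u_j/\partial x_i)$ is the mean strain rate, $W_{ij} = \tfrac{1}{2}(\partial u_i/\partial x_j - \partial u_j/\partial x_i)$ is the mean rotation tensor, and $C_\mu, C_1, C_2, C_3$ are model constants. The paper's simplification retains only the vertical velocity gradients; its specific coefficients are not given in the abstract.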
Procedia PDF Downloads 202
30512 Artificial Intelligence Methods for Returns Expectations in Financial Markets
Authors: Yosra Mefteh Rekik, Younes Boujelbene
Abstract:
We introduce in this paper a new conceptual model representing stock market dynamics. This model is essentially based on the cognitive behavior of intelligent investors. In order to validate our model, we build an artificial stock market simulation based on agent-oriented methodologies. The proposed simulator is composed of a market supervisor agent, essentially responsible for executing transactions via an order book, and various kinds of investor agents depending on their profiles. The purpose of this simulation is to understand the influence of an investor's psychological character and neighborhood on its decision-making, and their impact on the market in terms of price fluctuations. The difficulty of the prediction is due to several features: the complexity, non-linearity, and dynamism of the financial market system, as well as investor psychology. Artificial Neural Networks take on the role of traders, who form their future return expectations and place orders based on those expectations. The results of intensive analysis indicate that the existence of agents having heterogeneous beliefs and preferences has provided a better understanding of price dynamics in the financial market.
Keywords: artificial intelligence methods, artificial stock market, behavioral modeling, multi-agent based simulation
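As a rough illustration of the mechanism described (heterogeneous return expectations feeding an order flow that moves the price), a minimal sketch might look as follows. All names and parameters here are invented for illustration; the paper's simulator uses neural-network traders and a full order book rather than random beliefs and a simple price-impact rule.

```python
import random

def simulate_market(n_agents=50, steps=100, price0=100.0, impact=0.01, seed=42):
    """Toy agent-based market: each agent forms a noisy expectation of the
    next-period return and submits a buy (+1) or sell (-1) order; the price
    then moves in proportion to the aggregate order imbalance."""
    rng = random.Random(seed)
    # Heterogeneous beliefs: each agent carries a persistent optimism bias.
    bias = [rng.gauss(0.0, 0.02) for _ in range(n_agents)]
    price, history = price0, [price0]
    for _ in range(steps):
        orders = 0
        for b in bias:
            expected_return = b + rng.gauss(0.0, 0.01)  # belief + noise
            orders += 1 if expected_return > 0 else -1
        price *= 1.0 + impact * orders / n_agents  # price impact of imbalance
        history.append(price)
    return history

prices = simulate_market()
```

Even this toy version exhibits the point made in the abstract: the dispersion of the agents' biases, not any single agent, shapes the resulting price path.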
Procedia PDF Downloads 446
30511 Design and Evaluation of a Corrective Knee Orthosis for Hyperextension
Authors: Valentina Narvaez Gaitan, Paula K. Rodriguez Ramirez, Derian D. Espinosa
Abstract:
Corrective orthoses are of great importance in orthopedic treatment, providing assistance in improving mobility and stability in order to improve the quality of life of different patients. The corrective orthosis studied in this article can correct deformities, reduce pain, and improve the ability to perform daily activities. This work describes the design and evaluation of a corrective orthosis for knee hyperextension. This orthosis is capable of generating a progressive and variable alignment of the joint, limiting the range of motion according to medical criteria. The main objective was to design a corrective knee orthosis capable of progressively correcting knee hyperextension until the joint returns to its natural angle, with greater economic affordability and adjustable size. The limiting mechanism is based on a goniometer to determine the desired angles. The orthosis was made of acrylic to reduce costs and maintenance; neoprene is also used for comfortable contact; additionally, Velcro was used in order to adjust the orthosis for various sizes. Simulations of static and fatigue analysis of the mechanism were performed to verify its resistance and durability under normal conditions. A biomechanical gait study was carried out on 10 healthy subjects, first without the orthosis and then with the orthosis limiting their knee extension capacity in a normal gait cycle, to observe the efficiency of the proposed system. In the results obtained, the knee angle curves show that the maximum extension angle was the angle established by the orthosis, demonstrating the efficiency of the proposed design for different leg sizes.
Keywords: biomechanical study, corrective orthosis, efficiency, goniometer, knee hyperextension
Procedia PDF Downloads 81
30510 Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage
Authors: L. Ramirez, E. Guillén, J. Sánchez
Abstract:
Some of the main goals of telecare, such as monitoring, treatment, and telediagnosis, are achieved through the integration of applications with specific appliances. In order to achieve a coherent model that integrates software, hardware, and healthcare systems, different telehealth models with the Internet of Things (IoT), cloud computing, artificial intelligence, etc., have been implemented, and their advantages are still under analysis. In this paper, we propose an integrated model based on an IoT architecture and a cloud computing telehealth center. An analytics module is presented as a solution to support an ideal diagnosis of some diseases. Specific features are then compared with the recently deployed conventional models in telemedicine. The main advantage of this model is the ability to control the security and privacy of patient information and the optimization of processing and acquiring clinical parameters according to technical characteristics.
Keywords: analytics, telemedicine, internet of things, cloud computing
Procedia PDF Downloads 325
30509 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine
Authors: Anas Rao, Hao Duan, Fanhua Ma
Abstract:
The dependency on fossil fuels can be minimized by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads, and engine speeds. Afterwards, the simulation has been accomplished with a quasi-dimensional combustion model of the HCNG engine. In order to investigate the NOx emission, the NO mechanism has been coupled to the quasi-dimensional combustion model of the HCNG engine. Three NOx mechanisms, namely the thermal NO, prompt NO, and N2O mechanisms, have been used to predict NOx emission. For validation purposes, the NO curve has been transformed into NO packets based on a temperature difference of 100 K for the lean-burn and 60 K for the stoichiometric condition, while the width of a packet has been taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine has been divided into three zones, with each zone equal to the product of the summation of NO packets and space. In order to check the accuracy of the model, the percentage error of the NOx emission has been evaluated, and it lies in the range of ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism has been evaluated.
Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet
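The thermal NO route referred to above is conventionally described by the extended Zeldovich mechanism:

```latex
\mathrm{N_2 + O \rightleftharpoons NO + N}, \qquad
\mathrm{N + O_2 \rightleftharpoons NO + O}, \qquad
\mathrm{N + OH \rightleftharpoons NO + H}
```

The rate constants and the exact forms of the prompt NO and N2O submechanisms used in the paper are not given in the abstract.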
Procedia PDF Downloads 252
30508 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment in which the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total items delivered late, and the total rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above-mentioned stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. At the end, we explore the impact of the stochastic parameters on the given solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters have greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection
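The scalarization step can be illustrated with a small sketch: a minimum-deviation-style scoring function collapses the objective values into one scalar by summing weighted relative deviations from their ideal values. This is a generic illustration of the idea only; the paper's exact scalarization, weights, and DCP objectives are not specified in the abstract.

```python
def minimum_deviation(objectives, ideals, weights=None):
    """Collapse several objective values into one scalar by summing the
    weighted relative deviations from their ideal (best attainable) values.
    Lower is better; a solution matching every ideal scores 0."""
    if weights is None:
        weights = [1.0] * len(objectives)
    return sum(w * abs(f - f_star) / abs(f_star)
               for w, f, f_star in zip(weights, objectives, ideals))

# Hypothetical example: cost, late-delivery %, and rejection % of a candidate
# order allocation, compared against their ideal values.
score = minimum_deviation([120.0, 6.0, 3.0], [100.0, 5.0, 2.0])
```

A genetic algorithm can then use such a score (estimated by simulation when the parameters are stochastic) directly as its fitness function.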
Procedia PDF Downloads 256
30507 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks
Authors: Heeba A. Gurku
Abstract:
Introduction: Cone Beam CT (CBCT) images play an integral part in the proper positioning of cancer patients undergoing radiation therapy treatment, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients having limited-view images were included in the study). Cycle Generative Adversarial Networks (GAN) and its variant Attention Guided Generative Adversarial Networks (AGGAN) were used to generate the synthetic CTs. Models were evaluated visually and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), to compare the synthetic CT and original CT images. Results: For the pancreatic dataset with limited-view CBCT images, our study showed that with the Cycle GAN model, MAE, RMSE, and PSNR improved from 12.57 to 8.49, 20.94 to 15.29, and 21.85 to 24.63, respectively, but structural similarity only marginally increased from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full-view CBCT images, Cycle GAN was able to reduce MAE significantly from 89.44 to 15.11, and AGGAN was able to reduce it to 19.77. Similarly, RMSE was decreased from 92.68 to 23.50 by Cycle GAN and to 29.02 by AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with Cycle GAN, respectively, while with AGGAN SSIM increased to 0.52 and PSNR increased to 19.31. In both datasets, the GAN models were able to reduce artifacts and noise and produce better resolution and contrast enhancement.
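Three of the four evaluation metrics above have simple closed forms and can be sketched directly (SSIM requires local windowed statistics and is omitted from this sketch). The function below is illustrative, assuming 8-bit intensities and flat pixel lists; the paper's preprocessing and intensity range may differ.

```python
import math

def image_metrics(ct, synth, max_val=255.0):
    """MAE, RMSE and PSNR between a reference CT and a synthetic CT,
    both given as flat sequences of pixel intensities."""
    n = len(ct)
    mae = sum(abs(a - b) for a, b in zip(ct, synth)) / n
    mse = sum((a - b) ** 2 for a, b in zip(ct, synth)) / n
    rmse = math.sqrt(mse)
    # PSNR grows as the error shrinks, so higher is better.
    psnr = float('inf') if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
    return mae, rmse, psnr

mae, rmse, psnr = image_metrics([0, 128, 255], [2, 126, 251])
```

Note the direction of each metric: MAE and RMSE should decrease and PSNR should increase as synthetic CT quality improves, which matches the trends reported in the Results.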
Conclusion and Recommendation: Both Cycle GAN and AGGAN were able to significantly reduce MAE and RMSE and improve PSNR in both datasets. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
Keywords: CT images, CBCT images, cycle GAN, AGGAN
Procedia PDF Downloads 84
30506 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: anomaly detection, autoencoder, data centers, deep learning
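The F1-score used for the comparison above is the harmonic mean of precision and recall over the binary anomaly labels. A minimal sketch of the computation (independent of any particular detector):

```python
def f1_score(y_true, y_pred):
    """F1-score for binary anomaly labels (1 = anomaly), as used to compare
    detected anomalies against a recorded incident history."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # fraction of alarms that were real
    recall = tp / (tp + fn)      # fraction of real anomalies detected
    return 2 * precision * recall / (precision + recall)

score = f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

Because anomalies are rare, F1 is a more informative summary than accuracy here: a detector that never fires would score near-perfect accuracy but an F1 of 0.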
Procedia PDF Downloads 194
30505 Development of a Numerical Model to Predict Wear in Grouted Connections for Offshore Wind Turbine Generators
Authors: Paul Dallyn, Ashraf El-Hamalawi, Alessandro Palmeri, Bob Knight
Abstract:
In order to better understand the long-term implications of the grout wear failure mode in large-diameter plain-sided grouted connections, a numerical model has been developed and calibrated that can take advantage of existing operational plant data to predict the wear accumulation for the actual load conditions experienced over a given period, thus limiting the need for expensive monitoring systems. This model has been derived and calibrated based on site structural condition monitoring (SCM) data and supervisory control and data acquisition (SCADA) data for two operational wind turbine generator substructures afflicted with this challenge, along with experimentally derived wear rates.
Keywords: grouted connection, numerical model, offshore structure, wear, wind energy
Procedia PDF Downloads 456
30504 Apps Reduce the Cost of Construction
Authors: Ali Mohammadi
Abstract:
In every construction project, the cost is the most important concern for employers and contractors, and they always try to reduce costs so that they can compete in the market; therefore, they estimate the cost of construction before starting their activities. The costs can generally be divided into four parts: the materials used, the equipment used, the manpower required, and the time required. In this article, we discuss the three items of equipment, manpower, and time and examine how the use of apps can reduce the cost of construction, an area that, for various reasons, has received less attention in the field of app design. Because we intend to use these apps in construction, where they are used by engineers and experts, we define them as engineering apps, since the idea behind their design must come from an engineer who works in that field. Moreover, considering that most engineers become familiar with programming during their studies, they can design the apps they need using simple programming software.
Keywords: layout, as-built, monitoring, maps
Procedia PDF Downloads 67
30503 Association Rules Mining and NOSQL Oriented Document in Big Data
Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub
Abstract:
Big Data refers to the recent technology for manipulating voluminous and unstructured data sets from multiple sources, and NoSQL has emerged to handle the problem of unstructured data. Association rules mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases. The algorithm for finding association dependencies is well suited to MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
Keywords: Apriori, association rules mining, Big Data, data mining, Hadoop, MapReduce, MongoDB, NoSQL
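The frequent-itemset generation that the paper distributes with MapReduce over MongoDB is, at its core, the level-wise Apriori procedure. A plain single-machine sketch of that baseline (illustrative only; the paper's distributed variant is not reproduced here):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: grow frequent itemsets one item at a time,
    pruning any candidate whose support falls below min_support."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    frequent, k_sets = {}, [frozenset([i]) for i in items]
    while k_sets:
        level = {s: support(s) for s in k_sets if support(s) >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

freq = apriori([{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}], min_support=0.6)
```

The support-counting loop is the expensive part, which is why it maps naturally onto MapReduce: each mapper counts candidate occurrences in its shard of transactions and reducers sum the counts.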
Procedia PDF Downloads 163
30502 Error Correction Method for 2D Ultra-Wideband Indoor Wireless Positioning System Using Logarithmic Error Model
Authors: Phornpat Chewasoonthorn, Surat Kwanmuang
Abstract:
Indoor positioning technologies have evolved rapidly. They augment the Global Positioning System (GPS), which requires a line of sight to the sky, to track the location of people or objects. This study developed an error correction method for an indoor real-time location system (RTLS) based on an ultra-wideband (UWB) sensor from Decawave. Multiple stationary nodes (anchors) were installed throughout the workspace. The distance between stationary and moving nodes (tags) can be measured using a two-way-ranging (TWR) scheme. The results have shown that the uncorrected ranging error from the sensor system can be as large as 1 m. To reduce the ranging error and thus increase positioning accuracy, this study proposes an online correction algorithm using the Kalman filter. The results from experiments have shown that the system can reduce the ranging error down to 5 cm.
Keywords: indoor positioning, ultra-wideband, error correction, Kalman filter
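The core of an online Kalman correction for a slowly varying range can be sketched in scalar form. This is a generic illustration, not the paper's algorithm or Decawave's firmware: the state is the true range, the process noise `q` models tag motion, and the measurement noise `r` models TWR jitter.

```python
def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter smoothing noisy TWR range measurements.
    q: process noise variance, r: measurement noise variance."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                 # predict: uncertainty grows between updates
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x += k * (z - x)       # update state with the measurement residual
        p *= (1 - k)           # shrink uncertainty after the update
        estimates.append(x)
    return estimates

est = kalman_1d([2.1, 1.9, 2.05, 1.95, 2.0], x0=2.0)
```

In a full 2D RTLS, one such filter (or a joint multivariate filter) runs per anchor-tag range before the ranges are fed to the position solver.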
Procedia PDF Downloads 160
30501 Comparative Life Cycle Analysis of Selected Modular Timber Construction and Assembly Typologies
Authors: Benjamin Goldsmith, Felix Heisel
Abstract:
The building industry must reduce its emissions in order to meet 2030 neutrality targets, and modular and/or offsite construction is seen as an alternative to conventional construction methods that could help achieve this goal. Modular construction has previously been shown to be less wasteful and to have a lower global warming potential (GWP). While many studies have investigated the life cycle impacts of modular and conventional construction, few have compared different types of modular assembly and construction to determine which offer the greatest environmental benefits over their whole life cycle. This study investigates three modular construction types (infill frame, core, and podium) in order to determine environmental impacts such as GWP as well as circularity indicators. The study will focus on the emissions of the production, construction, and end-of-life phases. The circularity of the various approaches will be taken into consideration in order to acknowledge the potential benefits of the ability to reuse and/or reclaim materials, products, and assemblies. The study will conduct hypothetical case studies for the three modular construction types and, in doing so, control the parameters of location, climate, program, and client. By looking in depth at the GWP of the beginning and end phases of the various simulated modular buildings, it will be possible to make suggestions on which type of construction has the lowest global warming potential.
Keywords: modular construction, offsite construction, life cycle analysis, global warming potential, environmental impact, circular economy
Procedia PDF Downloads 167
30500 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy
Authors: Erick Pruchnicki, Nikhil Padhye
Abstract:
Plates are important engineering structures that have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scalings between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads.
Following the analogy of recent efforts deploying Fourier-series expansion to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials
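The truncation described above can be pictured as a finite expansion of the displacement field through the thickness; using the Legendre polynomials named in the keywords, a schematic form (notation assumed here, not taken from the paper) is:

```latex
u_i(x_1, x_2, z) \;\approx\; \sum_{n=0}^{N} u_i^{(n)}(x_1, x_2)\, P_n\!\left(\frac{2z}{h}\right),
\qquad i = 1, 2, 3,
```

where $P_n$ are Legendre polynomials on $[-1,1]$, $h$ is the plate thickness, and the $u_i^{(n)}$ are mid-surface unknowns. Substituting this expansion into the potential energy and integrating through the thickness yields the truncated two-dimensional model; the convergence result concerns the limit $N \to \infty$.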
Procedia PDF Downloads 113
30499 Variability Management of Contextual Feature Model in Multi-Software Product Line
Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz
Abstract:
The Software Product Line (SPL) paradigm is used for the development of a family of software products that share common and variable features. The feature model is a domain artifact of SPL that consists of common and variable features with predefined relationships and constraints. Multiple SPLs, such as those for mobile phones and tablets, consist of a number of similar common and variable features. Reusability of common and variable features from the different domains of an SPL is a complex task due to the external relationships and constraints of features in the feature model. To increase the reusability of feature model resources from domain engineering, it is required to manage the commonality of features at the level of SPL application development. In this research, we have proposed an approach that combines multiple SPLs into a single domain and converts them to a common feature model. Extracting the common features from different feature models is more effective and lowers the cost and time to market of application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of the variation points and constraints. By using this approach, the reusability of features from multiple feature models can be increased. The impact of this research is to reduce development cost and time to market and increase the products of an SPL.
Keywords: software product line, feature model, variability management, multi-SPLs
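The three steps above can be sketched with a simplified set-based merge: features present in every model become common, the rest become variation points, and constraints are unioned. This is a deliberately flat illustration with invented data; real feature models carry tree structure and cross-tree constraint semantics that a full implementation would have to respect.

```python
def merge_feature_models(models):
    """Combine several SPL feature models into one common feature model:
    features shared by every model are common; the remainder are variation
    points; constraints from all models are collected together."""
    feature_sets = [set(m["features"]) for m in models]
    common = set.intersection(*feature_sets)
    variation_points = set.union(*feature_sets) - common
    constraints = {c for m in models for c in m.get("constraints", [])}
    return {"common": common, "variable": variation_points,
            "constraints": constraints}

# Hypothetical phone and tablet SPL fragments.
phone = {"features": {"call", "sms", "camera"},
         "constraints": [("camera", "requires", "storage")]}
tablet = {"features": {"call", "camera", "stylus"}, "constraints": []}
merged = merge_feature_models([phone, tablet])
```

Application engineering can then configure products against the merged model: common features are always included, while each variation point is resolved per product subject to the collected constraints.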
Procedia PDF Downloads 70
30498 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy
Authors: Ayoub. El Bourtali, Abdessamed.Najine, Amrou Moussa. Benmoussa
Abstract:
One of the main goals of river engineering is river training, which is defined as controlling and predicting the behavior of a river. It involves taking effective measures to eliminate all related risks and thus improve the river system. In some rivers, the riverbed continues to erode and degrade; therefore, equilibrium will never be reached. Generally, river geometric characteristics and riverbed erosion analysis are among the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is a secondary governing process in hydrodynamics, which has a major impact on the ecological chain and socio-economic processes. This study aims to integrate new computer technology that can analyze erosion and hydraulic problems through computer simulation and modeling. Choosing the right model remains a difficult and sensitive job for field engineers. This paper makes use of version 5.0.4 of the HEC-RAS model. The river section is adopted according to the gauged station and the proximity of the adjustment. In this work, we demonstrate how 2D hydraulic modeling helped clarify the design and provided visualizations of depths and velocities at the riverbanks and around advanced structures. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) 2D model was used to create a hydraulic study of the erosion model. The geometric data were generated from a 12.5-meter x 12.5-meter resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output also shows patterns of riverbank changes, which can help reduce problems caused by erosion.
Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment
Procedia PDF Downloads 191
30497 In Silico Modeling of Drugs' Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their lives is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially in women during lactation, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following Malone's classification: 1) drugs with M/P > 1, which are considered high risk, and 2) drugs with M/P < 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were then established for the prediction. The mentioned descriptors can predict penetration with acceptable accuracy. For the remaining compounds (N = 147, 158, 160, and 174 for models 1 to 4, respectively) of each model, binary logistic regression with SPSS 21 was performed to obtain a model predicting the penetration ratio of compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during lactation. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in milk fat. The pH of milk is acidic; because of this, basic compounds tend to concentrate in milk relative to plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which raises its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are eligible for that purpose as well.
Keywords: logistic regression, breastfeeding, descriptors, penetration
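A logistic model of the kind described can be sketched as follows. The coefficients below are invented for illustration (the paper's fitted model is not reproduced here); only the reported descriptor thresholds (HBA > 5, PSA > 90, TSA > 25 implying lower permeability) are used to orient the signs.

```python
import math

def logistic(z):
    """Standard logistic (sigmoid) function."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_high_risk(hba, psa, tsa):
    """Hypothetical probability that M/P > 1 (high risk).
    Negative coefficients encode the abstract's finding that larger
    HBA, PSA, and TSA values reduce predicted milk penetration."""
    z = 2.0 - 0.4 * (hba - 5) - 0.05 * (psa - 90) - 0.08 * (tsa - 25)
    return logistic(z)

# A small, weakly hydrogen-bonding compound scores as more permeable
# than a large, polar, hydrogen-bond-rich one.
p_small = predict_high_risk(hba=2, psa=40, tsa=20)
p_large = predict_high_risk(hba=9, psa=150, tsa=60)
```

In practice, SPSS (as in the paper) or any statistics package would estimate such coefficients from the 209-compound dataset rather than fixing them by hand.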
Procedia PDF Downloads 72
30496 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology whose main properties are a power-law degree distribution, a low clustering coefficient, and a short average distance. Modeling the web as a graph makes it possible to locate information quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all of the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
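The preferential-attachment mechanism named in the keywords, which produces the power-law degree distribution above, can be sketched minimally (a Barabási-Albert-style toy model, not the paper's own construction):

```python
import random

def preferential_attachment(n, seed=0):
    """Grow an undirected graph one node at a time; each new node adds
    one edge to an existing node chosen with probability proportional
    to its degree. Returns a dict of node -> degree."""
    rng = random.Random(seed)
    # Repeated-endpoints trick: picking uniformly from this list is
    # equivalent to picking a node with probability proportional to degree.
    endpoints = [0, 1]          # start from a single edge 0-1
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        target = rng.choice(endpoints)
        degree[new] = 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

deg = preferential_attachment(1000)
hubs = sum(1 for d in deg.values() if d >= 10)   # a few highly linked "hubs"
```

Because early nodes keep attracting new links, a handful of hubs emerge while most nodes keep degree 1, the signature of a power-law tail.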
Procedia PDF Downloads 272
30495 Study of the Phenomenon Nature of Order and Disorder in BaMn(Fe/V)F7 Fluoride Glass by the Hybrid Reverse Monte Carlo Method
Authors: Sidi Mohamed Mesli, Mohamed Habchi, Mohamed Kotbi, Rafik Benallal, Abdelali Derouiche
Abstract:
Fluoride glasses with a nominal composition of BaMnMF7 (M = Fe, V, assuming isomorphous replacement) have been structurally modelled through the simultaneous simulation of their neutron diffraction patterns by a reverse Monte Carlo (RMC) model and by the Rietveld for disordered materials (RDM) method. The model is consistent with an expected network of interconnected [MF6] polyhedra. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm that introduces an energy penalty term into the acceptance criterion, known as the hybrid reverse Monte Carlo (HRMC) method. The idea of this paper is to apply the HRMC method to the title glasses in order to study the nature of order and disorder by displaying and discussing the partial pair distribution functions (PDFs) g(r). We suggest that this method can be used to describe the average correlations between the components of fluoride glasses or similar systems.
Keywords: fluoride glasses, RMC simulation, neutron scattering, hybrid RMC simulation, Lennard-Jones potential, partial pair distribution functions
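The HRMC acceptance step described above can be sketched as follows. This is an assumed, generic form (the paper's exact cost weighting is not reproduced): the usual RMC chi-squared criterion is augmented with a Lennard-Jones energy penalty, so moves that create unphysical close contacts are penalized even when they improve the fit to the diffraction data.

```python
import math
import random

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair energy in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def hrmc_accept(delta_chi2, delta_energy, weight=1.0, rng=random.random):
    """Metropolis-style criterion on the combined cost chi^2/2 + w*E:
    always accept if the combined cost decreases, otherwise accept
    with probability exp(-cost)."""
    cost = 0.5 * delta_chi2 + weight * delta_energy
    if cost <= 0.0:
        return True
    return rng() < math.exp(-cost)

# A move that worsens the fit slightly but relieves a severe atomic
# overlap (large negative LJ energy change) is accepted outright.
delta_e = lj_energy(1.12) - lj_energy(0.85)   # moving a close pair apart
accepted = hrmc_accept(delta_chi2=0.3, delta_energy=delta_e)
```

It is exactly this energy term that suppresses the artificial satellite peaks: configurations that fit the data by placing atoms unphysically close become exponentially unlikely.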
Procedia PDF Downloads 537
30494 Model Driven Architecture Methodologies: A Review
Authors: Arslan Murtaza
Abstract:
Model Driven Architecture (MDA) is a technique introduced by the OMG (Object Management Group) for software development in which different models are proposed and then converted into code. The main idea is to specify the system using a PIM (Platform Independent Model), transform it into a PSM (Platform Specific Model), and then convert it into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g., CIM, PIM, and PSM), and an evaluation of MDA-based methodologies.
Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies
Procedia PDF Downloads 459
30493 Artificial Intelligence and Law
Authors: Mehrnoosh Abouzari, Shahrokh Shahraei
Abstract:
With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are steadily increasing their presence in various fields of human life: industry, financial transactions, marketing, manufacturing, services, politics, economics, and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal profession, traces of artificial intelligence can be seen in various areas of law. Examples include estimating the capabilities of judicial robotics, intelligent judicial decision-making systems, intelligent adjustment of defense and prosecution strategies, and consolidating the different and scattered laws applicable to each case in order to achieve judicial coherence, reduce divergence of opinion, and shorten prolonged hearings and the resulting discontent with the current legal system; designing rule-based, case-based, and knowledge-based systems are all efforts to apply AI in law. In this article, we identify the ways in which AI is applied to laws and regulations, identify the dominant concerns in this area, and outline the relationship between these two fields, in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that using artificial intelligence in the three branches of legislative, judicial, and executive power can be very effective for government decision-making and smart governance, helping to build smart communities across human and geographical boundaries and to approach humanity's long-held dream: a global village free of violence, partiality, and human error.
Therefore, in this article, we analyze the dimensions of how artificial intelligence may be used in the three legislative, judicial, and executive branches of government in order to realize its application.
Keywords: artificial intelligence, law, intelligent system, judge
Procedia PDF Downloads 119
30492 Somatosensory Detection Wristbands: Applied Research for Babies
Authors: Chang Ting, Wu Chun Kuan
Abstract:
Wireless sensing technology is increasingly developed, and more and more products incorporate wireless sensing in order to prevent caregivers from overlooking infants in poor physiological condition and thereby reduce risks to infants. In view of this, this study focuses on applied research on somatosensory detection wristbands for babies and explores, through observation and a literature review, design criteria suitable for baby products as well as the advantages and disadvantages of existing products. The study concentrates on research and product design for infants aged 0-2 years, provides two to three new design concepts and products, identifies weaknesses through use of the actual products, and further provides a design reference for future baby wristbands.
Keywords: infants, observation, design criteria, wireless sensing
Procedia PDF Downloads 311
30491 Control of an SIR Model for Basic Reproduction Number Regulation
Authors: Enrique Barbieri
Abstract:
The basic disease-spread model described by three states denoting the susceptible (S), infectious (I), and removed (recovered and deceased) (R) sub-groups of the total population N, or SIR model, has been considered. Heuristic mitigating action profiles of the pharmaceutical and non-pharmaceutical types may be developed in a control design setting for the purpose of reducing the transmission rate or improving the recovery rate parameters in the model. Even though the transmission and recovery rates are not control inputs in the traditional sense, a linear observer and feedback controller can be tuned to generate an asymptotic estimate of the transmission rate for a linearized, discrete-time version of the SIR model. Then, a set of mitigating actions is suggested to steer the basic reproduction number toward unity, in which case the disease does not spread, and the infected population state does not suffer from multiple waves. The special case of piecewise constant transmission rate is described and applied to a seventh-order SEIQRDP model, which segments the population into four additional states. The offline simulations in discrete time may be used to produce heuristic policies implemented by public health and government organizations.
Keywords: control of SIR, observer, SEIQRDP, disease spread
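A minimal discrete-time SIR simulation of the kind linearized in the abstract can illustrate the role of the basic reproduction number. The parameter values below are illustrative, not taken from the paper: with R0 = beta/gamma < 1 the infected fraction decays, while R0 > 1 produces an initial wave.

```python
def simulate_sir(beta, gamma, i0=0.01, steps=200):
    """Discrete-time SIR on fractions of a normalized population N = 1.
    Returns the trajectory of the infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [i]
    for _ in range(steps):
        new_inf = beta * s * i      # new infections this step
        new_rec = gamma * i         # recoveries/removals this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append(i)
    return history

decaying = simulate_sir(beta=0.1, gamma=0.2)   # R0 = 0.5 < 1: dies out
growing  = simulate_sir(beta=0.4, gamma=0.2)   # R0 = 2.0 > 1: initial wave
```

Steering R0 toward unity, as the paper proposes, amounts to choosing mitigating actions that move the effective beta/gamma of the left trajectory's regime.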
Procedia PDF Downloads 112
30490 Social Business Model: Leveraging Business and Social Value of Social Enterprises
Authors: Miriam Borchardt, Agata M. Ritter, Macaliston G. da Silva, Mauricio N. de Carvalho, Giancarlo M. Pereira
Abstract:
This paper aims to analyze the barriers faced by social enterprises and, based on that, to propose a social business model framework that helps them leverage their businesses and the social value they deliver. A business model for social enterprises should amplify value perception, including social value for the beneficiaries, while generating enough profit to scale the business. Most beneficiaries of this social value are people at the base of the economic pyramid (BOP) or people with specific needs. Because of this, products and services should be affordable to consumers while solving the social needs of the beneficiaries. Developing products and services with social value requires close relationships between social enterprises and universities, public institutions, accelerators, and investors. Despite being focused on social value, contributing to the beneficiaries' quality of life, and supporting governments that cannot properly guarantee public services and infrastructure to the BOP, social enterprises face many barriers to scaling their businesses. This is a work in progress in which five micro- and small-sized social enterprises in Brazil have been studied: (i) one has developed a kit for cervical cancer detection that allows BOP women to collect their own sample and deliver it to a laboratory for US$1.00; (ii) another has developed lactose-free products that are about 70% cheaper than the traditional brands on the market; (iii) the third has developed prostheses and orthoses to meet needs that the public health system has not served efficiently; (iv) the fourth produces and commercializes menstrual panties, aiming to reduce the consumption of disposable ones while saving consumers money; (v) the fifth develops and commercializes clothes made from fabric waste in partnership with BOP artisans.
The preliminary results indicate that the main barriers are related to the public system's failure to recognize that public money could be saved by buying products from these enterprises instead of from multinational pharmaceutical companies; to the traditional distribution system (e.g., pharmacies), which avoids these products because of low or non-existent profit; to the difficulty of buying raw material in small quantities; to attracting investment from investors; and to cultural barriers and taboos. Interesting cost-reduction strategies have been observed: some enterprises have focused on simplifying products, while others have invested in partnerships with local producers and have developed their own machines, focusing on process efficiency to attract investment.
Keywords: base of the pyramid, business model, social business, social business model, social enterprises
Procedia PDF Downloads 102
30489 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion
Authors: Adnan A. Y. Mustafa
Abstract:
Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar.
Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping
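The two distances contrasted above can be sketched as follows. This is an assumed, simplified formulation (not the PMMBI paper's exact definitions): random pixel positions are sampled and the fraction of mismatches estimated; the gamma-style distance treats an image and its inverse as the same scene, while the modified distance does not.

```python
import random

def sampled_mismatch(img_a, img_b, samples=500, rng=None):
    """Estimate pixel disagreement between two equal-size binary images
    (lists of 0/1 rows) by random pixel mapping, avoiding a full scan."""
    rng = rng or random.Random(0)
    rows, cols = len(img_a), len(img_a[0])
    hits = sum(
        img_a[r][c] != img_b[r][c]
        for r, c in ((rng.randrange(rows), rng.randrange(cols))
                     for _ in range(samples)))
    return hits / samples

def gamma_distance(img_a, img_b, **kw):
    d = sampled_mismatch(img_a, img_b, **kw)
    return min(d, 1.0 - d)      # an image and its inverse count as the same

def modified_distance(img_a, img_b, **kw):
    return sampled_mismatch(img_a, img_b, **kw)  # inverse counts as dissimilar

scene = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
inverse = [[1 - p for p in row] for row in scene]
```

Because only a fixed number of pixels is sampled, the cost is independent of image size, which is the size-invariance property the abstract mentions.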
Procedia PDF Downloads 155
30488 Green Function and Eshelby Tensor Based on Mindlin's 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case
Authors: A. Selmi, A. Bisharat
Abstract:
Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length-scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length-scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues based on classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are explicitly obtained. Unlike the classical Eshelby tensor, the results show that the components of the new Eshelby tensor vary with position and with the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.
Keywords: Eshelby tensor, Eshelby-like tensor, Green's function, Mindlin's 2nd gradient model, spherical inclusion
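For orientation, the classical part to which the derived tensor reduces is the standard uniform interior Eshelby tensor of classical elasticity for a spherical inclusion in an isotropic matrix with Poisson's ratio ν (quoted here as a known reference result, not from the paper itself):

```latex
S_{ijkl} = \frac{5\nu - 1}{15(1-\nu)}\,\delta_{ij}\delta_{kl}
         + \frac{4 - 5\nu}{15(1-\nu)}\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right)
```

The gradient part derived in the paper adds position-dependent corrections to this constant tensor, controlled by the two length-scale parameters, which is what produces the size effect.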
Procedia PDF Downloads 271
30487 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives
Authors: Zou Xuan, Liu Hongchen
Abstract:
The ideal dentin desensitizing agent should not only have good biological safety, a simple clinical operation mode, and a superior treatment effect, but should also have a durable effect that resists changes in oral temperature and mechanical abrasion, so as to achieve persistent desensitization. Also, when using a desensitizing agent to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention but must also understand its effect on the bond strength of dentin adhesives. Various desensitizers and dentin adhesives with different chemical and physical properties are used in clinical treatment, and whether the use of a desensitizing agent affects the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effects and durability of three different dentin desensitizers on exposed tubules, and to evaluate their sealing effects on post-operative dentin together with the resulting bond strength of dentin adhesives. The results of this study could provide important references for the clinical use of dentin desensitizing agents. 1) For the three desensitizers, the hypersensitive dentin model was built to evaluate their sealing effects on exposed tubules by SEM observation and dentin permeability analysis; all of them significantly reduced dentin permeability. 2) Test specimens of the three desensitizer groups were subjected to aging treatment with 5000 thermal cycles and toothbrush abrasion, after which dentin permeability was measured to evaluate the durability of the tubule seal; the sealing durability differed among the three groups. 3) The post-operative dentin model was built to evaluate the sealing effects of the three desensitizers on post-operative dentin by SEM and methylene blue staining;
all three desensitizers significantly reduced dentin permeability. 4) The influence of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives was evaluated by micro-tensile bond strength testing and bond interface morphology observation. The dentin bond strength of the "Green or" group was significantly lower than that of the other two groups (P<0.05).
Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength
Procedia PDF Downloads 394