Search results for: Simulated annealing
Paper Count: 1038


18 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate between various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short reads obtained from fragmented DNA sequences and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further development, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
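As a rough illustration of steps (i), (ii) and (iv), the sketch below tokenizes reads into k-mers, averages k-mer vectors into read embeddings, and applies attention-weighted multiple-instance pooling. It is not the authors' metagenome2vec code: the embedding matrix is random rather than learned, and the genome-identification step (iii) is omitted.

```python
# Illustrative sketch (not the authors' code): k-mer tokenization, read
# embeddings, and attention-based multiple-instance pooling with NumPy.
import numpy as np

K, DIM = 4, 16                      # k-mer length and embedding size (assumed)
rng = np.random.default_rng(0)

def kmers(read, k=K):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Step (i): build a k-mer vocabulary from the reads and assign embeddings.
reads = ["ACGTACGTGGTA", "TTGACCGTAACG", "GGCATCGTAACC"]
vocab = {w: i for i, w in enumerate(sorted({w for r in reads for w in kmers(r)}))}
E = rng.normal(size=(len(vocab), DIM))          # stand-in for learned embeddings

# Step (ii): a read embedding as the mean of its k-mer embeddings.
def embed_read(read):
    return E[[vocab[w] for w in kmers(read)]].mean(axis=0)

X = np.stack([embed_read(r) for r in reads])    # one "instance" per read

# Step (iv): attention-weighted pooling (multiple instance learning) and a
# linear phenotype score; the weights expose each instance's influence.
w_att, w_clf = rng.normal(size=DIM), rng.normal(size=DIM)
scores = X @ w_att
alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights
bag = alpha @ X                                  # patient-level representation
logit = bag @ w_clf
print("attention weights:", np.round(alpha, 3), "phenotype logit:", round(float(logit), 3))
```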

Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.

17 Corrosion Study of Magnetically Driven Components in Spinal Implants by Immersion Testing in Simulated Body Fluids

Authors: Benjawan Saengwichian, Alasdair E. Charles, Philip J. Hyde

Abstract:

Magnetically controlled growing rods (MCGRs) have been used to stabilise and correct spinal curvature in children to support non-invasive scoliosis adjustment. Although the encapsulated driving components are intended to be isolated from body fluid contact, in vivo corrosion was observed on these components due to damage to the sealing mechanism. Consequently, a corrosion circuit is created with the body fluids, resulting in malfunction of the lengthening mechanism. In particular, the chloride ions in blood plasma or cerebrospinal fluid (CSF) may corrode the MCGR alloys, possibly resulting in metal ion release in long-term use. However, there are no data available on the corrosion resistance of spinal implant alloys in CSF. In this study, an in vitro immersion configuration was designed to simulate in vivo corrosion of 440C SS-Ti6Al4V couples. The 440C stainless steel (SS) was heat-treated to investigate the effect of tempering temperature on intergranular corrosion (IGC), while crevice and galvanic corrosion were studied by limiting the clearance of the dissimilar couples. Tests were carried out in a neutral artificial cerebrospinal fluid (ACSF) and phosphate-buffered saline (PBS) under aeration and deaeration for 2 months. The composition of the passive films and metal ion release were analysed. The effects of galvanic coupling, pH, dissolved oxygen and anion species on corrosion rates and corrosion mechanisms are discussed based on quantitative and qualitative measurements. The results suggest that ACSF is more aggressive than PBS due to the combination of aggressive chloride and sulphate anions, while phosphate in PBS acts as an inhibitor to delay corrosion. The presence of vivianite on the SS surface in PBS lowered the corrosion rate (CR) by more than a factor of 5 under aeration and nearly a factor of 2 under deaeration, compared with ACSF. The CR of 440C depends on the passive film properties, which vary with tempering temperature and anion species. Although the CR of Ti6Al4V is insignificant, it tends to release more Ti ions in deaerated ACSF than under aeration, about 6 µg/L. The crevice-like design appears to have more effect on macroscopic corrosion than the dissimilar coupling itself, whereas IGC is observed predominantly on the sensitized microstructure.

Keywords: Cerebrospinal fluid, crevice corrosion, intergranular corrosion, magnetically controlled growing rods.

16 Water Quality Trading with Equitable Total Maximum Daily Loads

Authors: S. Jamshidi, E. Feizi Ashtiani, M. Ardestani

Abstract:

Waste Load Allocation (WLA) strategies usually aim to find economical policies for water resource management. Water quality trading (WQT) is an approach that uses a discharge permit market to reduce total environmental protection costs. This primarily requires assigning discharge limits known as total maximum daily loads (TMDLs), which are determined by monitoring organizations with respect to the receiving water quality and remediation capabilities. The purpose of this study is to compare two approaches to TMDL assignment for a WQT policy in the small catchment area of the Haraz River, in the north of Iran. In the first, TMDLs are assigned uniformly to all point sources to keep the concentrations of BOD and dissolved oxygen (DO) at the standard level at the checkpoint (terminus point). This was simulated and checked with the Qual2kw software. In the second scenario, TMDLs are assigned using the multi-objective particle swarm optimization (MOPSO) method, in which the environmental violation in the river basin and the total treatment costs are minimized simultaneously. In both scenarios, the equity index and the WLA based on trading discharge permits (TDP) are calculated. The comparative results showed that using economically optimized TMDLs (2nd scenario) yields slightly greater cost savings than the uniform TMDL approach (1st scenario). The former costs about 1 M$ annually, while the latter costs 1.15 M$. WQT can decrease these annual costs to 0.9 and 1.1 M$, respectively. In other words, these approaches may save 35 and 45% compared with a command-and-control policy. This means that using a multi-objective decision support system (DSS) may find a more economical WLA; however, its outcome is not necessarily significant in comparison with uniform TMDLs. This may be due to the similar impact factors of dischargers in small catchments. Conversely, using uniform TMDLs for WQT brings more equity, so that stakeholders are less likely to resent the difference between the TMDL and WQT allocations. In addition, for this case, determining TMDLs uniformly would be much easier for monitoring. Consequently, a uniform TMDL for the TDP market is recommended as a sustainable approach. However, economically optimized TMDLs can be used for larger watersheds.
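The sketch below illustrates the kind of swarm-based search used in the second scenario, reduced to a single-objective weighted sum of treatment cost and environmental violation. The cost and violation functions, bounds and weights are illustrative assumptions, not the study's MOPSO formulation or the Haraz River data.

```python
# Minimal particle swarm sketch for waste-load allocation (a simplified,
# weighted-sum stand-in for the MOPSO used in the study).
import numpy as np

rng = np.random.default_rng(1)
N_SOURCES, N_PARTICLES, ITERS = 5, 30, 200
LOAD_MAX = 100.0                       # kg/day upper bound per source (assumed)
CAPACITY = 250.0                       # assimilative capacity at the checkpoint (assumed)

def cost(tmdl):                        # stricter limits mean higher treatment cost
    return np.sum((LOAD_MAX - tmdl) ** 1.2, axis=-1)

def violation(tmdl):                   # penalty when total load exceeds capacity
    return np.maximum(tmdl.sum(axis=-1) - CAPACITY, 0.0)

def objective(tmdl, w_violation=10.0):
    return cost(tmdl) + w_violation * violation(tmdl)

x = rng.uniform(0, LOAD_MAX, (N_PARTICLES, N_SOURCES))   # candidate TMDL vectors
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # PSO velocity update
    x = np.clip(x + v, 0, LOAD_MAX)
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimised TMDLs (kg/day):", np.round(gbest, 1), "cost:", round(float(cost(gbest)), 1))
```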

Keywords: Waste load allocation (WLA), Water quality trading (WQT), Total maximum daily loads (TMDLs), Haraz River, Multi objective particle swarm optimization (MOPSO), Equity.

15 Enhancement Effect of Superparamagnetic Iron Oxide Nanoparticle-Based MRI Contrast Agent at Different Concentrations and Magnetic Field Strengths

Authors: Bimali Sanjeevani Weerakoon, Toshiaki Osuga, Takehisa Konishi

Abstract:

Magnetic Resonance Imaging Contrast Agents (MRI-CM) are significant in clinical and biological imaging as they have the ability to alter normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles, coated with dextran or carboxydextran, are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular-weight carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors like magnetic field strength, concentration, nanoparticle properties, pH and temperature. Therefore, this study was conducted to investigate the impact of field strength and different contrast concentrations on the enhancement effect of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma for a T2-weighted spin echo sequence at three magnetic field strengths: 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), and over a range of contrast concentrations. Relaxivities r1 and r2 (L mmol⁻¹ s⁻¹) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using a TR/TE of 2000 ms/100 ms. According to the reference literature, with increasing magnetic field strength the r1 relaxivity tends to decrease, while r2 did not show any systematic relationship with the selected field strengths. In parallel, the results of this study revealed that the signal intensity of Resovist tends to be higher at lower concentrations than at higher concentrations. The highest signal intensity was observed at the low field strength of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T and 3 T were found at concentrations of 0.05, 0.06 and 0.05 mmol/L, respectively. Furthermore, at concentrations higher than these, the signal intensity decreased exponentially. An inverse relationship was found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s⁻¹) with the concentration of Resovist was observed. From these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the field strength of MRI are two important parameters that affect the signal intensity of the T2-weighted SE sequence. Therefore, both parameters should be considered carefully in MR imaging.
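A minimal sketch of the simulation described above, assuming the standard spin-echo signal model S ∝ (1 − exp(−TR·R1))·exp(−TE·R2) with concentration-dependent relaxation rates R1 = R1,0 + r1·C and R2 = R2,0 + r2·C. The relaxivities, TR/TE and concentration list are taken from the abstract; the baseline plasma relaxation times are assumed values.

```python
# Sketch of the T2-weighted spin-echo simulation: signal vs. Resovist
# concentration at three field strengths. Baseline plasma T1_0/T2_0 are assumed.
import numpy as np

TR, TE = 2.0, 0.1                         # s (2000 ms / 100 ms)
T1_0, T2_0 = 1.5, 1.0                     # s, assumed baseline plasma values
relaxivity = {                            # (r1, r2) in L mmol^-1 s^-1 (from the abstract)
    "0.47 T": (15.0, 101.0),
    "1.5 T":  (7.4, 95.0),
    "3 T":    (3.3, 160.0),
}
conc = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5,
                 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, 3.0])      # mmol/L

def signal(c, r1, r2):
    R1, R2 = 1.0 / T1_0 + r1 * c, 1.0 / T2_0 + r2 * c     # relaxation rates, s^-1
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

for field, (r1, r2) in relaxivity.items():
    s = signal(conc, r1, r2)
    print(f"{field}: peak signal at {conc[s.argmax()]:.2f} mmol/L, max S = {s.max():.3f}")
```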

Keywords: Concentration, Resovist, Field strength, Relaxivity, Signal intensity.

14 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study to develop an axisymmetric model of a single stone column reinforced foundation was performed using PLAXIS 2D AE to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance and bearing capacity were quantified. It was found that the application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax. Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.

Keywords: Bearing capacity, design, installation, numerical analysis, settlement, stone column.

13 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity

Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta

Abstract:

During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures and systems. Considering the remoteness and time spans in which these vehicles will operate, EVA system operations should utilize common worksites, tools and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of the preparations need to be carried out based on studies of astronaut motions. Until now, development and training activities associated with the planned EVAs in Russian and U.S. space programs have relied almost exclusively on physical simulators. These experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations due to the fast developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA was initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into or out of the worksites. Our physics-based methodology helps fill the gap in quantitative analysis of astronaut EVA by providing a multisegment human arm model. The simulation work described in this study improves on the realism of previous efforts, incorporating joint stops to account for the physiological limits of the range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut’s task are studied through joint angles and torques. The simulation results obtained are validated against a numerical simulation based on the principles of the Newton-Euler method. Torques determined using the mathematical model are compared among subjects to assess the smoothness and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
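As a simplified stand-in for the Newton-Euler validation step, the sketch below evaluates closed-form inverse dynamics, tau = M(q)q̈ + C(q, q̇)q̇ + g(q), for a planar two-segment arm. The segment masses, lengths, inertias and the joint trajectory are illustrative assumptions, and gravity is set to zero to mimic orbital EVA conditions.

```python
# Simplified planar two-segment (shoulder-elbow) inverse dynamics sketch,
# equivalent in form to a Newton-Euler result for this toy model.
import numpy as np

m1, m2 = 2.0, 1.5                 # segment masses, kg (assumed)
l1, lc1, lc2 = 0.30, 0.15, 0.17   # upper-arm length and centre-of-mass offsets, m
I1, I2 = 0.02, 0.015              # segment inertias about their CoMs, kg m^2
g = 0.0                           # microgravity during orbital EVA

def arm_torques(q, dq, ddq):
    q1, q2 = q
    dq1, dq2 = dq
    c2, s2 = np.cos(q2), np.sin(q2)
    # Mass matrix
    M11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    M12 = m2*(lc2**2 + l1*lc2*c2) + I2
    M22 = m2*lc2**2 + I2
    M = np.array([[M11, M12], [M12, M22]])
    # Coriolis/centrifugal and gravity terms
    h = m2 * l1 * lc2 * s2
    C = np.array([-h*(2*dq1*dq2 + dq2**2), h*dq1**2])
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2),
                  m2*lc2*g*np.cos(q1 + q2)])
    return M @ ddq + C + G

# Example: torques along a smooth reach from joint angles (0, 0.2) to (1.0, 1.2) rad.
t = np.linspace(0.0, 2.0, 200)
q = np.stack([0.5*(1 - np.cos(np.pi*t/2)), 0.2 + 0.5*(1 - np.cos(np.pi*t/2))])
dq = np.gradient(q, t, axis=1)
ddq = np.gradient(dq, t, axis=1)
tau = np.array([arm_torques(q[:, i], dq[:, i], ddq[:, i]) for i in range(len(t))])
print("peak joint torques (N m):", np.round(np.abs(tau).max(axis=0), 3))
```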

Keywords: Extra vehicular activity, biomechanics, inverse kinematics, human body modeling.

12 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as an extension of a previous study in which in-class testing, using a paired t-test, showed that gamification did significantly improve the students’ understanding of subject material. Gamification in the classroom can range from high-end computer simulation software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly selected into groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with and get to know other students. 2. Students enjoyed the gamification process given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; therefore, it was an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that there should be an attempt to use gamification in today’s post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that rather than scheduling a gamification process and its post-quiz in the last week, a review for the final exam may have been more useful. A follow-up study aims to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. As well, the follow-up study aims to determine at what incremental level of time invested in classroom gamification no material incremental benefit to the student is produced, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process added to the increase of their curricular knowledge.
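For readers unfamiliar with the paired t-test referenced from the earlier study, a minimal example is sketched below; the pre- and post-gamification scores are made-up illustrative data, not results from either study.

```python
# Illustrative paired t-test: pre- vs. post-gamification quiz scores for the
# same students (example data only).
from scipy import stats

pre = [55, 62, 48, 70, 66, 59, 73, 64, 51, 68]   # scores before the game (%)
post = [63, 70, 55, 78, 71, 66, 80, 70, 58, 75]  # scores after the game (%)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Improvement is statistically significant at the 5% level.")
```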

Keywords: Gamification, inexpensive, qualitative advantages, post-secondary.

11 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change

Authors: Moustafa Osman Mohammed

Abstract:

The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of proposed models for optimizing a life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties, taking up the composite structure model (CSM) as an environmental management system (EMS) in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how natural systems’ inputs, outputs and outcomes influence the “framework measures” and gives a maximum likelihood estimation of how elements are simulated over the composite structure. The traditional modeling knowledge is based on physical dynamic and static patterns of the parameters that influence the environment. It unifies methods to demonstrate how construction systems ecology is interrelated, from a management perspective, in a procedure that reflects the effects of engineering systems on ecology, ultimately unifying technologies over an extensive range beyond the impact of constructions, such as energy systems. Sustainability broadens the socioeconomic parameters to a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering limitation focuses on autocatalytic control as a closed cellular system to naturally balance anthropogenic insertions or aggregated structural systems toward equilibrium as a steady, stable condition. Thereby, construction systems ecology incorporates the engineering and management schemes as a midpoint stage between biotic and abiotic components to predict the impact of constructions. The resulting theory of environmental obligation suggests either a procedure or a technique to be achieved in the sustainability impact of construction system ecology (SICSE), as a relative mitigation measure of deviation control, ultimately.

Keywords: Sustainability, constructions ecology, composite structure model, design structure matrix, environmental impact assessment, life cycle analysis, climate change.

10 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of the grid signal, such as phase and frequency. It also acts as a barrier to harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters like irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a larger step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase and frequency parameters as well as improving power quality. This technique offers compensation of harmonic currents and reactive power for both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found to be satisfactory.
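A minimal sketch of the variable-step P&O logic described above: the magnitude of dP/dV selects zone 0, 1 or 2 and hence the perturbation size. The zone thresholds, step sizes and the toy PV curve are illustrative assumptions, not the values used in the MATLAB model.

```python
# Variable-step Perturb & Observe sketch: fine step near the MPP (zone 0),
# larger steps away from it (zones 1 and 2).
import numpy as np

def pv_power(v):
    """Toy PV P-V curve (illustrative), with a maximum power point near 29 V."""
    i = 8.0 * (1.0 - np.exp((v - 36.0) / 3.0))
    return v * max(i, 0.0)

def step_size(dp, dv, thresholds=(0.5, 5.0), steps=(0.05, 0.5, 1.5)):
    """Pick the perturbation size from the |dP/dV| zone."""
    slope = abs(dp / dv) if dv != 0 else 0.0
    if slope < thresholds[0]:
        return steps[0]      # zone 0: close to the MPP
    if slope < thresholds[1]:
        return steps[1]      # zone 1
    return steps[2]          # zone 2: far from the MPP

v_prev, v_ref = 19.9, 20.0
p_prev = pv_power(v_prev)
for _ in range(60):                                   # MPPT iterations
    p = pv_power(v_ref)
    dp, dv = p - p_prev, v_ref - v_prev
    direction = 1.0 if dv == 0 else np.sign(dp) * np.sign(dv)   # classic P&O rule
    v_prev, p_prev = v_ref, p
    v_ref += direction * step_size(dp, dv)

print(f"tracked voltage ~ {v_ref:.1f} V, power ~ {pv_power(v_ref):.1f} W")
```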

Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.

9 Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Authors: Markus Nordlund, Arkadiusz K. Kuczaj

Abstract:

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), (named the Electrically Heated Tobacco System (EHTS) in this paper), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (Electrically Heated Tobacco Product (EHTP)) when inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS as well as for relevant gas mixture compositions measured in the EHTP aerosol, which consists mostly of water, glycerol and nicotine, but also contains other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use when operated in the Holder has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, which is mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state alone and therefore could not generate aerosol droplets from the multicomponent gas mixture at the operating conditions simulated. For the analytically characterized aerosol composition and estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. This implies that, according to the CNT, an aerosol former such as glycerol needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess whether these conclusions are sensitive to the initial amount of the minor compounds, and to include and represent the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore not actively contributing to the nucleation process. This implies that, according to the CNT, an aerosol cannot be generated without the help of an aerosol former from the multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP, even if all minor compounds are released or generated in a single puff.
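For a single condensing species, the CNT quantities that drive this behaviour can be sketched as below: an undersaturated vapour (S ≤ 1) has no finite nucleation barrier, whereas a supersaturated aerosol former does. The study itself uses an extended multicomponent CNT; the temperature, partial pressures and glycerol property values here are illustrative assumptions.

```python
# Single-species classical nucleation theory sketch: supersaturation S,
# critical radius r*, and nucleation barrier dG* (property values assumed).
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
N_A = 6.02214076e23         # Avogadro number, 1/mol

def cnt_barrier(p, p_sat, T, sigma, M, rho):
    """Return (supersaturation S, critical radius m, barrier in units of kT)."""
    S = p / p_sat
    if S <= 1.0:
        return S, np.inf, np.inf          # undersaturated vapour cannot nucleate
    v_m = M / (rho * N_A)                 # molecular volume, m^3
    lnS = np.log(S)
    r_star = 2.0 * sigma * v_m / (k_B * T * lnS)
    dG_star = 16.0 * np.pi * sigma**3 * v_m**2 / (3.0 * (k_B * T * lnS)**2)
    return S, r_star, dG_star / (k_B * T)

# Glycerol vapour cooling in the aerosol-forming region (values assumed).
T = 320.0                                 # K
sigma, M, rho = 0.063, 0.09209, 1261.0    # N/m, kg/mol, kg/m^3
for name, p, p_sat in [("glycerol", 5.0, 0.2), ("minor compound", 0.01, 1.0)]:
    S, r_star, barrier = cnt_barrier(p, p_sat, T, sigma, M, rho)
    print(f"{name}: S = {S:.2f}, r* = {r_star:.2e} m, dG*/kT = {barrier:.1f}"
          if np.isfinite(barrier) else f"{name}: S = {S:.2f} (no nucleation, S <= 1)")
```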

Keywords: Aerosol, Classical Nucleation Theory (CNT), Electrically Heated Tobacco Product (EHTP), Electrically Heated Tobacco System (EHTS), modeling, multicomponent, nucleation.

8 Impact of Liquidity Crunch on Interbank Network

Authors: I. Lucas, N. Schomberg, F-A. Couturier

Abstract:

Most empirical studies have analyzed how liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis has highlighted the importance of grasping and controlling systemic risk, and the willingness of central banks to ease their monetary policies to save defaulting or illiquid banks. This last point suggests that banks may pay less attention to liquidity risk, which, in turn, can become an important new channel of loss. Financial regulation focuses on the most important and “systemic” banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worth analyzing the sensitivity to this channel of the various elements of the global bank network. A small bank is not considered potentially systemic; however, the interaction of many small banks together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered as the core of the network. The proposed method uses an agent-based model structured in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second, to model a network as close as possible to the actual interbank market, 578 fictitious banks smaller than those belonging to the first class are split into two groups of small and medium-sized banks. All banks are active on the European interbank network and have deposit and market activity. A simulation of 12 three-month periods, representing a medium-term interval of three years, is projected. In each period, there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new demands for credit, and securities sales. The last two actions are part of the refunding process developed in this paper. To strengthen the reliability of the proposed model, the dynamics of random parameters such as rates are managed with stochastic equations, their variations being generated by the Vasicek model. The Central Bank is considered the lender of last resort, which allows banks to borrow at the REPO rate, and some conditions for the ejection of banks from the system are introduced.

A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or the aggregate of borrowing between classes. It is mainly shown that the three groups of the European interbank network do not have the same response, and that intermediate banks are the most sensitive to liquidity risk.
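A minimal Euler-Maruyama sketch of the Vasicek process, dr = a(b − r)dt + σ dW, used above to drive the stochastic rates; the parameter values are illustrative assumptions, while the 12 quarterly steps mirror the simulated horizon.

```python
# Euler-Maruyama simulation of the Vasicek short-rate process over 12 quarters.
import numpy as np

a, b, sigma = 0.8, 0.02, 0.01       # mean-reversion speed, long-run rate, volatility (assumed)
r0, periods, dt = 0.015, 12, 0.25   # 12 three-month periods (three years)

rng = np.random.default_rng(42)
rates = [r0]
for _ in range(periods):
    dW = rng.normal(0.0, np.sqrt(dt))
    rates.append(rates[-1] + a * (b - rates[-1]) * dt + sigma * dW)

print("simulated quarterly rates (%):", [round(100 * r, 2) for r in rates])
```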

Keywords: Systemic Risk, Financial Contagion, Liquidity Risk, Interbank Market, Network Model.

7 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been established under the administration of the Chinese government. The city's planning paradigm is primarily affected by its spatial structure and human behavior. The urban agglomeration is divided into several groups and centers, and in view of this effect the usual laws of city development are of limited use. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been utilized to improve data cleaning for sources such as business data, traffic data and population data. Prior to data mining, government data were collected by traditional means and then analyzed using city-relationship research, delaying the timeliness of urban development studies, especially for the contemporary city, whereas Internet-based data are updated very quickly. The city's points of interest (POIs), obtained through data mining, serve as a data source reflecting city design, while satellite remote sensing is used as a reference object; by conducting the city analysis in both directions, the administrative paradigm of government is broken and urban research is grounded in data. Therefore, the use of data mining in urban analysis is very important. The satellite remote sensing data of Shenzhen in July 2018 were measured by the MODIS satellite sensor and can be utilized to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified the data on Shenzhen using data crawler technology. Data on the Shenzhen heat island and points of interest were overlaid and analyzed on a GIS platform to discover the main features of the influence of the distribution of functional areas. Shenzhen extends in an east-west direction, and the city's main streets are laid out according to the direction of city development; therefore, the functional areas of the city are also distributed in the east-west direction. The urban heat island map can be read in terms of the functional urban areas, and the regional POIs show a corresponding pattern. The research results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Using urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.

Keywords: POI, satellite remote sensing, the population distribution, urban heat island thermal map.

6 Temporal Variation of Surface Runoff and Inter-Rill Erosion in Different Soil Textures of a Semi-Arid Region, Iran

Authors: Ali Reza Vaezi, Naser Fakori Ivand, Fereshteh Azarifam

Abstract:

Inter-rill erosion is the detachment and transfer of soil particles between the rills, which occurs due to the impact of raindrops and the shear stress of shallow surface runoff. This erosion can be affected by soil properties such as texture, organic matter content and the stability of soil aggregates. Information on the temporal variation of inter-rill erosion during a rainfall event and the effect of soil properties on it can help develop better methods of soil conservation on hillslopes. The importance of this study is especially great in semi-arid regions, where the soil is weakly aggregated and vegetation cover is mostly poor. Therefore, this research was conducted to investigate the temporal variation of surface flow and inter-rill erosion and the effect of soil properties on them in some semi-arid soils. A field experiment was carried out on eight different soil textures under simulated rainfall of uniform intensity. A total of twenty-four plots were installed for the eight study soils, with three replicates, in a randomized complete block design along the land. The plots were 1.2 m (length) × 1 m (width) and were laid out 3 m apart across the slope. Soil samples were then poured into the plots. Rainfall simulation experiments were carried out using a purpose-designed portable simulator at an intensity of 60 mm per hour for 60 minutes. Runoff production and soil loss were measured over 1 hour at 5-min intervals. Soil properties including particle size distribution, aggregate stability, bulk density, exchangeable sodium percentage (ESP) and hydraulic conductivity (Ks) were determined on the soil samples. Correlation and regression analyses were carried out to determine the effect of soil properties on runoff and inter-rill erosion. Results indicated that the study soils have both low organic matter content and low aggregate stability. The soils, except for the coarse textures, are calcareous and have relatively high ESP. Runoff production and soil loss did not occur in the sand texture, which was associated with its higher infiltration and drainage rates. A strong relationship was found between inter-rill erosion and surface runoff (R2 = 0.75, p < 0.01). The correlation analysis showed that surface runoff was significantly affected by soil properties consisting of sand, silt, clay, bulk density, gravel, Ks, lime (calcium carbonate), and ESP. The soils with lower Ks, such as fine-textured soils, produced higher surface runoff and more inter-rill erosion. In these soils, surface runoff production increased over time during rainfall and finally reached a peak after about 25-35 min. The time to peak was short (about 30 min) in fine-textured soils, especially clay, which was related to their lower infiltration rate.

Keywords: Erosion plot, rainfall simulator, soil properties, surface flow.

5 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase in effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there has been a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized Discrete Element Method (DEM), Particle Flow Code (PFC3D), which represents sands as granular individual elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond models of PFC3D, using a 10 m high and 10 m diameter solid cylindrical model. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised groups of four particles connected with tetrahedral flow pipes around a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validation of the flow rate–permeability relationship to verify Darcy’s fluid flow law. The effect of model size scaling on sanding was also investigated using a 4 m high, 2 m diameter model. The parallel bond model successfully calibrated the sample’s strength of 4.4 MPa, showing a sharp peak strength before strain-softening, similar to the behavior of real cemented sandstones. There appears to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, the flow rate had a linear relationship with permeability at constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
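The flow rate-permeability check mentioned in the parametric studies follows directly from Darcy's law, Q = kAΔp/(μL); the sketch below verifies the linearity for assumed geometry, viscosity and pressure drop (it is not PFC3D input).

```python
# Darcy-law check: at constant pressure head, flow rate Q is linear in
# permeability k. Geometry, viscosity and pressure drop are assumed values.
import numpy as np

A = np.pi * 1.0 ** 2            # cross-sectional area of a 2 m diameter model, m^2
L = 4.0                         # flow length (model height), m
mu = 1.0e-3                     # fluid viscosity, Pa s (water-like)
dP = 5.0e5                      # constant pressure drop, Pa

k = np.array([1e-14, 2e-14, 5e-14, 1e-13, 2e-13])    # permeability, m^2
Q = k * A * dP / (mu * L)                             # volumetric flow rate, m^3/s

ratio = Q / k
print("Q (m^3/s):", Q)
print("Q/k constant across cases (linear relationship):", np.allclose(ratio, ratio[0]))
```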

Keywords: Discrete Element Method, fluid flow, parametric study, sand production/bonds failure.

4 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: S. Aziz, B. Alexander, C. Gengnagel, S. Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the Building Information Modeling (BIM) software Revit. Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm which can modify the geometry with the goal of increasing absorption at resonance and increasing the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently the numerical models were compared to a set of 3D concrete printed physical twin models which were tested in a .25 m x .25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to grasshopper for further implementation in an interdisciplinary study.
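As a back-of-envelope companion to the parameter scan described above, the sketch below evaluates the classical single-resonator estimate f0 = (c/2π)·sqrt(S_neck/(V·L_eff)) over a range of bottleneck and cavity dimensions. The dimensions and end-correction factor are illustrative assumptions; the coupled-resonator behaviour studied in the paper is only captured by the full COMSOL model.

```python
# Classical Helmholtz resonance estimate scanned over neck and cavity sizes.
import numpy as np

c = 343.0                                   # speed of sound in air, m/s

def helmholtz_f0(neck_d, neck_l, cavity_v):
    s_neck = np.pi * (neck_d / 2.0) ** 2    # neck cross-section, m^2
    l_eff = neck_l + 1.7 * (neck_d / 2.0)   # end-corrected neck length (approx.)
    return c / (2.0 * np.pi) * np.sqrt(s_neck / (cavity_v * l_eff))

neck_diameters = np.array([0.010, 0.015, 0.020, 0.030])   # m (assumed bottlenecks)
cavity_volumes = np.array([0.25e-3, 0.5e-3, 1.0e-3])      # m^3 (0.25-1.0 litres, assumed)

for v in cavity_volumes:
    f = [helmholtz_f0(d, 0.04, v) for d in neck_diameters]
    print(f"V = {v * 1e3:.2f} L:", [f"{x:.0f} Hz" for x in f])
```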

Keywords: Acoustical design, additive manufacturing, computational design, multimodal optimization.

3 A Simulation Study of Direct Injection Compressed Natural Gas Spark Ignition Engine Performance Utilizing Turbulent Jet Ignition with Controlled Air Charge

Authors: Siyamak Ziyaei, Siti Khalijah Mazlan, Petros Lappas

Abstract:

Compressed natural gas (CNG) is primarily composed of methane (CH4) and has a lower carbon-to-hydrogen ratio than other hydrocarbon fuels such as gasoline (C8H18) and diesel (C12H23). Consequently, it has the potential to reduce CO2 emissions compared to conventional fuels. Although natural gas (NG) has environmental advantages compared to other hydrocarbon fuels, its main component, CH4, burns at a slower rate than conventional fuels, and a higher-pressure, leaner cylinder environment accentuates this slow-burn characteristic. Lean combustion and high compression ratios are well-known methods for increasing the efficiency of internal combustion engines. In order to achieve successful CNG lean combustion in Spark Ignition (SI) engines, a strong ignition system is essential to avoid engine misfires, especially in ultra-lean conditions. Turbulent Jet Ignition (TJI) is an ignition system that employs a pre-combustion chamber to ignite the lean fuel mixture in the main combustion chamber using a fraction of the total fuel per cycle. TJI enables ultra-lean combustion by providing distributed ignition sites through orifices. The fast burn rate provided by TJI enables the ordinary SI engine to be comparable to other combustion systems such as Homogeneous Charge Compression Ignition (HCCI) or Controlled Auto-Ignition (CAI) in terms of thermal efficiency, through increased levels of dilution without the need for sophisticated control systems. Due to the physical geometry of TJI, which contains small orifices that connect the pre-chamber to the main chamber, providing the right mixture of fuel and air has been identified as a key challenge, because an insufficient amount of air is pushed into the pre-chamber during each compression stroke. There is also the problem of scavenging, which is among the factors that reduce TJI performance: combustion residual gases such as CO2, CO and NOx from the previous combustion cycle dilute the pre-chamber fuel-air mixture, preventing rapid combustion in the pre-chamber. An air-controlled active TJI is presented in this paper in order to address these issues. By supplying air into the pre-chamber at a sufficient pressure, residual gases are exhausted and the air-fuel ratio is controlled within the pre-chamber, thereby improving the quality of the combustion. An investigation of the 3D combustion characteristics of a CNG-fueled SI engine using a direct injection fuelling strategy and employing an air channel in the pre-chamber is presented in this paper. Experiments and simulations were performed at the Worldwide Mapping Point (WWMP) of 1500 revolutions per minute (rpm) and 3.3 bar Indicated Mean Effective Pressure (IMEP), using only conventional spark plugs as a baseline. With a validated baseline engine simulation, the settings were fixed for all simulation scenarios at λ=1. Following that, the pre-chambers with and without an auxiliary fuel supply were simulated. In this study of a DI-CNG SI engine, active TJI was observed to perform better than passive TJI and conventional spark plug ignition. In conclusion, the active pre-chamber with an air channel demonstrated an improved thermal efficiency (ηth) over the other configurations and conventional spark ignition systems.

Keywords: Turbulent Jet Ignition, Active Air Control Turbulent Jet Ignition, Pre-chamber ignition system, Active and Passive Pre-chamber, thermal efficiency, methane combustion, internal combustion engine combustion emissions.

2 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
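A minimal sketch of the posterior update over candidate targets: each detection re-weights the targets by how consistent the detected node is with a shortest path toward them. The tiny graph, travel times and Gaussian detour likelihood are illustrative assumptions, not the PeMS-based network or the paper's exact likelihood model.

```python
# Bayesian target-posterior update over a toy camera-network graph.
import networkx as nx
import numpy as np

G = nx.DiGraph()
edges = [("A", "B", 4), ("B", "C", 3), ("B", "D", 3),
         ("C", "T1", 5), ("D", "T2", 4), ("C", "D", 4)]
G.add_weighted_edges_from(edges)        # weights = average travel time (min), assumed

targets = ["T1", "T2"]
posterior = {t: 1.0 / len(targets) for t in targets}   # uniform prior

def likelihood(node, target, start="A", sigma=3.0):
    """How consistent a detection at `node` is with heading to `target`."""
    try:
        detour = (nx.shortest_path_length(G, start, node, weight="weight")
                  + nx.shortest_path_length(G, node, target, weight="weight")
                  - nx.shortest_path_length(G, start, target, weight="weight"))
    except nx.NetworkXNoPath:
        return 1e-9
    return np.exp(-0.5 * (detour / sigma) ** 2)

for detection in ["B", "C", "T1"]:       # suspect detected at these nodes in turn
    for t in targets:
        posterior[t] *= likelihood(detection, t)
    z = sum(posterior.values())
    posterior = {t: p / z for t, p in posterior.items()}
    print(f"after detection at {detection}:", {t: round(p, 3) for t, p in posterior.items()})
```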

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

1 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics

Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones

Abstract:

Motor vehicle related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, which is a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike a ‘typical’ four-wheel motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open frame construction, a canvas roof and side curtains, a small driver’s cabin, handlebar controls and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle-impact-related pedestrian crash kinematic analyses and/or injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a Finite Element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre or offset) and relative pedestrian impact position (front, side and rear). The output data of each impact simulation were correlated against reported injury metrics, Head Injury Criterion (front, side and rear), Neck Injury Criterion (front, side and rear), Abbreviated Injury Scale and reported risk level, and add greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, which in some of the impact simulations may even risk fatalities. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to “real world” accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions which present the greatest risk of producing pedestrian injury.
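The sketch below shows how one of the reported injury metrics, the Head Injury Criterion, is evaluated from a resultant head acceleration trace: HIC = max over [t1, t2] of (t2 − t1)·(mean acceleration in g over the window)^2.5. The half-sine pulse is synthetic and only illustrates the calculation, not the simulation output.

```python
# Head Injury Criterion (HIC15) from a resultant head acceleration trace.
import numpy as np

def hic(t, a_g, max_window=0.015):
    """HIC15 from acceleration a_g (in g) sampled at times t (in seconds)."""
    # Cumulative trapezoidal integral of a_g so window averages are cheap.
    cum = np.concatenate(([0.0], np.cumsum((a_g[1:] + a_g[:-1]) / 2 * np.diff(t))))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:           # HIC15 limits the window to 15 ms
                break
            avg = (cum[j] - cum[i]) / dt  # mean acceleration over [t_i, t_j], g
            best = max(best, dt * avg ** 2.5)
    return best

# Synthetic 10 ms half-sine head pulse peaking at 120 g (illustrative only).
t = np.linspace(0.0, 0.010, 501)
a = 120.0 * np.sin(np.pi * t / 0.010)
print(f"HIC15 = {hic(t, a):.0f}")
```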

Keywords: Auto Rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact.
