Search results for: step drill
2544 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on Supervised Learning (SL), which suffers from several limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e. the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
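A minimal sketch can make the state-action-reward loop of such a continuous-action portfolio environment concrete. The class below is an illustrative toy, not the authors' implementation: the 0.1% proportional transaction cost and the price-only observation (standing in for the ten technical indicators and news sentiment) are assumptions made for the example.

```python
import numpy as np

class PortfolioEnv:
    """Toy continuous-action trading environment in the spirit of the
    POMDP formulation above. Observation = current price row (a stand-in
    for the technical-indicator and sentiment features); action = target
    portfolio weights over the assets, the remainder held in cash."""

    def __init__(self, prices, cost=0.001):
        self.prices = np.asarray(prices, dtype=float)  # (T, n_assets)
        self.cost = cost                               # proportional transaction cost
        self.reset()

    def reset(self):
        self.t = 0
        self.weights = np.zeros(self.prices.shape[1])  # start all-cash
        self.value = 1.0
        return self.prices[self.t]

    def step(self, action):
        # Continuous action: gradually re-allocate toward target weights.
        target = np.clip(action, 0.0, None)
        if target.sum() > 1.0:
            target = target / target.sum()
        prev_value = self.value
        turnover = np.abs(target - self.weights).sum()
        self.value *= 1.0 - self.cost * turnover       # pay trading costs
        asset_ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        self.value *= 1.0 + float(target @ asset_ret)  # portfolio return
        self.weights = target
        self.t += 1
        done = self.t >= len(self.prices) - 1
        reward = np.log(self.value / prev_value)       # log-return reward
        return self.prices[self.t], reward, done
```

A TD3 agent from any DRL library would then learn a deterministic policy mapping such observations to weight vectors.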
From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
Keywords: stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
Procedia PDF Downloads 178
2543 Synthesis and Characterization of Some Novel Carbazole Schiff Bases (OLED)
Authors: Baki Cicek, Umit Calisir
Abstract:
Carbazoles have been the subject of many studies from the 1960s to the present, and interest still continues. In 1987, the first organic diode device was developed. Thanks to that study, light-emitting devices have been investigated, developed, and used in commercial applications. Nowadays, OLED (Organic Light Emitting Diode) technology is used in many electronic screens (mobile phones, computer monitors, televisions, etc.), and carbazoles have been studied extensively as semiconductor materials. Although this technology is in common and widespread use, it is still at the development stage. Metal complexes of these compounds are used as pigment dyes because they are colored substances, and in polymer technology, the medicine industry, agriculture, rocket fuel-oil preparation, the detection of some biological events, etc. Besides all of these, work on Schiff base synthesis continues intensively. In this study, some novel carbazole Schiff bases were synthesized starting from carbazole. For that purpose, carbazole was first alkylated. After purification, the N-substituted carbazole was nitrated to synthesize 3-nitro-N-substituted and 3,6-dinitro-N-substituted carbazoles. In the next step, the nitro group/groups were reduced to amines and purified using silica-gel column chromatography. In the last step of our study, the synthesized 3,6-diamino-N-substituted and 3-amino-N-substituted carbazoles were reacted with aldehydes in condensation reactions. 3-(imino-p-hydroxybenzyl)-N-isobutylcarbazole, 3-(imino-2,3,4-trimethoxybenzene)-N-butylcarbazole, 3-(imino-3,4-dihydroxybenzene)-N-octylcarbazole, 3-(imino-2,3-dihydroxybenzene)-N-octylcarbazole, and 3,6-di(α-imino-β-naphthol)-N-hexylcarbazole were synthesized. All synthesized compounds were characterized by FT-IR, 1H-NMR, 13C-NMR, and LC-MS.
Keywords: carbazole, carbazole Schiff base, condensation reactions, OLED
Procedia PDF Downloads 443
2542 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero-strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations are given. Thus designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero-strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero-strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model.
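The reduced cross-section method in question is simple enough to sketch numerically. The function below applies the standard-fire version of the rule (notional charring rate times fire duration, plus the fixed 7 mm zero-strength layer) to a beam charred on three sides; it illustrates the rule whose parametric-fire validity the study questions, not the study's own advanced model, and the default charring rate is the Eurocode value for solid softwood.

```python
def effective_section(b, h, t, beta_n=0.8, d0=7.0, k0=1.0):
    """EN 1995-1-2 reduced cross-section method for a timber beam exposed
    to fire on three sides (both vertical faces and the soffit).
    b, h: initial width and depth in mm; t: fire duration in minutes;
    beta_n: notional charring rate in mm/min (0.8 for solid softwood);
    d0: zero-strength layer in mm; k0 = 1.0 for t >= 20 min.
    Returns the effective width and depth in mm."""
    d_char = beta_n * t              # notional char depth
    d_ef = d_char + k0 * d0          # char depth plus zero-strength layer
    b_ef = max(b - 2.0 * d_ef, 0.0)  # both vertical faces char
    h_ef = max(h - d_ef, 0.0)        # only the bottom face chars
    return b_ef, h_ef
```

For a 100 x 200 mm beam after 30 minutes this leaves a 38 x 169 mm effective section, whose section modulus is then used with unreduced material strengths.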
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char-depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero-strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength-layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
Procedia PDF Downloads 169
2541 A Critical Discourse Analysis of Protesters in the Debates of Al Jazeera Channel of the Yemeni Revolution
Authors: Raya Sulaiman
Abstract:
Critical discourse analysis investigates how discourse is used to abuse power relationships. Political debates constitute discourses which mirror aspects of ideologies. The Arab world has been one of the most unsettled zones in the world and has dominated global politics due to the Arab revolutions which started in 2010. This study aimed at uncovering the ideological intentions in the formulation and circulation of hegemonic political ideology in the TV political debates of the 2011-2012 Yemeni revolution, and how ideology was used as a tool of hegemony. The study specifically examined the ideologies associated with the use of the protesters as a social actor. The data consisted of four debates (17,350 words) from four live debate programs: The Opposite Direction, In Depth, Behind the News, and the Revolution Talk, staged on the Al Jazeera TV channel between 2011 and 2012. The data were readily transcribed by Al Jazeera online. Al Jazeera was selected for the study because it is the most popular TV network in the Arab world and has a strong presence, especially during the Arab revolutions; it has also been accused of inciting protests across the Arab region. Two debate sides were identified in the data: government and anti-government. The government side represented the president Ali Abdullah Saleh and his regime, while the anti-government side represented the protest squares, which demanded that the president 'step down'. The study analysed verbal discourse aspects of the debates using critical discourse analysis, specifically aspects from van Leeuwen's Social Actor Network model. This framework provides a step-by-step analysis model and moves from specific grammatical processes to broader semantic issues. It also provides representative findings, since it considers discourse as representative of and reconstructed in social practice.
Study findings indicated that Al Jazeera and the anti-government side had similar ideological intentions related to the protesters. Al Jazeera victimized and incited the protesters, similarly to the anti-government side, using assimilation, nominalization, and active role allocation as the linguistic means to realize its ideological intentions related to the protesters. Government speakers did not share the same ideological intentions with Al Jazeera. The findings also indicated that Al Jazeera had excluded the government from its debates, violating its slogan, 'the opinion and the other opinion'. This study implies the powerful role of discourse in shaping ideological media intentions and influencing the media audience.
Keywords: Al Jazeera network, critical discourse analysis, ideology, Yemeni revolution
Procedia PDF Downloads 224
2540 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor
Authors: Yash Jain
Abstract:
The US Centers for Disease Control has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a person is wearing a mask correctly, wearing a mask incorrectly, or not wearing a mask at all. Utilizing two distinct datasets from the open-source website Kaggle, a mask detection network was trained. The first dataset used to train the model was titled 'Face Mask Detection'; the second was titled 'Face Mask Dataset' and provided the data in YOLO format so that the TinyYoloV3 model could be trained. Based on the data from Kaggle, two machine learning models were implemented and trained: a real-time TinyYoloV3 model and a two-stage neural network classifier. The two-stage classifier first identifies distinct faces within the image and then classifies the state of the mask on each face: worn correctly, worn incorrectly, or no mask at all. TinyYoloV3 was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the Darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the TinyYoloV3 real-time detector attained a mean average precision (mAP) of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly-worn-mask scenarios.
Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier
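The two-stage design described above can be sketched as a small, model-agnostic pipeline. The face detector and the per-face mask classifier are injected as callables because the abstract does not name the concrete models; the box layout and label strings are illustrative assumptions.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]   # x, y, width, height of a detected face
Label = str                       # 'mask', 'incorrect', or 'no_mask'

def classify_frame(frame,
                   detect_faces: Callable[[object], List[Box]],
                   classify_mask: Callable[[object, Box], Label]
                   ) -> List[Tuple[Box, Label]]:
    """Stage 1: locate every face in the frame. Stage 2: run the
    mask-state classifier on each detected face crop.
    Returns one (box, label) pair per face."""
    return [(box, classify_mask(frame, box)) for box in detect_faces(frame)]
```

In a real-time loop, each camera frame would be passed through this function and the labels overlaid on the video feed.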
Procedia PDF Downloads 163
2539 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice
Authors: Tarek Abdoun, Ricardo Dobry
Abstract:
This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, helped by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, in a more speculative way, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.
Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts
Procedia PDF Downloads 297
2538 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of the grid signal, like phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system conform to grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters like irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P & O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1, and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter (ANF) based control technique is proposed for the VSC in the PV generation system. The ANF approach is used to synchronize the interfaced PV system with the grid so as to maintain the amplitude, phase, and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed, and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
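The zoned variable-step idea can be sketched in a few lines. The function below is a hedged illustration of one perturb-and-observe update (the paper itself works in MATLAB/Simulink with a boost converter; the threshold and step sizes here are invented for the example): a fine step near the maximum power point where |dP/dV| is small (zone 0), and a coarse step farther away (zones 1 and 2) for tracking speed.

```python
def po_step(v, p, v_prev, p_prev, slope_thresh=0.05,
            step_fine=0.1, step_coarse=1.0):
    """One variable-step perturb-and-observe MPPT update.
    v, p: latest PV voltage and power; v_prev, p_prev: previous sample.
    Near the MPP (|dP/dV| < slope_thresh, 'zone 0') a fine voltage step
    limits oscillation; otherwise ('zones 1-2') a coarse step speeds up
    tracking. Returns the next reference voltage."""
    dv, dp = v - v_prev, p - p_prev
    slope = dp / dv if dv != 0 else 0.0
    step = step_fine if abs(slope) < slope_thresh else step_coarse
    direction = 1.0 if dp * dv > 0 else -1.0   # climb toward higher power
    return v + direction * step
```

On a toy quadratic P-V curve the reference voltage walks up to the maximum power point and then oscillates around it, which is the behavior the step-size zoning is meant to control.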
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, like PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
Procedia PDF Downloads 594
2537 Progressive Collapse of Cooling Towers
Authors: Esmaeil Asadzadeh, Mehtab Alam
Abstract:
Well-documented records of past structural failures reveal that progressive collapse is one of the major causes of dramatic human loss and economic consequences. Progressive collapse is the failure mechanism in which a structure fails gradually due to the sudden removal of structural elements, which results in excessive redistributed loads on the remaining ones. This sudden removal may be caused by sudden loading from a local explosion, impact loading, or a terrorist attack. Hyperbolic thin-walled concrete shell structures, an important part of nuclear and thermal power plants, are always prone to such attacks. In concrete structures, gradual failure takes place through the generation of initial cracks and their propagation in the supporting columns along with the tower shell, leading to the collapse of the entire structure. In this study, the mechanism of progressive collapse for such high-rise towers is simulated employing the finite element method. The aim of this study is to provide clear, conceptual, step-by-step descriptions of the various procedures for progressive collapse analysis using commercially available finite element structural analysis software, with the intention that the explanations be clear enough to be readily understandable and usable by practicing engineers. The study is carried out in the following steps: 1. explanation of the modeling, simulation, and analysis procedures, including input screen snapshots; 2. interpretation of the results and discussion; 3. conclusions and recommendations.
Keywords: progressive collapse, cooling towers, finite element analysis, crack generation, reinforced concrete
Procedia PDF Downloads 481
2536 Molecular Simulation of NO, NH3 Adsorption in MFI and H-ZSM5
Authors: Z. Jamalzadeh, A. Niaei, H. Erfannia, S. G. Hosseini, A. S. Razmgir
Abstract:
With the development of industry, emissions of pollutants such as NOx, SOx, and CO2 have rapidly increased. Generally, NOx refers to the mono-nitrogen oxides NO and NO2, which are among the most important atmospheric contaminants. Hence, controlling the emission of nitrogen oxides is environmentally urgent. Selective Catalytic Reduction (SCR) of NOx is one of the most common techniques for NOx removal, in which zeolites find wide application due to their high performance. In zeolitic processes, the catalytic reaction occurs mostly in the pores; therefore, investigating the adsorption of the molecules is important in order to gain insight into and understand the catalytic cycle. Hence, in the current study, molecular simulation is applied to study the adsorption phenomena in nanocatalysts used for the SCR of NOx. The effect of cation addition to the support on the catalysts' behaviour in the adsorption step was explored by Monte Carlo (MC) simulation. A simulation time of 1 ns with a 1 fs time step, the COMPASS27 force field, and a cutoff radius of 12.5 Å were applied for the runs. It was observed that the adsorption capacity increases in the presence of cations. The sorption isotherms demonstrated type I behaviour, and the sorption capacity diminished with increasing temperature, whereas an increase was observed at high pressures. Besides, NO showed a higher sorption capacity than NH3 in H-ZSM5. In this respect, the energy distributions signified that the molecules adsorb at just one sorption site on the catalyst, and the sorption energy of NO was stronger than that of NH3 in H-ZSM5. Furthermore, the isosteric heats of sorption showed nearly the same values for the two molecules; however, they indicated stronger interactions of NO with the H-ZSM5 zeolite compared to the isosteric heat of NH3, which was low in value.
Keywords: Monte Carlo simulation, adsorption, NOx, ZSM5
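As a hedged illustration of the kind of sorption simulation described above (the actual study used the COMPASS force field in an atomistic zeolite framework), even a grand-canonical Monte Carlo run on a lattice of independent adsorption sites reproduces a type I isotherm; the site energy and site count below are invented for the example.

```python
import math
import random

def gcmc_occupancy(mu, eps=-2.0, n_sites=200, beta=1.0,
                   n_steps=20000, seed=1):
    """Toy grand-canonical Monte Carlo on independent adsorption sites.
    mu: chemical potential (rises with gas pressure); eps: adsorption
    energy of one site. Each step proposes flipping the occupancy of a
    random site and accepts with the Metropolis probability for the
    grand-canonical weight exp(-beta*(E - mu*N)). Returns the mean
    fractional occupancy, i.e. one point of the sorption isotherm."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    n_occ = 0
    acc = 0
    for _ in range(n_steps):
        i = rng.randrange(n_sites)
        sign = 1.0 if not occupied[i] else -1.0   # insertion vs. removal
        d_grand = sign * (eps - mu)               # change in E - mu*N
        if rng.random() < math.exp(min(0.0, -beta * d_grand)):
            occupied[i] = not occupied[i]
            n_occ += 1 if occupied[i] else -1
        acc += n_occ
    return acc / (n_steps * n_sites)
```

Sweeping mu from low to high traces out the type I (Langmuir-like) isotherm; for independent sites the exact occupancy is 1/(1 + exp(beta*(eps - mu))), a useful check on the sampler.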
Procedia PDF Downloads 378
2535 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the Magnetocardiography (MCG) signal, which is buried in noise, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement is mainly divided into three parts. First, higher-order TV is applied to reduce the step effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
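The baseline that the paper improves on (first-order TV denoising solved by majorization-minimization) can be sketched as follows; this is the standard formulation, not the paper's adaptive variant, and the dense linear algebra and test signal are illustrative choices.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-8):
    """1D total-variation denoising by majorization-minimization (MM).
    Minimizes 0.5*||y - x||^2 + lam*||D x||_1 with D the first-order
    difference matrix; the paper's improvement replaces D by a
    second-order difference matrix to reduce the staircase ('step')
    effect. Dense solves keep the sketch short; production code would
    exploit the banded structure of D D^T."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    D = np.diff(np.eye(N), axis=0)            # (N-1, N) difference matrix
    x = y.copy()
    for _ in range(n_iter):
        Lam = np.diag(np.abs(D @ x) + eps)    # MM majorizer weights
        # Minimizer of the majorized objective (Woodbury identity):
        # x = y - D^T (D D^T + Lam/lam)^{-1} D y
        x = y - D.T @ np.linalg.solve(D @ D.T + Lam / lam, D @ y)
    return x
```

On a noisy step signal the output is nearly piecewise constant, which both shows the method working and hints at why a first-order penalty produces staircase artifacts on smooth peaks such as those in MCG traces.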
Procedia PDF Downloads 153
2534 Evaluation of Synthesis and Structure Elucidation of Some Benzimidazoles as Antimicrobial Agents
Authors: Ozlem Temiz Arpaci, Meryem Tasci, Hakan Goker
Abstract:
Benzimidazole, a structural isostere of the indole and purine nuclei that can interact with biopolymers, can be regarded as a master key; benzimidazole compounds are therefore important fragments in medicinal chemistry because of their wide range of biological activities, including antimicrobial activity. We planned to synthesize some benzimidazole compounds as new antimicrobial drug candidates. In this study, we placed various heterocyclic rings at the second position and an amidine group at the fifth position of the benzimidazole ring and synthesized the compounds using a multi-step procedure. For the synthesis, as the first step, 4-chloro-3-nitrobenzonitrile was reacted with cyclohexylamine in dimethylformamide. The imidate esters (compound 2) were then prepared with absolute ethanol saturated with dry HCl gas. These imidate esters, which were not very stable, were converted to compound 3 by passing ammonia gas through ethanol. Over a Pd/C catalyst, the nitro group was reduced to an amine group (compound 4). Finally, various aldehyde derivatives were reacted as their sodium metabisulfite addition products to give compounds 5-20. Melting points were determined on a Buchi B-540 melting point apparatus in open capillary tubes and are uncorrected. Elemental analyses were done on a Leco CHNS 932 elemental analyzer. 1H-NMR and 13C-NMR spectra were recorded on a Varian Mercury 400 MHz spectrometer using DMSO-d6. Mass spectra were acquired on a Waters Micromass ZQ using the ESI(+) method. The structures were supported by spectral data: the 1H-NMR, 13C-NMR, and mass spectra and the elemental analysis results agree with the proposed structures. Antimicrobial activity studies of the synthesized compounds are under investigation.
Keywords: benzimidazoles, synthesis, structure elucidation, antimicrobial
Procedia PDF Downloads 156
2533 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle
Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad
Abstract:
Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and likewise in geometry. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square, and the hexagon. Owing to its nature, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. So, in order to explain the principles of geometric interaction between the square and the equal (equilateral) triangle, the first step of the definition illustrates all types of their linear forces individually, and the second step illustrates those between them. In this analysis, some angles are created from the intersection of their directions. All angles are categorized into groups, and the mathematical relationships among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing plan designs.
Keywords: angle, equal triangle, square, structural hierarchy
Procedia PDF Downloads 196
2532 The Metabolism of Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria
Authors: Yusuf U. Datti
Abstract:
It is becoming increasingly clear that the level of resource consumption now enjoyed in developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way as western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no studies of energy metabolism in Nigeria; the majority of such studies have been done only in developed countries. While research in Nigeria concentrates on other aspects of sustainability, such as water supply, sewage disposal, energy supply, energy efficiency, and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and the impact that this flow has on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria, describe measures to address the impact of existing energy sources, and suggest alternative renewable energy sources in Nigeria that will lower greenhouse gas emissions. This study will investigate the metabolism of energy in Nigeria through a three-part methodology. The first step involves selecting and defining the study area and some variables that affect the energy output (time of year, stability of the country, income level, literacy rate, and population).
The second step involves analyzing, categorizing, and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing what effect the variables have on the environment. To ensure a representative sample, the study area selected is Africa's most populous country, whose economy is the continent's second biggest and which is among the largest oil-producing countries in the world. This is due to the understanding that countries with large economies and dense populations are ideal places to examine sustainability strategies; hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative environmental effects.
Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions, sustainability
Procedia PDF Downloads 183
2531 Adjustment of the Whole-Body Center of Mass during Trunk-Flexed Walking across Uneven Ground
Authors: Soran Aminiaghdam, Christian Rode, Reinhard Blickhan, Astrid Zech
Abstract:
Despite considerable study of the impact of imposed trunk posture on human walking, less is known about such locomotion while negotiating changes in ground level. The aim of this study was to investigate the behavior of the vertical position of the whole-body center of mass (VBCOM) in response to a two-fold expected perturbation, namely alterations in body posture and in ground level. To this end, the kinematic data and ground reaction forces of twelve able-bodied participants were collected. We analyzed the VBCOM above the ground, determined by the body segmental analysis method relative to the laboratory coordinate system, at the touchdown and toe-off instants during walking across uneven ground (characterized by a perturbation contact, a visible 10-cm drop, and pre- and post-perturbation contacts) in comparison to unperturbed level contacts, while maintaining three postures (regular erect, ~30° and ~50° of trunk flexion from the vertical). The VBCOM was normalized to the distance between the greater trochanter marker and the lateral malleolus marker at the instant of touchdown. Moreover, we calculated the backward rotation during step-down as the difference between the maximum trunk angle in the pre-perturbation contact and the minimum trunk angle in the perturbation contact. Two-way repeated measures ANOVAs revealed contact-specific effects of posture on the VBCOM at touchdown (F = 5.96, p = 0.00). As indicated by the analysis of simple main effects, during unperturbed level and pre-perturbation contacts, no between-posture differences in the VBCOM at touchdown were found. In the perturbation contact, trunk-flexed gaits showed a significant increase of the VBCOM compared to the pre-perturbation contact. In the post-perturbation contact, the VBCOM demonstrated a significant decrease in all gait postures relative to the preceding corresponding contacts, with no between-posture differences.
Main effects of posture revealed that the VBCOM at toe-off significantly decreased in trunk-flexed gaits relative to the regular erect gait. For the main effect of contact, the VBCOM at toe-off changed across perturbation and post-perturbation contacts as compared to the unperturbed level contact. Furthermore, participants exhibited a backward trunk rotation during step-down, possibly to control the angular momentum of the whole body. A more pronounced backward trunk rotation (2- to 3-fold compared with level contacts) in trunk-flexed walking contributed to the observed elevated VBCOM during the step-down, which may have facilitated drop negotiation. These results may shed light on the interaction between posture and locomotion in able-bodied gait, and specifically on the behavior of the body center of mass during perturbed locomotion.
Keywords: center of mass, perturbation, posture, uneven ground, walking
Procedia PDF Downloads 182
2530 Anesthetic Considerations for Carotid Endarterectomy: Prospective Study Based on Clinical Trials
Authors: Ahmed Yousef A. Al Sultan
Abstract:
Introduction: This review is based on clinical research studying the changes in middle cerebral artery velocity using Transcranial Doppler (TCD) and cerebral oxygen saturation using cerebral oximetry in patients undergoing carotid endarterectomy (CEA) surgery under local anesthesia (LA). Patients with or without neurological symptoms during surgery took part in this study, which used the triplet method of cerebral oximetry, Transcranial Doppler, and awake testing to detect cerebral ischemic symptoms. Methods: About one hundred patients took part during their CEA surgeries under local anesthesia, assessed with the triple method described above; patients requiring general anesthesia were excluded from the analysis. All data were recorded separately at eight surgery stages. Results: In total, regional cerebral oxygen saturation (rSO2), middle cerebral artery (MCA) velocity, and pulsatility index decreased significantly during the carotid artery clamping step of CEA procedures on the targeted carotid side, with the largest observed changes in MCA velocity. Discussion: Cerebral oxygen saturation and middle cerebral artery velocity decreased significantly during the clamping step of the procedures on the targeted side. The group with neurological symptoms during the procedures showed greater changes in rSO2 and MCA velocity than the group without neurological symptoms. Cerebral rSO2 and MCA velocity increased significantly directly after de-clamping of the internal carotid artery on the affected side.
Keywords: awake testing, carotid endarterectomy, cerebral oximetry, Transcranial Doppler
Procedia PDF Downloads 169
2529 Gas While Drilling (GWD) Classification in Betara Complex; An Effective Approachment to Optimize Future Candidate of Gumai Reservoir
Authors: I. Gusti Agung Aditya Surya Wibawa, Andri Syafriya, Beiruny Syam
Abstract:
The Gumai Formation, which acts as the regional seal for the Talang Akar Formation, is one of the most prolific reservoirs in the South Sumatra Basin and the primary exploration target in this area. Marine conditions were eventually established as the transgression sequence continued, leading to open-marine facies deposition in the Early Miocene. The lithology is dominated by marine clastic deposits in which calcareous shales, claystones, and siltstones are interbedded with fine-grained calcareous and glauconitic sandstones, which are targeted as the hydrocarbon reservoir. Until now, the main objective of PetroChina’s exploration and production in the Betara area has been the Lower Talang Akar Formation only. Successful testing in some exploration wells, which flowed gas and condensate from the Gumai Formation, opened the opportunity to develop a new reservoir objective in the Betara area. The limitations of conventional wireline log data in the Gumai interval pose a technical challenge for the geological approach. The Gas While Drilling indicator was therefore utilized with the objective of determining the next Gumai reservoir candidate capable of increasing Jabung hydrocarbon discoveries. This paper describes how the Gas While Drilling indicator is processed to separate potential from non-potential zones by cut-off analysis. Validation, performed by correlation and comparison with well logs, Drill Stem Test (DST), and Reservoir Performance Monitor (RPM) data, succeeded in observing the Gumai reservoir in the Betara Complex. After integrating all the data, we were able to generate a Betara Complex potential map overlaid with the reservoir characterization distribution as part of a risk assessment of potential zone presence. Mud log utilization and geophysical data successfully addressed the geological challenges in this study.
Keywords: Gumai, gas while drilling, classification, reservoir, potential
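The cut-off analysis described in this abstract can be sketched as a simple thresholding pass over the gas readings. This is a minimal illustration only: the function name, the merging of contiguous flagged samples into depth intervals, and the example cut-off value are assumptions, not the paper's actual workflow.

```python
def classify_gwd(gas_readings, depths, cutoff):
    """Flag potential zones where the gas-while-drilling (GWD) indicator
    meets or exceeds a cut-off; contiguous flagged samples are merged
    into (top, base) depth intervals."""
    zones = []
    start = None
    for depth, gas in zip(depths, gas_readings):
        if gas >= cutoff and start is None:
            start = depth                    # open a potential zone
        elif gas < cutoff and start is not None:
            zones.append((start, depth))     # close the zone at this depth
            start = None
    if start is not None:                    # zone still open at final depth
        zones.append((start, depths[-1]))
    return zones
```

In practice, the zones produced this way would then be validated against well logs, DST, and RPM data, as the abstract describes.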
Procedia PDF Downloads 356
2528 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as CCV (Color Coherent Vector), are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is thus reduced to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is used to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region among the N*N pairs to base the image index. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than simply relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
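The local-histogram ranking rule discussed above (N local histograms per image, N*N pairwise Euclidean distances, minimum taken for ranking) can be sketched as follows. This is a minimal illustration under stated assumptions: the blocks here are non-overlapping for simplicity (the paper uses overlapping ones), and the grid size, bin count, and function names are illustrative.

```python
import numpy as np

def local_histograms(image, n_side=3, bins=8):
    """Split an RGB image into n_side x n_side blocks (weak segmentation,
    non-overlapping here) and compute a normalized color histogram per block."""
    h, w, _ = image.shape
    hists = []
    for i in range(n_side):
        for j in range(n_side):
            block = image[i * h // n_side:(i + 1) * h // n_side,
                          j * w // n_side:(j + 1) * w // n_side]
            hist, _ = np.histogramdd(block.reshape(-1, 3).astype(float),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            hist = hist.ravel()
            hists.append(hist / hist.sum())       # normalize to sum 1
    return np.array(hists)                        # shape (N, bins**3)

def min_block_distance(hists_a, hists_b):
    """Euclidean distance between every pair of local histograms
    (N*N values); the minimum is the value used to rank images."""
    diffs = hists_a[:, None, :] - hists_b[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)         # shape (N, N)
    return dists.min(), np.unravel_index(dists.argmin(), dists.shape)
```

The returned index pair identifies which sub-region pair produced the minimum, which is exactly the "which sub-region to trust" question the abstract raises.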
Procedia PDF Downloads 360
2527 Evaluating the Effectiveness of Plantar Sensory Insoles and Remote Patient Monitoring for Early Intervention in Diabetic Foot Ulcer Prevention in Patients with Peripheral Neuropathy
Authors: Brock Liden, Eric Janowitz
Abstract:
Introduction: Diabetic peripheral neuropathy (DPN) affects 70% of individuals with diabetes [1]. DPN causes a loss of protective sensation, which can lead to tissue damage and diabetic foot ulcer (DFU) formation [2]. These ulcers can result in infections and lower-extremity amputations of toes, the entire foot, or the lower leg. Even after a DFU has healed, recurrence is common, with 49% of DFU patients developing another ulcer within a year and 68% within 5 years [3]. This case series examines the use of sensory insoles, newly available plantar data (pressure, temperature, step count, adherence), and remote patient monitoring in patients at risk of DFU. Methods: Participants were provided with custom-made sensory insoles that monitor plantar pressure, temperature, step count, and daily use, and received real-time cues for pressure offloading as they went about their daily activities. The sensory insoles were used to track subject compliance, ulceration, and response to feedback from real-time alerts. Patients were remotely monitored by a qualified healthcare professional, who contacted them when areas of concern were seen, coached them on reducing risk factors, and provided overall support to improve foot health. Results: Of the 40 participants provided with the sensory insole system, 4 presented with a DFU. Based on flags generated from the available plantar data, patients were contacted by the remote monitor to address potential concerns. A standard clinical escalation protocol detailed when and how concerns should be escalated to the provider by the remote monitor. Upon escalation to the provider, patients were brought into the clinic as needed, allowing issues to be addressed before more serious complications could arise. Conclusion: This case series explores the use of innovative sensory technology to collect plantar data (pressure, temperature, step count, and adherence) for DFU detection and early intervention.
The results from this case series suggest the importance of sensory technology and remote patient monitoring in providing proactive, preventative care for patients at risk of DFU. These rich plantar data, combined with remote patient monitoring, allow patients to be seen in the clinic when concerns arise, giving providers the opportunity to intervene early and prevent more serious complications, such as wounds, from occurring.
Keywords: diabetic foot ulcer, DFU prevention, digital therapeutics, remote patient monitoring
Procedia PDF Downloads 77
2526 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction
Authors: Akaki Qatamidze
Abstract:
Implementation of the project will be a step forward for the reliability and development of construction in Georgia. Completion of the project is expected to result in a complete body of knowledge for assessing the technical condition of concrete and stone structures. The method is based on a detailed examination of the structure, in order to establish the damage and the possibility of eliminating it or changing the structural scheme under new requirements and architectural preservation constraints. In this research project on reinforced concrete and stone structures, systematic analysis is an important approach for optimizing the research process and developing new knowledge in neighboring areas. In addition, the rational agreement of physical and mathematical models is a main pillar: physical (in-situ) data underpin the mathematical calculation models, and physical experiments are used only for the specification and verification of the calculation model. To investigate the causes of defects and failures of reinforced concrete and stone constructions more effectively, with maximum automation and reduced expenditure of resources, a methodological concept based on system analysis is recommended, as one of the major particularities of modern science and technology. It will allow all structures of the same family to be identified through the same work stages and procedures, which makes it possible to exclude subjectivity and address the problem in the optimal direction. The paper discusses the methodology of the project, which establishes a major step forward in the construction trades and offers practical assistance to engineers, supervisors, and technical experts in settling construction problems.
Keywords: building, reinforced concrete, expertise, stone structures
Procedia PDF Downloads 336
2525 A Randomized Control Trial Intervention to Combat Childhood Obesity in Negeri Sembilan: The Hebat! Program
Authors: Siti Sabariah Buhari, Ruzita Abdul Talib, Poh Bee Koon
Abstract:
This study aims to develop and evaluate an intervention to improve the eating habits, active lifestyle, and weight status of overweight and obese children in Negeri Sembilan. The H.E.B.A.T! Program involved children, parents, and schools, and focused on behaviour and environment modification to achieve its goal. The intervention consists of the H.E.B.A.T! Camp, a parents' workshop, and school-based activities. A total of 21 children from the intervention school and 22 children from the control school, all with a BMI-for-age Z-score ≥ +1SD, participated in the study. The mean age of subjects was 10.8 ± 0.3 years. Four phases were included in the development of the intervention. Evaluation of the intervention was conducted through process, impact, and outcome evaluation. The process evaluation found that the intervention program was implemented successfully, with minimal modification and without any technical problems. Impact and outcome evaluation was assessed based on dietary intake, average step counts, BMI-for-age z-score, body fat percentage, and waist circumference at pre-intervention (T0), post-intervention 1 (T1), and post-intervention 2 (T2). There was a significant reduction in energy (14.8%) and fat (21.9%) intakes (p < 0.05) at post-intervention 1 (T1) in the intervention group. Controlling for sex as a covariate, there was a significant intervention effect on average step counts, BMI-for-age z-score, and waist circumference (p < 0.05). In conclusion, the intervention made an impact on positive behavioural intentions and improved the weight status of the children. It is expected that the HEBAT! Program could be adopted and implemented by the government and private sector, as well as by policy-makers, in formulating childhood obesity interventions.
Keywords: childhood obesity, diet, obesity intervention, physical activity
Procedia PDF Downloads 292
2524 In-Silico Fusion of Bacillus Licheniformis Chitin Deacetylase with Chitin Binding Domains from Chitinases
Authors: Keyur Raval, Steffen Krohn, Bruno Moerschbacher
Abstract:
Chitin, the biopolymer of N-acetylglucosamine, is the most abundant biopolymer on the planet after cellulose. Industrially, chitin is isolated and purified from the shell residues of shrimps. A deacetylated derivative of chitin, chitosan, has more market value and more applications than the parent polymer, owing to its solubility and overall cationic charge. On an industrial scale, this deacetylation is performed chemically using alkalis such as sodium hydroxide. The reaction is hazardous to the environment owing to its negative impact on the marine ecosystem. A greener alternative is the enzymatic process: in nature, native chitin is converted to chitosan by chitin deacetylase (CDA). On the industrial scale, however, this enzymatic conversion is hampered by the crystallinity of chitin. The enzymatic action requires the substrate, i.e., chitin, to be soluble, which is technically difficult and energy-consuming. In this project, we wanted to address this shortcoming of CDA. To this end, we have modeled a fusion protein combining CDA with an auxiliary protein, the main aim being to increase the accessibility of the enzyme toward crystalline chitin. Similar fusion work with chitinases had improved their catalytic ability toward insoluble chitin. In the first step, suitable partners were sought in the Protein Data Bank (PDB), where the domain architectures were examined. The next step was to create models of the fused product using various in silico techniques. The models were created with MODELLER and evaluated for properties such as the energy or the impairment of the binding sites. A fusion PCR has been designed based on the linker sequences generated by MODELLER and will be tested for its activity toward insoluble chitin.
Keywords: chitin deacetylase, modeling, chitin binding domain, chitinases
Procedia PDF Downloads 242
2523 Sports Business Services Model: A Research Model Study in Reginal Sport Authority of Thailand
Authors: Siriraks Khawchaimaha, Sangwian Boonto
Abstract:
The Sport Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sports, both professional and amateur, and their athletes in competitions, administered under government policy by government officers. All financial flows, whether cash inflows or cash outflows, are therefore strictly committed to the government budget and limited to projects planned at least 12 to 16 months ahead of reality, resulting in inefficiencies in sport events, administration, and competitions. In order to remain competitive in sports challenges around the world, SAT needs its own sports business services model for each stadium, region, and set of athlete competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study is carried out in each of the 10 regional stadiums to detail the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design comprises, firstly, the evaluation of external factors: the hardware, i.e., competition and practice stadiums, playgrounds, facilities, and equipment. Secondly, it examines the software: the organization structure, staff and management, administrative model, rules, and practices, in addition to budget allocation and budget administration with operating and expenditure plans. The third step identifies issues and limitations that require an action plan for further development and support, or the discontinuation of unskilled sports. In the final step, based on the HMK model and the business model canvas of Alexander O. and Yves P. (2010), a template Sports Business Services Model is generated for each of SAT's 10 regional stadiums.
Keywords: HMK model, not for profit organization, sport business model, sport services model
Procedia PDF Downloads 307
2522 Aluminum Matrix Composites Reinforced by Glassy Carbon-Titanium Spatial Structure
Authors: B. Hekner, J. Myalski, P. Wrzesniowski
Abstract:
This study presents aluminum matrix composites reinforced by glassy carbon (GC) and titanium (Ti). In the first step, the heterophase (GC+Ti) spatial form of the reinforcement (similar to a skeleton) was obtained via our own method: a polyurethane foam with a spatial, open-cell structure, covered by a suspension of Ti particles in phenolic resin, was pyrolyzed. In the second step, the prepared heterogeneous foams were infiltrated with aluminium alloy. The manufactured composites are intended for industrial application, especially as materials used in the tribological field. From this point of view, the glassy carbon was applied to stabilize the coefficient of friction at the required value of 0.6 and to reduce wear. Furthermore, wear can be limited by the titanium phase, which exhibits high mechanical properties. Moreover, fabricating a thin titanium layer on the carbon skeleton reduces contact between the aluminium alloy and the carbon, and thus the creation of an aluminium carbide phase. The main modification, however, involves manufacturing the reinforcement in the form of a 3D skeleton foam. This kind of reinforcement offers a few important advantages compared to the classical particle form of reinforcement: the possibility of controlling the homogeneity of the reinforcement phase in the composite material; a simple composite manufacturing technique (infiltration); the possibility of applying the reinforcement only in the required places of the material; strict control of the phase composition; and high-quality bonding between the components of the material. This research is funded by NCN under grant UMO-2016/23/N/ST8/00994.
Keywords: metal matrix composites, MMC, glassy carbon, heterophase composites, tribological application
Procedia PDF Downloads 118
2521 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism
Authors: Yuri S. Djikaev
Abstract:
A large subset of aqueous aerosols can be initially (immediately upon formation) coated with various organic amphiphilic compounds whereof the hydrophilic moieties are attached to the aqueous aerosol core while the hydrophobic moieties are exposed to the air thus forming a hydrophobic coating thereupon. We study the thermodynamics of water condensation on such an aerosol whereof the hydrophobic organic coating is being concomitantly processed by chemical reactions with atmospheric reactive species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from hydrophobic moieties of surface organics (first step), the resulting radicals being quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Kohler activation rather than via nucleation. The model allows one to determine the threshold parameters necessary for their Kohler activation. Numerical results also corroborate previous suggestions that one can neglect some details of aerosol chemical composition in investigating aerosol effects on climate.
Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Kohler activation, cloud droplets
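The Kohler-activation threshold the abstract refers to can be sketched numerically with the classical Kohler curve (Kelvin curvature term minus Raoult solute term). This is a minimal sketch under stated assumptions: the constants, temperature, and solute amount are illustrative, and the simple classical form is used rather than the paper's full free-energy model.

```python
import math

# Illustrative physical constants (SI units)
SIGMA = 0.072      # surface tension of water, N/m
M_W = 0.018        # molar mass of water, kg/mol
RHO_W = 1000.0     # density of liquid water, kg/m^3
R_GAS = 8.314      # universal gas constant, J/(mol K)

def kohler_saturation(r, n_solute, T=288.0):
    """Equilibrium saturation ratio over a solution droplet of radius r (m)
    containing n_solute moles of dissolved material:
    S(r) = exp(a/r - b/r^3), Kelvin term minus Raoult term."""
    a = 2.0 * SIGMA * M_W / (R_GAS * T * RHO_W)          # curvature (Kelvin)
    b = 3.0 * n_solute * M_W / (4.0 * math.pi * RHO_W)   # solute (Raoult)
    return math.exp(a / r - b / r**3)

def critical_point(n_solute, T=288.0):
    """Radius and supersaturation at the Kohler-curve maximum
    (r_c = sqrt(3b/a)); droplets pushed past this point activate
    and grow freely into cloud droplets."""
    a = 2.0 * SIGMA * M_W / (R_GAS * T * RHO_W)
    b = 3.0 * n_solute * M_W / (4.0 * math.pi * RHO_W)
    r_c = math.sqrt(3.0 * b / a)
    s_c = math.exp(math.sqrt(4.0 * a**3 / (27.0 * b)))
    return r_c, s_c
```

For a solute content of 1e-18 mol, this sketch gives a critical radius of roughly 0.1 µm and a critical supersaturation below 1%, the order of magnitude typical of cloud condensation nuclei.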
Procedia PDF Downloads 395
2520 Reinforced Concrete Foundation for Turbine Generators
Authors: Siddhartha Bhattacharya
Abstract:
Steam turbine-generators (STG) and combustion turbine-generators (CTG) are used in almost all modern petrochemical, LNG, and power plant facilities. The reinforced concrete table-top foundations required to support these heavy, high-speed rotating machines are among the most critical and challenging structures on any industrial project. The paper illustrates, through a practical example, the step-by-step procedure adopted in designing a table-top foundation supported on piles for a steam turbine generator with an operating speed of 60 Hz. A finite element model of the table-top foundation is generated in ANSYS. Piles are modeled as spring-damper elements (COMBIN14). Basic loads are adopted in the analysis and design of the foundation based on the vendor requirements, industry standards, and relevant ASCE and ACI code provisions. Static serviceability checks are performed with the help of the Misalignment Tolerance Matrix (MTM) method, in which the percentage of misalignment at a given bearing due to displacement at another bearing is calculated and kept within the criteria stipulated by the vendor, so that the machine rotor can sustain the stresses developed due to this misalignment. Dynamic serviceability checks are performed through modal and forced vibration analysis, where the foundation is checked for resonance and allowable amplitudes as stipulated by the machine manufacturer. The reinforced concrete design of the foundation is performed by calculating the axial force, bending moment, and shear at each of the critical sections; these values are calculated through area integrals of the element stresses at the critical locations. Design is done as per ACI 318-05.
Keywords: steam turbine generator foundation, finite element, static analysis, dynamic analysis
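The MTM-style static serviceability check described above can be sketched as a small matrix computation. This is an illustrative sketch only, not the paper's actual procedure: the influence matrix, displacement values, and allowables would in practice come from the ANSYS model and the vendor, and the function name is an assumption.

```python
import numpy as np

def mtm_check(bearing_displacements, influence_matrix, allowables):
    """Misalignment Tolerance Matrix (MTM) style check (illustrative):
    entry [i, j] of the influence matrix represents the fraction of a
    unit displacement at bearing j that appears as misalignment at
    bearing i. Every resulting misalignment must stay within the
    vendor-stipulated allowable for the rotor to sustain the stresses."""
    misalignment = influence_matrix @ np.asarray(bearing_displacements)
    utilization = np.abs(misalignment) / np.asarray(allowables)
    return misalignment, bool(np.all(utilization <= 1.0))
```

A foundation layout passes the check only if the boolean is true for every static load combination considered.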
Procedia PDF Downloads 297
2519 Massive Open Online Course about Content Language Integrated Learning: A Methodological Approach for Content Language Integrated Learning Teachers
Authors: M. Zezou
Abstract:
This paper focuses on the design of a Massive Open Online Course (MOOC) about Content and Language Integrated Learning (CLIL), and more specifically about how teachers can use CLIL as an educational approach that also incorporates technology in their teaching. All four weeks of the MOOC will be presented, and a step-by-step analysis of each lesson will be offered. Additionally, the paper includes detailed lesson plans for CLIL lessons, with proposed CLIL activities and games in which technology plays a central part. The MOOC is structured according to certain criteria in order to ensure success, as well as a positive experience for learners completing the MOOC. It is addressed to all language teachers who would like to implement CLIL in their teaching. In other words, it presents the methodology that needs to be followed to successfully carry out a CLIL lesson and achieve the learning objectives set at the beginning of the course. Firstly, it is important to give the definitions of MOOCs and LMOOCs, to explore the difference between a structure-based MOOC (xMOOC) and a connectivist MOOC (cMOOC), and to present the criteria of a successful MOOC. Moreover, the notion of CLIL will be explored, as it is necessary to fully understand this concept before moving on to the design of the MOOC. Next, the four weeks of the MOOC will be introduced and lesson plans will be presented: the type of activities, the aims of each activity, and the methodology that teachers have to follow. Emphasis will be placed on the role of technology in foreign language learning and on the ways in which we can involve technology in teaching a foreign language. Final remarks will be made, and a summary of the main points will be offered at the end.
Keywords: CLIL, cMOOC, lesson plan, LMOOC, MOOC criteria, MOOC, technology, xMOOC
Procedia PDF Downloads 194
2518 Enhancing Single Channel Minimum Quantity Lubrication through Bypass Controlled Design for Deep Hole Drilling with Small Diameter Tool
Authors: Yongrong Li, Ralf Domroes
Abstract:
Due to significant energy savings, enablement of higher machining speeds, and environmentally friendly features, Minimum Quantity Lubrication (MQL) has been used efficiently for many machining processes. However, in the deep hole drilling field (small tool diameter D < 5 mm) with long tools (length L > 25xD), a single channel MQL system has always been a bottleneck. Single channel MQL, based on the Venturi principle, suffers from insufficient oil quantity caused by the drop in pressure difference during the deep hole drilling process. In this paper, a system concept based on a bypass design is explored for its ability to dynamically reach the required pressure difference between the air inlet and the inside of the aerosol generator, so that the volume of oil demanded by deep hole drilling can be generated and delivered to the tool tips. The system concept has been investigated in static and dynamic laboratory testing. In the static test, the oil volumes with and without bypass control were measured, showing a potential oil quantity increase of up to 1000%. A spray pattern test demonstrated the differences in aerosol particle size, aerosol distribution, and reaction time between the single channel and the bypass-controlled single channel MQL systems. A dynamic trial machining test of deep hole drilling (drill tool D = 4.5 mm, L = 40xD) was carried out with the proposed system on AlSi7Mg, a difficult-to-machine material. The tool wear along 100 meters of drilling was tracked and analyzed. The results show that single channel MQL with bypass control can overcome the limitation and enhance deep hole drilling with a small tool. The optimized combination of inlet air pressure and bypass control results in high-quality oil delivery to the tool tips with a uniform and continuous aerosol flow.
Keywords: deep hole drilling, green production, Minimum Quantity Lubrication (MQL), near dry machining
Procedia PDF Downloads 206
2517 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
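The guess, slip, and learn parameters the title alludes to enter the standard BKT update, which can be sketched as follows. The parameter values are illustrative defaults, not the paper's estimates.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """One Bayesian Knowledge Tracing step: condition the mastery
    estimate on the observed response (Bayes' rule with guess/slip),
    then apply the learning transition."""
    if correct:
        # P(mastered | correct response)
        cond = (p_mastery * (1 - p_slip)) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        # P(mastered | incorrect response)
        cond = (p_mastery * p_slip) / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # Chance of learning the skill on this opportunity
    return cond + (1 - cond) * p_learn

def trace(observations, p_init=0.3, **params):
    """Run BKT over a sequence of correct (True) / incorrect (False)
    observations, returning the mastery estimate after each step."""
    p = p_init
    history = [p]
    for obs in observations:
        p = bkt_update(p, obs, **params)
        history.append(p)
    return history
```

With scant observations, the trajectory produced by `trace` depends heavily on `p_init` and the guess/slip values, which is exactly where the study's self-assessments are proposed to substitute for estimated parameters.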
Procedia PDF Downloads 174
2516 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology lets us determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
Procedia PDF Downloads 231
2515 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform
Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy
Abstract:
A bio-sensing method based on the plasmonic properties of gold nano-islands has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-Visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to their concentration. Exosomes transport cargoes of molecules and genetic material to proximal and distal cells. At present, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not practical to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. Instead of antibodies, the new sensing protocol makes use of a specially synthesized polypeptide (Vn96) to capture and quantify exosomes from different media by binding the heat shock proteins from exosomes. The protocol was established and optimized on a glass substrate in order to facilitate the next stage, namely its transfer to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing, and the optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
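Measuring the LSPR band position after each protocol step amounts to locating the absorbance maximum in the recorded UV-Vis spectrum. A minimal sketch with a synthetic Gaussian band; the band centre, width, and baseline are invented for illustration:

```python
import math

def lspr_peak(wavelengths, absorbance):
    """Return the wavelength of maximum absorbance, i.e. the LSPR band position."""
    i = max(range(len(absorbance)), key=absorbance.__getitem__)
    return wavelengths[i]

# Synthetic spectrum: a Gaussian band centred at 540 nm on a flat baseline.
wl = list(range(400, 701))  # wavelengths in nm
ab = [0.05 + 0.8 * math.exp(-((w - 540) ** 2) / (2 * 25 ** 2)) for w in wl]
```

Tracking how this peak wavelength moves after each binding step gives the red-shift that the protocol uses as its sensing signal.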
The microfluidic device designed for sensing exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, and exosomes can thereby be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes of different origins.
Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing
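The reported proportionality between the LSPR shift and exosome concentration suggests a simple linear calibration. The sketch below fits a least-squares line to hypothetical calibration points and inverts it to quantify an unknown sample; all numerical values are invented, not measured data from the paper:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ~ slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical calibration: known exosome concentrations (arbitrary units)
# vs the measured red-shift of the Au LSPR band (nm).
conc  = [1.0, 2.0, 3.0, 4.0]
shift = [2.5, 4.5, 6.5, 8.5]
slope, intercept = fit_line(conc, shift)

def quantify(measured_shift):
    """Invert the calibration line to estimate an unknown concentration."""
    return (measured_shift - intercept) / slope
```

A measured shift is then converted to a concentration by reading it off the fitted line, which is the standard way a proportional plasmonic response is turned into a quantitative assay.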
Procedia PDF Downloads 174