Search results for: step leaching
2559 Surface Functionalization Strategies for the Design of Thermoplastic Microfluidic Devices for New Analytical Diagnostics
Authors: Camille Perréard, Yoann Ladner, Fanny D'Orlyé, Stéphanie Descroix, Vélan Taniga, Anne Varenne, Cédric Guyon, Michael Tatoulian, Frédéric Kanoufi, Cyrine Slim, Sophie Griveau, Fethi Bedioui
Abstract:
The development of micro total analysis systems is of major interest for contaminant and biomarker analysis. As a lab-on-chip integrates all steps of an analysis procedure in a single device, analysis can be performed in an automated format with reduced time and cost, while maintaining performance comparable to that of conventional chromatographic systems. Moreover, these miniaturized systems are compatible with both field work and glovebox manipulations. This work aims at developing an analytical microsystem for trace and ultra-trace quantitation in complex matrices. The strategy consists of integrating a sample pretreatment step within the lab-on-chip through a confinement zone where selective ligands are immobilized for target extraction and preconcentration. Aptamers were chosen as selective ligands because of their high affinity for all types of targets (from small ions to viruses and cells) and their ease of synthesis and functionalization. This integrated target extraction and concentration step will be followed in the microdevice by an electrokinetic separation step and on-line detection. Polymers consisting of cyclic olefin copolymer (COC) or fluoropolymer (Dyneon THV) were selected as they are easy to mold, transparent in the UV-visible range, and highly resistant to solvents and extreme pH conditions. However, because of their low chemical reactivity, surface treatments are necessary. For the design of this miniaturized diagnostic device, we aimed at modifying the microfluidic system at two scales: (1) on the entire surface of the microsystem, to control the surface hydrophobicity (so as to avoid any sample wall adsorption) and the fluid flows during electrokinetic separation, or (2) locally, so as to immobilize selective ligands (aptamers) on restricted areas for target extraction and preconcentration.
We developed different novel strategies for the surface functionalization of COC and Dyneon, based on plasma, chemical, and/or electrochemical approaches. In a first approach, plasma-induced immobilization of brominated derivatives was performed on the entire surface. Further substitution of the bromine by an azide functional group led to covalent immobilization of ligands through a “click” chemistry reaction between azides and terminal alkynes. COC and Dyneon materials were characterized at each step of the surface functionalization procedure by various complementary techniques (contact angle, XPS, ATR) to evaluate the quality and homogeneity of the functionalization. With the objective of local (micrometric-scale) aptamer immobilization, we developed an original electrochemical strategy on an engraved Dyneon THV microchannel. Through local electrochemical carbonization followed by adsorption of azide-bearing diazonium moieties and covalent linkage of alkyne-bearing aptamers through a click chemistry reaction, typical dimensions of the immobilization zones reached the 50 µm range. Other functionalization strategies, such as sol-gel encapsulation of aptamers, are currently being investigated and may also be suitable for the development of the analytical microdevice. The development of these functionalization strategies is the first crucial step in the design of the entire microdevice. These strategies allow the grafting of a large number of molecules for the development of new analytical tools in various domains such as the environment or healthcare.
Keywords: alkyne-azide click chemistry (CuAAC), electrochemical modification, microsystem, plasma bromination, surface functionalization, thermoplastic polymers
Procedia PDF Downloads 442
2558 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem in both healthcare and other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; therefore, it is not possible for researchers and practitioners to manually extract and relate information from different written resources. So, the current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free-text. Therefore, we considered an unsupervised approach, where no annotated training data is necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools like Rapid Miner_v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We used an unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, on the above-extracted datasets to develop classifiers using the WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets.
The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
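The keyword-based unsupervised classification step described above can be sketched as follows. The seed keyword sets and class names below are illustrative assumptions, not the actual vocabulary generated by the authors' metathesaurus pipeline:

```python
import re
from collections import Counter

# Hypothetical seed keywords for three of the five classes described.
CLASSES = {
    "growth_and_development": {"growth", "development", "formation", "adhesion"},
    "drug_effects": {"antibiotic", "drug", "resistance", "treatment"},
    "radiation_effects": {"radiation", "uv", "irradiation"},
}

def tokenize(text):
    # lowercase alphabetic tokens only
    return re.findall(r"[a-z]+", text.lower())

def classify(doc):
    # score each class by the count of its seed keywords in the document,
    # then assign the highest-scoring class (no labeled training needed)
    counts = Counter(tokenize(doc))
    scores = {label: sum(counts[w] for w in seeds) for label, seeds in CLASSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

A real system would draw the seed terms from the extracted keyword lists rather than hard-coding them, but the scoring logic is the same.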
Procedia PDF Downloads 170
2557 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for the Extraction and Recovery of Metal from Ores
Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter
Abstract:
Over the past few decades, the demand for metals has increased significantly. This has led to a decline in high-grade ore over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Due to these challenges, the mining and metal industry has been forced to develop new technologies that are able to economically process and recover metallic values from low-grade ores, materials with metal content locked up in industrially processed residues (tailings and slag), and complex-matrix mineral deposits. Several methods to address these issues have been developed, among which are ionic liquids (IL), heap leaching, and bioleaching. Recently, the gas phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using an appropriate method to obtain the metal and regenerate and recover the organic extractant. Laboratory work on gas phase extraction has been conducted for the extraction and recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases, the extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all promises of a “greener” process.
Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment
Procedia PDF Downloads 133
2556 Changing Skills with the Transformation of Procurement Function
Authors: Ömer Faruk Ada, Türker Baş, M. Yaman Öztek
Abstract:
In this study, we aim to investigate the skills procurement professionals must possess in order to fulfill their developing and changing role completely. Market conditions, competitive pressure, and high financial costs make it more important than ever for organizations to be able to use resources more efficiently. Research shows that procurement expenses account for more than 50% of operating expenses. With the increasing profit impact of procurement, reviewing the position of the procurement function within the organization has become inevitable. This study is significant as it indicates the necessary skills that procurement professionals must have to keep in step with the transformation of procurement units from transaction-oriented to value-chain-oriented. In this study, the transformation of procurement is investigated from the perspective of procurement professionals, and we aim to answer the following research questions: • How do procurement professionals perceive their role within the organization? • How has their role changed and what challenges have they had to face? • What portfolio of skills do they believe will enable them to fulfill their role effectively? The first part of the study consists of a literature review investigating the changing role of procurement from different perspectives. In the second part, we present the results of in-depth interviews with 15 procurement professionals, using descriptive analysis as the methodology. In the light of these results, we classified procurement skills under operational, tactical, and strategic levels, and a Procurement Skills Framework has been developed. This study shows the differences between how professionals and organizations perceive purchasing. These differences in perception are considered an important barrier to procurement transformation.
Although having the necessary skills has a significant effect on procurement professionals' ability to fulfill their role completely and keep in step with the transformation of the procurement function, it is not the only factor; the degree of high-level management and organizational support also has a direct impact during this transformation.
Keywords: procurement skills, procurement transformation, strategic procurement, value chain
Procedia PDF Downloads 416
2555 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses
Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux
Abstract:
Hygiene of equipment in contact with users is an important issue in the railroad industry. The numerous cleanings required to eliminate bacteria and dirt are costly. In addition, contact parts are subject to daily mechanical stresses. It would therefore be of interest to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance to mechanical and chemical solicitations. Thus, a Ph.D. thesis co-financed by the Hauts-de-France region and the Maubeuge Val-de-Sambre conurbation authority has been underway since October 2017, based on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. To accomplish this task, a soft chemical route has been implemented to bring a lotus effect to metallic substrates. It involves nanometric zinc oxide synthesis in the liquid phase below 100°C. The originality here consists in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide was chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two methods of heating have been compared: conventional and microwave-assisted. The tested substrates are made of stainless steel to conform to transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples is applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds to make them grow as desired (nanorod-shaped) during the next step. To improve this adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long-chain carbonated organosilane to improve the superhydrophobic property of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (greater than 150°) and roll-off angles at different steps of the process will be presented.
The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis: the mortality rate is found to be four times that of a non-treated substrate. Photocatalytic experiments were carried out with different dye solutions in contact with treated samples under UV irradiation. Spectroscopic measurements allow the degradation times to be determined according to the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a set of amorphous/crystalline/amorphous layers, which have been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process-and-control coating with self-cleaning and antibacterial properties has been synthesized with satisfying surface structuration.
Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide
Procedia PDF Downloads 143
2554 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection
Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino
Abstract:
The portfolio selection problem includes the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. The portfolio selection problem can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching a consensus among group members. Indeed, due to the various diversities among experts, reaching consensus is not necessarily simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that his preferences are conditioned by subjective structures. The present work aims to investigate the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study aims to analyze the impact of the subjective structure of the decision-maker during the evaluation and selection phase of the alternatives. Therefore, the experimental framework is divided into three phases. In the first phase, experts are asked to evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytical Hierarchical Processes that define the weight each expert gives to all criteria with respect to the proposed alternatives. This step provides insight into how the decision maker's decision process develops, step by step, from goal analysis to alternative selection. The second phase includes the description of the decision maker's state through Markov chains.
In fact, the individual weights obtained in the first phase can be reviewed and described as transition weights from one state to another. Thus, with the construction of the individual transition matrices, the possible next state of each expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the individual states obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytical Hierarchical Process, and how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.
Keywords: analytical hierarchical process, consensus building, false consensus effect, Markov chains, portfolio selection problem
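The two quantitative tools combined above can be sketched briefly. The function below approximates AHP priority weights by column-normalized averaging (a common approximation of the principal eigenvector, not necessarily the exact procedure used in the study) and then takes one Markov transition step; the example matrices are illustrative:

```python
def ahp_weights(pairwise):
    # approximate the principal eigenvector of a pairwise comparison
    # matrix: normalize each column, then average across each row
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def next_state_distribution(state, transition):
    # one Markov step: multiply the current state distribution by the
    # row-stochastic transition matrix
    n = len(transition[0])
    return [sum(state[i] * transition[i][j] for i in range(len(state)))
            for j in range(n)]
```

For a perfectly consistent 2x2 comparison matrix [[1, 2], [0.5, 1]], the weights come out as 2/3 and 1/3, and the transition step shows how an expert's state can drift between rounds.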
Procedia PDF Downloads 93
2553 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to solve these drawbacks of SL approaches by combining the price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and proves its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
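The agent-environment interaction described above (continuous action as a target portfolio fraction, proportional transaction costs, reward per time step) can be sketched as a minimal single-asset environment. The single-asset simplification, the log-return reward, and the cost value are illustrative assumptions, not the paper's exact ten-indicator POMDP state:

```python
import math

class TradingEnv:
    """Minimal trading-environment sketch: a continuous action in [-1, 1]
    is the target fraction of portfolio value held in the asset, and the
    reward is the log return of portfolio value net of transaction costs."""

    def __init__(self, prices, cost=0.001, cash=1000.0):
        self.prices, self.cost = prices, cost
        self.t, self.cash, self.position = 0, cash, 0.0  # position in shares

    def step(self, action):
        price = self.prices[self.t]
        value = self.cash + self.position * price
        # re-allocate toward the target weight, paying proportional costs
        target_shares = max(-1.0, min(1.0, action)) * value / price
        traded = abs(target_shares - self.position) * price
        self.cash += (self.position - target_shares) * price - traded * self.cost
        self.position = target_shares
        self.t += 1
        new_value = self.cash + self.position * self.prices[self.t]
        reward = math.log(new_value / value)  # log return as reward
        done = self.t == len(self.prices) - 1
        return reward, done
```

A TD3 agent would observe an augmented state (prices, technical indicators, sentiment scores) and output the continuous action; here only the market mechanics are shown.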
Procedia PDF Downloads 178
2552 Synthesis and Characterization of Some Novel Carbazole Schiff Bases (OLED)
Authors: Baki Cicek, Umit Calisir
Abstract:
Carbazoles have been the subject of many studies from the 1960s to the present, and research still continues. In 1987, the first diode device was developed. Thanks to that study, light-emitting devices have been investigated, developed, and used in commercial applications. Nowadays, OLED (Organic Light Emitting Diode) technology is used in many electronic screens (mobile phones, computer monitors, televisions, etc.). Carbazoles have been the subject of much study as semiconductor materials. Although this technology is commonly and widely used, it is still at a development stage. Metal complexes of these compounds are used in pigment dyes because of their colored substances, in polymer technology, the medical industry, agriculture, the preparation of rocket fuel-oil, the determination of some biological events, etc. Besides all of these, work on Schiff base synthesis is going on intensely. In this study, some novel carbazole Schiff bases were synthesized starting from carbazole. For that purpose, carbazole was first alkylated. After purification, the N-substituted carbazole was nitrated to synthesize 3-nitro-N-substituted and 3,6-dinitro-N-substituted carbazoles. In the next step, the nitro group/groups were reduced to amines, and the products were purified using silica gel column chromatography. In the last step of our study, the synthesized 3,6-diamino-N-substituted carbazoles and 3-amino-N-substituted carbazoles were reacted with aldehydes in condensation reactions. The compounds 3-(imino-p-hydroxybenzyl)-N-isobutyl-carbazole, 3-(imino-2,3,4-trimethoxybenzene)-N-butylcarbazole, 3-(imino-3,4-dihydroxybenzene)-N-octylcarbazole, 3-(imino-2,3-dihydroxybenzene)-N-octylcarbazole and 3,6-di(α-imino-β-naphthol)-N-hexylcarbazole were synthesized. All synthesized compounds were characterized by FT-IR, 1H-NMR, 13C-NMR, and LC-MS.
Keywords: carbazole, carbazole Schiff base, condensation reactions, OLED
Procedia PDF Downloads 442
2551 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated approach to the design of timber structures that considers only standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e., the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e., 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
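For the standard fire case that the parametric results are compared against, the reduced cross-section method of EN 1995-1-2 can be sketched numerically. The sketch uses the code's notional charring rate for solid softwood (β_n = 0.8 mm/min), the fixed 7 mm zero strength layer built up linearly over the first 20 minutes, and a three-sided exposure geometry chosen here for illustration:

```python
def effective_depth(t_min, beta_n=0.8, d0=7.0):
    """EN 1995-1-2 reduced cross-section method (standard fire exposure).
    t_min: fire duration [min]; beta_n: notional charring rate [mm/min]
    (0.8 for solid softwood); d0: zero strength layer thickness [mm]."""
    d_char = beta_n * t_min           # notional char depth
    k0 = min(t_min / 20.0, 1.0)       # d0 builds up linearly over 20 min
    return d_char + k0 * d0

def effective_section(b, h, t_min):
    # beam exposed on three sides: char on both lateral faces and the bottom
    d_ef = effective_depth(t_min)
    return b - 2 * d_ef, h - d_ef
```

For a 200 x 400 mm softwood beam after 30 minutes of standard fire, the effective depth is 0.8 x 30 + 7 = 31 mm, leaving an effective section of 138 x 369 mm; for a parametric fire, the study above replaces both the charring rate and the 7 mm value.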
Procedia PDF Downloads 168
2550 A Critical Discourse Analysis of Protesters in the Debates of Al Jazeera Channel of the Yemeni Revolution
Authors: Raya Sulaiman
Abstract:
Critical discourse analysis investigates how discourse is used to abuse power relationships. Political debates constitute discourses which mirror aspects of ideologies. The Arab world has been one of the most unsettled zones in the world and has dominated global politics due to the Arab revolutions which started in 2010. This study aimed at uncovering the ideological intentions in the formulation and circulation of hegemonic political ideology in the TV political debates of the 2011-2012 Yemen revolution, and how ideology was used as a tool of hegemony. The study specifically examined the ideologies associated with the use of protesters as a social actor. The study data consisted of four debates (17350 words) from four live debate programs: The Opposite Direction, In Depth, Behind the News, and the Revolution Talk, staged on the Al Jazeera TV channel between 2011 and 2012. The data had already been transcribed by Al Jazeera online. Al Jazeera was selected for the study because it is the most popular TV network in the Arab world and has a strong presence, especially during the Arab revolutions; it has also been accused of inciting protests across the Arab region. Two debate sides were identified in the data: government and anti-government. The government side represented the president Ali Abdullah Saleh and his regime, while the anti-government side represented the gathering squares who demanded that the president 'step down'. The study analysed verbal discourse aspects of the debates using critical discourse analysis, specifically aspects from van Leeuwen's Social Actor Network model. This framework provides a step-by-step analysis model and analyses discourse from specific grammatical processes up to broader semantic issues. It also provides representative findings, since it considers discourse as representative of and reconstructed in social practice.
The study findings indicated that Al Jazeera and the anti-government side had similarities in terms of the ideological intentions related to the protesters: Al Jazeera victimized and incited the protesters, similarly to the anti-government side. Al Jazeera used assimilation, nominalization, and active role allocation as linguistic devices in order to realize its ideological intentions related to the protesters. Government speakers did not share the same ideological intentions with Al Jazeera. The findings also indicated that Al Jazeera had excluded the government from its debates, violating its slogan, 'the opinion and the other opinion'. This study implies the powerful role of discourse in shaping ideological media intentions and influencing the media audience.
Keywords: Al Jazeera network, critical discourse analysis, ideology, Yemeni revolution
Procedia PDF Downloads 224
2549 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor
Authors: Yash Jain
Abstract:
The US Center for Disease Control has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a human is correctly wearing a mask, incorrectly wearing a mask, or not wearing a mask at all. Two distinct datasets from the open-source website Kaggle were utilized to train a mask detection network. The first dataset was titled 'Face Mask Detection', and the second, titled 'Face Mask Dataset (YOLO Format)', provided the data in YOLO format so that the TinyYoloV3 model could be trained. Based on the data from Kaggle, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage classifier first identifies distinct faces within the image, and then a classifier detects the state of the mask on each face, i.e., whether it is worn correctly, incorrectly, or not at all. The TinyYoloV3 model was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the model trained using TinyYoloV3 real-time detection had a mean average precision (mAP) of 59%. Overall, both models were able to correctly classify the no mask, mask, and incorrectly worn mask scenarios.
Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier
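The two-stage structure described (face detection, then per-face mask-state classification) can be sketched as a generic pipeline. The callables below stand in for the actual face detector and trained CNN, and the label names are illustrative assumptions:

```python
# Stage 1 finds face bounding boxes; stage 2 classifies each face crop.
# Both stages are injected as callables so a real detector (e.g. a Haar
# cascade or TinyYoloV3) and a trained CNN could be plugged in later.
LABELS = ("no_mask", "mask_incorrect", "mask_correct")

def classify_frame(frame, detect_faces, classify_mask):
    results = []
    for (x, y, w, h) in detect_faces(frame):      # stage 1: face detection
        # crop the face region; frame is any 2-D row-indexable image
        crop = [row[x:x + w] for row in frame[y:y + h]]
        results.append(((x, y, w, h), classify_mask(crop)))  # stage 2
    return results
```

Running this per video frame gives the real-time monitor loop; the per-face labels can then be drawn back onto the frame.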
Procedia PDF Downloads 163
2548 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice
Authors: Tarek Abdoun, Ricardo Dobry
Abstract:
This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the state of the art (SOA) and the state of practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, helped by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, in a more speculative way, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.
Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts
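The Simplified Procedure underlying these charts computes a cyclic stress ratio from peak ground acceleration and in-situ stresses. A minimal sketch is below; the Liao-Whitman depth reduction factor is an assumption here, since the abstract does not restate the formula:

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss Simplified Procedure:
    CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v') * r_d
    a_max_g: peak ground acceleration as a fraction of g
    sigma_v, sigma_v_eff: total and effective vertical stress (same units)
    r_d: stress reduction factor (Liao-Whitman linear approximation)."""
    if depth_m <= 9.15:
        r_d = 1.0 - 0.00765 * depth_m
    else:
        r_d = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d
```

Plotting this CSR against field Vs1 or penetration resistance for case histories is what produces the chart boundaries that the paper reinterprets as curves of constant (or variable) cyclic strain.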
Procedia PDF Downloads 297
2547 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for SPV (Solar Photovoltaic) systems along with an MPPT (Maximum Power Point Tracking) technique. An efficient grid synchronization technique offers reliable detection of the various components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system into compliance with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid to maintain the amplitude, phase and frequency parameters as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current and power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory. Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
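The three-zone variable step size P&O scheme described above can be sketched as follows. The zone thresholds on |dPpv/dVpv| and the step sizes are illustrative assumptions, not values from the paper:

```python
# Sketch of a zone-based variable-step Perturb and Observe MPPT update.
# Zone boundaries and step sizes below are illustrative assumptions.

Z0_LIMIT = 0.5   # |dP/dV| below this -> zone 0 (near the MPP)
Z1_LIMIT = 5.0   # |dP/dV| below this -> zone 1, otherwise zone 2
FINE_STEP, MID_STEP, LARGE_STEP = 0.1, 0.5, 1.0  # volts

def perturb_and_observe(v_prev, p_prev, v_now, p_now):
    """Return the next reference voltage for the DC/DC converter."""
    dv = v_now - v_prev
    dp = p_now - p_prev
    slope = dp / dv if dv != 0 else 0.0
    # Choose the step size from the zone of |dP/dV|
    mag = abs(slope)
    if mag < Z0_LIMIT:
        step = FINE_STEP      # zone 0: close to the MPP, track finely
    elif mag < Z1_LIMIT:
        step = MID_STEP       # zone 1: moderate distance from the MPP
    else:
        step = LARGE_STEP     # zone 2: far from the MPP, move fast
    # Classic P&O direction rule: keep moving if power increased
    direction = 1.0 if dp * dv >= 0 else -1.0
    return v_now + direction * step
```

On a concave power-voltage curve this takes large steps far from the maximum power point and settles into a small oscillation around it, which is the stated motivation for the zone split.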
Procedia PDF Downloads 594
2546 Progressive Collapse of Cooling Towers
Authors: Esmaeil Asadzadeh, Mehtab Alam
Abstract:
Well documented records of past structural failures reveal that progressive collapse is one of the major causes of dramatic human loss and economic consequences. Progressive collapse is the failure mechanism in which a structure fails gradually due to the sudden removal of structural elements, which results in excessive redistributed loads on the remaining ones. This sudden removal may be caused by any sudden loading resulting from a local explosion, impact loading or terrorist attack. Hyperbolic thin-walled concrete shell structures, an important part of nuclear and thermal power plants, are always prone to such terrorist attacks. In concrete structures, gradual failure takes place through the generation of initial cracks and their propagation in the supporting columns and the tower shell, leading to the collapse of the entire structure. In this study, the mechanism of progressive collapse for such high-rise towers is simulated employing the finite element method. The aim of this study is to provide clear conceptual step-by-step descriptions of various procedures for progressive collapse analysis using commercially available finite element structural analysis software, with explanations clear enough to be readily understood and used by practicing engineers. The study is carried out through the following procedures: 1. explanations of modeling, simulation and analysis procedures, including input screen snapshots; 2. interpretation of the results and discussions; 3. conclusions and recommendations. Keywords: progressive collapse, cooling towers, finite element analysis, crack generation, reinforced concrete
Procedia PDF Downloads 481
2545 Soil Salinity from Wastewater Irrigation in Urban Greenery
Authors: H. Nouri, S. Chavoshi Borujeni, S. Anderson, S. Beecham, P. Sutton
Abstract:
The potential risk of salt leaching through wastewater irrigation is of concern to most local governments and city councils. Despite the necessity of salinity monitoring and management in urban greenery, most attention has been on agricultural fields. This study was designed to investigate the capability and feasibility of monitoring and predicting soil salinity using near sensing and remote sensing approaches, based on EM38 surveys and high-resolution multispectral imagery from WorldView-3. Veale Gardens within the Adelaide Parklands was selected as the experimental site. The results of the near sensing investigation were validated by testing soil salinity samples in the laboratory. Over 30 band combinations forming salinity indices were tested using image processing techniques. The outcomes of the remote sensing and near sensing approaches were compared to examine whether remotely sensed salinity indicators could map and predict the spatial variation of soil salinity through a potential statistical model. Statistical analysis was undertaken using the Stata 13 statistical package on over 52,000 points. Several regression models were fitted to the data, and mixed effect modelling was selected as the most appropriate, as it takes into account the systematic observation-specific unobserved heterogeneity. Results showed that SAVI (Soil Adjusted Vegetation Index) was the only salinity index that could be considered a predictor for soil salinity, but further investigation is needed. However, near sensing was found to be a rapid, practical and realistically accurate approach for salinity mapping of heterogeneous urban vegetation. Keywords: WorldView3, remote sensing, EM38, near sensing, urban green spaces, green smart cities
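SAVI, the index found predictive above, has a standard closed form (Huete, 1988): SAVI = (NIR − Red)/(NIR + Red + L) × (1 + L), with a soil adjustment factor L commonly set to 0.5. A minimal sketch of computing it per pixel follows; the mapping of WorldView-3 bands to the NIR and red inputs is not specified here and is left as an assumption:

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index (Huete, 1988):
    SAVI = (NIR - Red) / (NIR + Red + L) * (1 + L).
    nir and red are reflectance values (scalars or arrays)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + L) * (1.0 + L)
```

For example, a pixel with NIR reflectance 0.5 and red reflectance 0.1 yields SAVI ≈ 0.545; equal reflectances yield 0.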
Procedia PDF Downloads 162
2544 Molecular Simulation of NO, NH3 Adsorption in MFI and H-ZSM5
Authors: Z. Jamalzadeh, A. Niaei, H. Erfannia, S. G. Hosseini, A. S. Razmgir
Abstract:
Due to the development of industry, the emission of pollutants such as NOx, SOx and CO2 has rapidly increased. Generally, NOx refers to the mono-nitrogen oxides NO and NO2, which are among the most important atmospheric contaminants. Hence, controlling the emission of nitrogen oxides is environmentally urgent. Selective Catalytic Reduction (SCR) of NOx is one of the most common techniques for NOx removal, in which zeolites have wide application due to their high performance. In zeolitic processes, the catalytic reaction occurs mostly in the pores; therefore, investigating the adsorption phenomena of the molecules, in order to gain insight into and understand the catalytic cycle, is important. Hence, in the current study, molecular simulation is applied to study the adsorption phenomena in nanocatalysts used in the SCR of NOx process. The effect of cation addition to the support on the catalysts' behavior during the adsorption step was explored by Monte Carlo (MC) simulation. A simulation time of 1 ns with a 1 fs time step, the COMPASS27 force field and a cutoff radius of 12.5 Å were applied for the performed runs. It was observed that the adsorption capacity increases in the presence of cations. The sorption isotherms demonstrated type I isotherm behavior, and sorption capacity diminished with increasing temperature, whereas an increase was observed at high pressures. Besides, NO showed higher sorption capacity than NH3 in H-ZSM5. In this respect, the energy distributions signified that the molecules adsorb at just one sorption site on the catalyst, and the sorption energy of NO was stronger than that of NH3 in H-ZSM5. Furthermore, the isosteric heats of sorption showed nearly the same values for the two molecules; however, they indicated stronger interactions of NO with the H-ZSM5 zeolite compared to the isosteric heat of NH3, which was low in value. Keywords: Monte Carlo simulation, adsorption, NOx, ZSM5
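The type I sorption isotherms reported above are classically described by the Langmuir model, in which loading rises toward a plateau q_max (monolayer capacity) as pressure increases. A minimal sketch with illustrative parameters, not values fitted to the simulation data:

```python
import numpy as np

def langmuir(p, q_max, K):
    """Langmuir (type I) isotherm: q = q_max * K * p / (1 + K * p).
    p: pressure, q_max: monolayer capacity, K: affinity constant.
    Loading rises monotonically and saturates at q_max."""
    p = np.asarray(p, dtype=float)
    return q_max * K * p / (1.0 + K * p)
```

With q_max = 2.0 and K = 0.5 (illustrative), loading at very high pressure approaches the plateau value 2.0, reproducing the type I shape the abstract describes.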
Procedia PDF Downloads 378
2543 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). However, the MCG signal is buried in noise, and extracting it is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is applied to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the Majorization-Minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance. Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
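The baseline method that the paper improves on, first-order TV regularization solved by majorization-minimization, can be sketched for a 1-D signal as follows (dense linear algebra for clarity; the paper's high-order, adaptively constrained variant is not reproduced here):

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=30, eps=1e-8):
    """1-D total variation denoising by majorization-minimization.
    Minimizes 0.5*||y - x||^2 + lam*||D x||_1, where D is the
    first-difference matrix. eps guards against division by zero."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    D = np.diff(np.eye(N), axis=0)   # (N-1, N) first-difference matrix
    DDt = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        # Majorize |Dx| at the current iterate: Lam = diag(|D x|)
        Lam = np.diag(np.abs(D @ x) + eps)
        # Closed-form minimizer of the quadratic majorizer:
        # x = y - D^T ((1/lam) Lam + D D^T)^{-1} D y
        x = y - D.T @ np.linalg.solve(Lam / lam + DDt, D @ y)
    return x
```

Each iteration replaces the non-smooth TV term with a quadratic upper bound and solves the resulting linear system, which is the transformation into a minimization problem the abstract describes. On a piecewise-constant signal this recovers the flat segments, and the residual "step effect" on smooth ramps is exactly what the proposed high-order variant targets.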
Procedia PDF Downloads 153
2542 Evaluation of Synthesis and Structure Elucidation of Some Benzimidazoles as Antimicrobial Agents
Authors: Ozlem Temiz Arpaci, Meryem Tasci, Hakan Goker
Abstract:
Benzimidazole, a structural isostere of the indole and purine nuclei that can interact with biopolymers, can be regarded as a master key scaffold. Benzimidazole compounds are therefore important fragments in medicinal chemistry because of their wide range of biological activities, including antimicrobial activity. We planned to synthesize some benzimidazole compounds as candidates for new antimicrobial drugs. In this study, we placed heterocyclic rings at the second position and an amidine group at the fifth position of the benzimidazole ring and synthesized the compounds using a multi-step procedure. For the synthesis of the compounds, as the first step, 4-chloro-3-nitrobenzonitrile was reacted with cyclohexylamine in dimethylformamide. Imidate esters (compound 2) were then prepared in absolute ethanol saturated with dry HCl gas. These imidate esters, which were not very stable, were converted to compound 3 by passing ammonia gas through the ethanol. Over a Pd/C catalyst, the nitro group was reduced to the amine group (compound 4). Finally, sodium metabisulfite addition products of various aldehyde derivatives were reacted to give compounds 5-20. Melting points were determined on a Buchi B-540 melting point apparatus in open capillary tubes and are uncorrected. Elemental analyses were done on a Leco CHNS 932 elemental analyzer. 1H-NMR and 13C-NMR spectra were recorded on a Varian Mercury 400 MHz spectrometer using DMSO-d6. Mass spectra were acquired on a Waters Micromass ZQ using the ESI(+) method. The structures were supported by spectral data: the 1H-NMR, 13C-NMR and mass spectra and the elemental analysis results agree with the proposed structures. Antimicrobial activity studies of the synthesized compounds are under investigation. Keywords: benzimidazoles, synthesis, structure elucidation, antimicrobial
Procedia PDF Downloads 156
2541 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle
Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad
Abstract:
Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and in geometry likewise. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square and the hexagon. Owing to its intrinsic nature, each of these geometric shapes has its own specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced and arranged in intricate combinations. In order to explain the principles of geometric interaction between the square and the equal triangle, the first definition step illustrates all types of their linear forces individually, and the second step illustrates those between them. In this analysis, angles are created from the intersections of their directions. All angles are categorized into groups, and the mathematical relationships among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable in designing architectural layers and developing plan designs. Keywords: angle, equal triangle, square, structural hierarchy
Procedia PDF Downloads 195
2540 The Metabolism of Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria
Authors: Yusuf U. Datti
Abstract:
It is becoming increasingly clear that the level of resource consumption now enjoyed in developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way as western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no energy metabolism studies in Nigeria; the majority of such studies have been done only in developed countries. While research in Nigeria concentrates on other aspects of sustainability such as water supply, sewage disposal, energy supply, energy efficiency and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and its impact on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria, describe measures to address the impact of existing energy sources, and suggest alternative renewable energy sources in Nigeria that will lower greenhouse gas emissions. This study will investigate the metabolism of energy in Nigeria through a three-part methodology. The first step involved selecting and defining the study area and some variables that affect energy output (time of year, stability of the country, income level, literacy rate and population).
The second step involves analyzing, categorizing and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing the effect the variables have on the environment. To ensure a representative study area, Africa's most populous country, with the continent's second biggest economy and a place among the world's largest oil producing countries, was selected. This is due to the understanding that countries with large economies and dense populations are ideal places to examine sustainability strategies; hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative effects on the environment. Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions and sustainability
Procedia PDF Downloads 183
2539 Adjustment of the Whole-Body Center of Mass during Trunk-Flexed Walking across Uneven Ground
Authors: Soran Aminiaghdam, Christian Rode, Reinhard Blickhan, Astrid Zech
Abstract:
Despite considerable studies on the impact of imposed trunk posture on human walking, less is known about such locomotion while negotiating changes in ground level. The aim of this study was to investigate the behavior of the vertical body center of mass (VBCOM) in response to a two-fold expected perturbation, namely alterations in body posture and in ground level. To this end, the kinematic data and ground reaction forces of twelve able-bodied participants were collected. We analyzed the VBCOM from the ground, determined by the body segmental analysis method relative to the laboratory coordinate system, at touchdown and toe-off instants during walking across uneven ground — characterized by a perturbation contact (a 10-cm visible drop) and pre- and post-perturbation contacts — in comparison to unperturbed level contact, while maintaining three postures (regular erect, ~30° and ~50° of trunk flexion from the vertical). The VBCOM was normalized to the distance between the greater trochanter marker and the lateral malleolus marker at the instant of touchdown. Moreover, we calculated the backward rotation during step-down as the difference between the maximum trunk angle in the pre-perturbation contact and the minimal trunk angle in the perturbation contact. Two-way repeated measures ANOVAs revealed contact-specific effects of posture on the VBCOM at touchdown (F = 5.96, p = 0.00). As indicated by the analysis of simple main effects, during unperturbed level and pre-perturbation contacts, no between-posture differences in the VBCOM at touchdown were found. In the perturbation contact, trunk-flexed gaits showed a significant increase in the VBCOM compared to the pre-perturbation contact. In the post-perturbation contact, the VBCOM showed a significant decrease in all gait postures relative to the preceding corresponding contacts, with no between-posture differences.
Main effects of posture revealed that the VBCOM at toe-off significantly decreased in trunk-flexed gaits relative to the regular erect gait. For the main effect of contact, the VBCOM at toe-off changed across perturbation and post-perturbation contacts compared to the unperturbed level contact. Furthermore, participants exhibited a backward trunk rotation during step-down, possibly to control the angular momentum of their whole body. A more pronounced backward trunk rotation (2- to 3-fold compared with level contacts) in trunk-flexed walking contributed to the observed elevated VBCOM during the step-down, which may have facilitated drop negotiation. These results may shed light on the interaction between posture and locomotion in able-bodied gait, and specifically on the behavior of the body center of mass during perturbed locomotion. Keywords: center of mass, perturbation, posture, uneven ground, walking
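The body segmental analysis method used above to locate the whole-body center of mass is, in essence, a mass-weighted average of the segment centers of mass, with the VBCOM then normalized by the trochanter-malleolus distance. A minimal sketch with made-up segment values; the study's anthropometric tables are not reproduced here:

```python
# Minimal sketch of the body segmental analysis method for the vertical
# whole-body center of mass. Segment mass fractions and positions in the
# usage example are illustrative, not the study's data.

def vertical_com(segments):
    """segments: iterable of (mass_fraction, vertical_com_position) pairs.
    Returns the mass-weighted vertical center of mass."""
    total_mass = sum(m for m, _ in segments)
    return sum(m * z for m, z in segments) / total_mass

def normalized_vbcom(com_height, trochanter_z, malleolus_z):
    """Normalize the VBCOM by the greater trochanter to lateral
    malleolus marker distance, as described in the abstract."""
    return com_height / (trochanter_z - malleolus_z)
```

For two equal-mass segments at heights 1.0 m and 0.5 m, the vertical COM is 0.75 m; normalizing by a 0.8 m trochanter-malleolus distance gives a dimensionless VBCOM of about 0.94.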
Procedia PDF Downloads 182
2538 Anesthetic Considerations for Carotid Endarterectomy: Prospective Study Based on Clinical Trials
Authors: Ahmed Yousef A. Al Sultan
Abstract:
Introduction: This review is based on clinical research studying the changes in middle cerebral artery velocity using Transcranial Doppler (TCD) and cerebral oxygen saturation using cerebral oximetry in patients undergoing carotid endarterectomy (CEA) surgery under local anesthesia (LA). Patients with or without neurological symptoms during the surgery took part in this study, which used the triple method of cerebral oximetry, transcranial Doppler and awake testing to detect any cerebral ischemic symptoms. Methods: About one hundred patients took part during their CEA surgeries under local anesthesia, using the triple assessment method mentioned above; patients requiring general anesthesia were excluded from the analysis. All data were recorded separately at eight surgery stages to serve this study. Results: In total, regional cerebral oxygen saturation (rSO2), middle cerebral artery (MCA) velocity and pulsatility index were significantly decreased during the carotid artery clamping step of CEA procedures on the targeted carotid side, with the most pronounced changes observed in MCA velocity. Discussion: Cerebral oxygen saturation and MCA velocity were significantly decreased during the clamping step of the procedures on the targeted side. The group with neurological symptoms during the procedures showed greater changes in rSO2 and MCA velocity than the group without neurological symptoms. Cerebral rSO2 and MCA velocity significantly increased directly after de-clamping of the internal carotid artery on the affected side. Keywords: awake testing, carotid endarterectomy, cerebral oximetry, Transcranial Doppler
Procedia PDF Downloads 169
2537 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as CCV (Color Coherence Vector) are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images thus reduces to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is taken into account to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images, in other words, which sub-region among the N*N candidate sub-regions to use to index images. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we have proposed some ideas for selecting the local histogram used to encode the image other than the local histogram having the lowest distance to the query histograms. Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
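The local-histogram indexation scheme discussed above (overlapping blocks, one histogram per block, pairwise Euclidean distances, lowest value used for ranking) can be sketched as follows; the grid size, 50% overlap and bin count are illustrative choices:

```python
import numpy as np

def local_histograms(img, n=3, bins=8):
    """Split a grayscale image into overlapping blocks (a nominal n x n
    grid with 50% overlap) and return one normalized histogram per block."""
    h, w = img.shape
    bh, bw = h // n, w // n                        # nominal block size
    step_h, step_w = max(bh // 2, 1), max(bw // 2, 1)
    hists = []
    for y in range(0, h - bh + 1, step_h):
        for x in range(0, w - bw + 1, step_w):
            block = img[y:y + bh, x:x + bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())        # normalize per block
    return np.array(hists)

def min_block_distance(img_a, img_b, n=3, bins=8):
    """Euclidean distance between every pair of local histograms of the
    two images; the lowest value is the one used for ranking."""
    ha = local_histograms(img_a, n, bins)
    hb = local_histograms(img_b, n, bins)
    d = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    return d.min()
```

An identical image pair yields distance 0, while a uniformly dark image against a uniformly bright one yields a strictly positive minimum, illustrating the ranking behavior the abstract questions.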
Procedia PDF Downloads 360
2536 Evaluating the Effectiveness of Plantar Sensory Insoles and Remote Patient Monitoring for Early Intervention in Diabetic Foot Ulcer Prevention in Patients with Peripheral Neuropathy
Authors: Brock Liden, Eric Janowitz
Abstract:
Introduction: Diabetic peripheral neuropathy (DPN) affects 70% of individuals with diabetes1. DPN causes a loss of protective sensation, which can lead to tissue damage and diabetic foot ulcer (DFU) formation2. These ulcers can result in infections and lower-extremity amputations of toes, the entire foot, and the lower leg. Even after a DFU is healed, recurrence is common, with 49% of DFU patients developing another ulcer within a year and 68% within 5 years3. This case series examines the use of sensory insoles and newly available plantar data (pressure, temperature, step count, adherence) and remote patient monitoring in patients at risk of DFU. Methods: Participants were provided with custom-made sensory insoles to monitor plantar pressure, temperature, step count, and daily use and were provided with real-time cues for pressure offloading as they went about their daily activities. The sensory insoles were used to track subject compliance, ulceration, and response to feedback from real-time alerts. Patients were remotely monitored by a qualified healthcare professional and were contacted when areas of concern were seen and provided coaching on reducing risk factors and overall support to improve foot health. Results: Of the 40 participants provided with the sensory insole system, 4 presented with a DFU. Based on flags generated from the available plantar data, patients were contacted by the remote monitor to address potential concerns. A standard clinical escalation protocol detailed when and how concerns should be escalated to the provider by the remote monitor. Upon escalation to the provider, patients were brought into the clinic as needed, allowing for any issues to be addressed before more serious complications might arise. Conclusion: This case series explores the use of innovative sensory technology to collect plantar data (pressure, temperature, step count, and adherence) for DFU detection and early intervention. 
The results from this case series suggest the importance of sensory technology and remote patient monitoring in providing proactive, preventative care for patients at risk of DFU. These robust plantar data, combined with remote patient monitoring, allow patients to be seen in the clinic when concerns arise, giving providers the opportunity to intervene early and prevent more serious complications, such as wounds, from occurring. Keywords: diabetic foot ulcer, DFU prevention, digital therapeutics, remote patient monitoring
Procedia PDF Downloads 77
2535 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction
Authors: Akaki Qatamidze
Abstract:
Implementation of the project will be a step forward in terms of construction reliability in Georgia and the improvement and development of construction practice. Completion of the project is expected to result in a complete body of knowledge for assessing the technical condition of concrete and stone structures. The method is based on a detailed examination of the structure, in order to establish its injuries and the possibility of changing the structural scheme to meet new requirements and architectural preservation concerns. In this reinforced concrete and stone structures research project, a systematic analysis approach is important for optimizing the research process and developing new knowledge in neighboring areas. In addition, for the problem of rational agreement between physical and mathematical models, the main pillar is physical (in-situ) data together with mathematical calculation models, with physical experiments used only for the specification and verification of the calculation model. To research the causes of defects and failures of reinforced concrete and stone construction, to enhance effectiveness, to maximize automation capabilities and to reduce the expenditure of resources, a system analysis methodological concept-based approach is recommended; as a major particularity of modern science and technology, it allows all families of structures to be identified using the same work stages and procedures, which makes it possible to exclude subjectivity and to address the problem in the optimal direction. The proposed methodology will establish a major step forward in the construction trades and provide practical assistance to engineers, supervisors and technical experts in settling construction problems. Keywords: building, reinforced concrete, expertise, stone structures
Procedia PDF Downloads 336
2534 A Randomized Control Trial Intervention to Combat Childhood Obesity in Negeri Sembilan: The Hebat! Program
Authors: Siti Sabariah Buhari, Ruzita Abdul Talib, Poh Bee Koon
Abstract:
This study aims to develop and evaluate an intervention to improve the eating habits, active lifestyle and weight status of overweight and obese children in Negeri Sembilan. The HEBAT! Program involved children, parents and schools and focused on behaviour and environment modification to achieve its goal. The intervention consisted of the HEBAT! Camp, a parents' workshop and school-based activities. A total of 21 children from the intervention school and 22 children from the control school who had a BMI-for-age z-score ≥ +1SD participated in the study. The mean age of subjects was 10.8 ± 0.3 years. Four phases were included in the development of the intervention. Evaluation of the intervention was conducted through process, impact and outcome evaluation. Process evaluation found that the intervention program was implemented successfully, with minimal modification and without any technical problems. Impact and outcome evaluation was assessed based on dietary intake, average step counts, BMI-for-age z-score, body fat percentage and waist circumference at pre-intervention (T0), post-intervention 1 (T1) and post-intervention 2 (T2). There was a significant reduction in energy (14.8%) and fat (21.9%) intakes (p < 0.05) at post-intervention 1 (T1) in the intervention group. Controlling for sex as a covariate, there was a significant intervention effect on average step counts, BMI-for-age z-score and waist circumference (p < 0.05). In conclusion, the intervention made an impact on positive behavioural intentions and improved the weight status of the children. It is expected that the HEBAT! Program could be adopted and implemented by the government and private sector as well as policy-makers in formulating childhood obesity interventions. Keywords: childhood obesity, diet, obesity intervention, physical activity
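The BMI-for-age z-score used above as both inclusion criterion (≥ +1SD) and outcome is conventionally computed with the LMS method, as in the WHO growth reference: z = ((X/M)^L − 1)/(L·S). A minimal sketch; the L, M and S values in the usage note are hypothetical, not WHO reference values:

```python
import math

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x against LMS reference values:
    z = ((x / M)**L - 1) / (L * S), or log(x / M) / S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

For example, with hypothetical reference values L = -0.6, M = 16.0, S = 0.12, a BMI of 19.2 gives a z-score of about 1.44, which would meet the ≥ +1SD inclusion criterion; a BMI exactly at the median M gives z = 0.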
Procedia PDF Downloads 292
2533 In-Silico Fusion of Bacillus Licheniformis Chitin Deacetylase with Chitin Binding Domains from Chitinases
Authors: Keyur Raval, Steffen Krohn, Bruno Moerschbacher
Abstract:
Chitin, the biopolymer of N-acetylglucosamine, is the most abundant biopolymer on the planet after cellulose. Industrially, chitin is isolated and purified from the shell residues of shrimps. A deacetylated derivative of chitin, i.e., chitosan, has more market value and more applications than the parent polymer owing to its solubility and overall cationic charge. On an industrial scale, this deacetylation is performed chemically using alkalis such as sodium hydroxide, a reaction that is hazardous to the environment owing to its negative impact on the marine ecosystem. A greener option for this process is the enzymatic one. In nature, native chitin is converted to chitosan by chitin deacetylase (CDA). On the industrial scale, however, this enzymatic conversion is hampered by the crystallinity of chitin: the enzymatic action requires the substrate, i.e., chitin, to be soluble, which is technically difficult and energy consuming. In this project, we wanted to address this shortcoming of CDA. To this end, we have modeled a fusion protein combining CDA and an auxiliary protein, the main aim being to increase the accessibility of the enzyme towards crystalline chitin. Similar fusion work with chitinases has improved their catalytic ability towards insoluble chitin. In the first step, suitable partners were sought in the Protein Data Bank (PDB), where the domain architectures were examined. The next step was to create models of the fused product using various in silico techniques. The models were created by MODELLER and evaluated for properties such as the energy or the impairment of the binding sites. A fusion PCR has been designed based on the linker sequences generated by MODELLER and will be tested for activity towards insoluble chitin. Keywords: chitin deacetylase, modeling, chitin binding domain, chitinases
Procedia PDF Downloads 242
2532 Sports Business Services Model: A Research Model Study in Regional Sport Authority of Thailand
Authors: Siriraks Khawchaimaha, Sangwian Boonto
Abstract:
The Sports Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sport, both professional and amateur, and administers competitions under government policy and government officers. All of its finances, both cash inflows and cash outflows, are strictly committed to the government budget and limited to projects planned at least 12 to 16 months in advance, resulting in ineffectiveness in sport events, administration and competitions. To remain competitive in world sport, SAT needs its own sports business services model for each stadium, region and set of athlete competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study examines each of the 10 regional stadiums in detail with respect to the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design first evaluates external factors: the hardware, i.e. competition and practice stadiums, playgrounds, facilities and equipment. Second, it examines the software: organizational structure, staff and management, administrative model, rules and practices, together with budget allocation and budget administration, including operating and expenditure plans. The third step identifies issues and limitations that require an action plan for further development and support, or the discontinuation of underperforming sports. In the final step, based on the HMK model and the business model canvas of Osterwalder, A. and Pigneur, Y. (2010), a template Sports Business Services Model is generated for each of SAT's 10 regional stadiums.
Keywords: HMK model, not for profit organization, sport business model, sport services model
Procedia PDF Downloads 305
2531 Aluminum Matrix Composites Reinforced by Glassy Carbon-Titanium Spatial Structure
Authors: B. Hekner, J. Myalski, P. Wrzesniowski
Abstract:
This study presents aluminium matrix composites reinforced by glassy carbon (GC) and titanium (Ti). In the first step, a heterophase (GC+Ti) spatial (skeleton-like) form of the reinforcement was obtained via our own method: a polyurethane foam with a spatial, open-cell structure was covered with a suspension of Ti particles in phenolic resin and then pyrolyzed. In the second step, the prepared heterogeneous foams were infiltrated with an aluminium alloy. The manufactured composites are intended for industrial use, especially as materials for the tribological field. From this point of view, the glassy carbon was applied to stabilize the coefficient of friction at the required value of 0.6 and to reduce wear. Wear can be further limited by the titanium phase, which exhibits high mechanical properties. Moreover, fabricating a thin titanium layer on the carbon skeleton reduces contact between the aluminium alloy and the carbon, and thus suppresses formation of the aluminium carbide phase. The main novelty, however, is the manufacturing of the reinforcement in the form of a 3D skeleton foam. Compared to the classical form of reinforcement (particles), this offers several important advantages: the ability to control the homogeneity of the reinforcement phase in the composite material; a simple composite-manufacturing technique (infiltration); the possibility of applying the reinforcement only in the required places of the material; strict control of the phase composition; and high-quality bonding between the components of the material. This research is funded by NCN under grant UMO-2016/23/N/ST8/00994.
Keywords: metal matrix composites, MMC, glassy carbon, heterophase composites, tribological application
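As a back-of-the-envelope illustration of how the phase composition of such a multi-phase composite translates into a bulk property, a simple rule-of-mixtures estimate of composite density can be sketched. The volume fractions and densities below are assumed values for illustration, not data from the study.

```python
# Rule-of-mixtures estimate of composite density from phase volume
# fractions: rho_c = sum(V_i * rho_i). All numerical values are assumed
# for illustration and are not taken from the study.

def mixture_density(fractions, densities):
    """Return rho_c for volume fractions (summing to 1) and phase densities."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(v * rho for v, rho in zip(fractions, densities))


# assumed densities in g/cm^3: Al alloy ~2.7, glassy carbon ~1.5, Ti ~4.5
rho_c = mixture_density([0.85, 0.10, 0.05], [2.7, 1.5, 4.5])
print(round(rho_c, 2))  # 2.67
```

The same weighted-sum form gives first-order estimates for other additive properties, which is one reason strict control of phase composition, noted as an advantage of the skeleton reinforcement, matters for the final material.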
Procedia PDF Downloads 118
2530 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism
Authors: Yuri S. Djikaev
Abstract:
A large subset of aqueous aerosols can be initially (immediately upon formation) coated with various organic amphiphilic compounds whose hydrophilic moieties attach to the aqueous aerosol core while the hydrophobic moieties are exposed to the air, thus forming a hydrophobic coating on the particle. We study the thermodynamics of water condensation on such an aerosol while its hydrophobic organic coating is concomitantly processed by chemical reactions with atmospheric reactive species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from the hydrophobic moieties of the surface organics (first step); the resulting radicals are then quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Köhler activation rather than via nucleation. The model allows one to determine the threshold parameters necessary for their Köhler activation. Numerical results also corroborate previous suggestions that some details of aerosol chemical composition can be neglected when investigating aerosol effects on climate.
Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Köhler activation, cloud droplets
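The Köhler-activation threshold mentioned above can be illustrated with classical Köhler theory. The sketch below uses the textbook form ln S = A/r − B/r³ (Kelvin curvature term plus Raoult solute term) rather than the paper's more detailed free-energy expression for chemically aged coatings, and all parameter values are illustrative.

```python
import math

# Classical Köhler theory sketch: ln S = A/r - B/r^3 (Kelvin + Raoult).
# Constants in SI units; illustrative only, not the paper's model.
SIGMA = 0.072    # surface tension of water, N/m
M_W = 0.018015   # molar mass of water, kg/mol
RHO_W = 997.0    # density of water, kg/m^3
R_GAS = 8.314    # universal gas constant, J/(mol K)


def kohler_parameters(temp_k: float, n_solute: float):
    """Kelvin term A (m) and Raoult term B (m^3) for n_solute moles of solute."""
    a = 2.0 * SIGMA * M_W / (RHO_W * R_GAS * temp_k)
    b = 3.0 * n_solute * M_W / (4.0 * math.pi * RHO_W)
    return a, b


def critical_activation(temp_k: float, n_solute: float):
    """Critical (activation) radius r_c and supersaturation S_c."""
    a, b = kohler_parameters(temp_k, n_solute)
    r_c = math.sqrt(3.0 * b / a)
    s_c = math.exp(math.sqrt(4.0 * a**3 / (27.0 * b)))
    return r_c, s_c


# a droplet embryo holding ~1e-18 mol of dissolved material at 283 K
r_c, s_c = critical_activation(283.0, 1e-18)
```

The qualitative content of Köhler activation is visible in these formulas: more dissolved material (here, e.g., soluble products of the aged coating) increases B and lowers the critical supersaturation S_c, so a droplet can activate at ambient humidity without crossing a nucleation barrier.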
Procedia PDF Downloads 395