Search results for: generate
624 Rheological and Crystallization Properties of Dark Chocolate Formulated with Essential Oil of Orange and Carotene Extracted from Pineapple Peels
Authors: Mayra Pilamunga, Edwin Vera
Abstract:
The consumption of dark chocolate is beneficial due to its high content of flavonoids, catechins, and procyanidins. To improve its properties, fortification of chocolate with polyphenols, anthocyanins, soy milk powder and other compounds has been evaluated in several studies. However, to our best knowledge, the addition of carotenes to chocolate has not been tested. Carotenoids, especially β-carotene and lutein, are widely distributed in fruits and vegetables, so they could be extracted from agro-industrial waste such as that from fruit processing. On the other hand, limonene produces crystalline changes in cocoa butter and improves its consistency and viscosity. This study aimed to produce dark chocolate with the addition of carotenes extracted from an agro-industrial waste and to improve its rheological and crystallization properties with orange essential oil. The dried and fermented cocoa beans were purchased in Puerto Quito, Ecuador, and had a fat content of 51%. Six types of chocolate were formulated, and two formulations were chosen, one at 65% cocoa and the other at 70% cocoa, both with a solid:fat ratio of 1.4:1. With the formulations selected, the influence of the addition of 0.75% and 1.5% orange essential oil was evaluated, and viscosity, crystallization and sensory analyses were carried out. It was found that the essential oil does not generate significant changes in the properties of the chocolate, but it has an important effect on aroma and coloration, which changed from auburn to brown. The best scores on sensory analysis were obtained for the samples formulated with 0.75% essential oil. Prior to the formulation with carotenes, the extraction of these compounds from pineapple peels was performed. The process was done with and without a previous enzymatic treatment, at three solid-solvent ratios. The best treatment used enzymes at a solid-solvent ratio of 1:12.5; the extract obtained under these conditions had 4.503 ± 0.214 μg Eq. β-carotene/mL. This extract was encapsulated with gum arabic and maltodextrin, and the solution was dried using a freeze dryer. The encapsulated carotenes were added to the chocolate in an amount of 1.7%; however, 60.8% of them were lost in the final product.
Keywords: cocoa, fat crystallization, limonene, carotenoids, pineapple peels
Procedia PDF Downloads 160
623 An Enzyme Technology - Metnin™ - Enables the Full Replacement of Fossil-Based Polymers by Lignin in Polymeric Composites
Authors: Joana Antunes, Thomas Levée, Barbara Radovani, Anu Suonpää, Paulina Saloranta, Liji Sobhana, Petri Ihalainen
Abstract:
Lignin is an important component in the exploitation of lignocellulosic biomass. It has been shown that within the next few years, the yield of added-value lignin-based chemicals and materials will generate renewable alternatives to oil-based products (e.g. polymeric composites, resins and adhesives) and enhance the economic feasibility of biorefineries. In this paper, a novel technology for lignin valorisation (METNIN™) is presented. METNIN™ is based on the oxidative action of an alkaliphilic enzyme in aqueous alkaline conditions (pH 10-11) at mild temperature (40-50 °C) combined with a cascading membrane operation, yielding a collection of lignin fractions (from oligomeric down to a mixture of tri-, di- and monomeric units) with distinct molecular weight distribution, low polydispersity and favourable physicochemical properties. The alkaline process conditions ensure the high processability of crude lignin in an aqueous environment and the efficiency of the enzyme, yielding better compatibility of lignin towards targeted applications. The application of a selected lignin fraction produced by METNIN™ as a suitable lignopolyol to completely replace a commercial polyol in polyurethane rigid foam formulations is presented as a prototype. Liquid lignopolyols with a high lignin content were prepared by oxypropylation, and their full utilization in the polyurethane rigid foam formulation was successfully demonstrated. Moreover, selected technical specifications of different foam demonstrators were determined, including closed cell count, water uptake and compression characteristics. These specifications are within industrial standards for rigid foam applications. The lignin loading in the lignopolyol was a major factor determining the properties of the foam. In addition to polyurethane foam demonstrators, other examples of lignin-based products related to resins and sizing applications will be presented.
Keywords: enzyme, lignin valorisation, polyol, polyurethane foam
Procedia PDF Downloads 153
622 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry
Authors: Qianhui Li, Christoph H. Bruecker
Abstract:
Atherosclerotic plaques are typically found where flow separation and variations of shear stress occur. Although helical flow patterns and flow separations have been recorded in the aorta, their relationship has not been clearly established, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study is performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e. the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which thereafter flows past an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. A circular open orifice nozzle inserted in the valve plane as the reference configuration initially replaces the valve under investigation in order to understand the hemodynamic effects of the entering helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open orifice nozzle configuration shows that the helical flow effectively delays the flow separation at the inner radius wall of the aortic arch. The comparison of the flow evolution for the different MHVs shows that the BMHV works like a flow straightener, re-configuring the helical flow pattern into three parallel jets (two side-orifice jets and the central orifice jet), while the TMHV preserves the helical flow structure and therefore prevents the flow separation at the inner radius wall of the aortic arch. Therefore, the TMHV offers better hemodynamic performance and reduces the pressure loss.
Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry
Procedia PDF Downloads 174
621 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory
Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short
Abstract:
Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and the associated forces surrounding bond critical points disfavor localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high radiation environments.
Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, Rutherford backscatter channeling
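To make the density-analysis step concrete, the short Python sketch below evaluates the gradient and Laplacian of an electron density sampled on a uniform grid and reports the curvature at the midpoint between two nuclei. It is only an illustration: the density here is a synthetic sum of two Gaussians rather than a hybrid-DFT density, and the internuclear midpoint is used as a crude stand-in for a true bond critical point.

```python
import numpy as np

# Synthetic "electron density": two Gaussian maxima standing in for a pair of bonded atoms.
x = np.linspace(-3.0, 3.0, 121)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
atom_a, atom_b = np.array([-0.8, 0.0, 0.0]), np.array([0.8, 0.0, 0.0])

def gaussian(center, width=0.6):
    r2 = (X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2
    return np.exp(-r2 / (2.0 * width**2))

rho = gaussian(atom_a) + gaussian(atom_b)

# Gradient and Laplacian of the density on the grid (finite differences).
gx, gy, gz = np.gradient(rho, h, h, h)
lap = (np.gradient(gx, h, axis=0)
       + np.gradient(gy, h, axis=1)
       + np.gradient(gz, h, axis=2))

# Crude stand-in for the bond critical point: the midpoint of the internuclear axis,
# where the gradient magnitude should be near zero for this symmetric pair.
mid = tuple(int(np.argmin(np.abs(x - m))) for m in (0.0, 0.0, 0.0))
grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
print("gradient magnitude at midpoint:", float(grad_mag[mid]))
print("Laplacian (curvature measure) at midpoint:", float(lap[mid]))
```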
Procedia PDF Downloads 174
620 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms
Authors: Deepti Kohli, Meeta Keswani Mehra
Abstract:
The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy using the probabilistic voting approach, focusing on the aspect of lobby formation in order to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely Left-wing, Moderate and Right-wing, who comprise the total voting population. Also, it is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby's preferred policy. Four different lobby formation scenarios have been considered: no lobby formation, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the amount of individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates have been solved for under each of the above four cases. Since it is assumed that the agents cannot coordinate each other's actions during the lobby formation stage, there exists a probability with which a lobby would be formed, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely under the cases of no lobby and both (extreme) lobbies forming, but diverge under the cases of only one (Left or Right) lobby forming. This is because in the case of no lobby being formed, only the centripetal forces (emerging from the election-winning aspect) are present, while in the case of both extreme (Left-wing and Right-wing) lobbies being formed, centrifugal forces (emerging from the lobby formation aspect) also arise but cancel each other out, again resulting in a pure policy convergence phenomenon. In contrast, in the case of only one lobby being formed, both centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases with the formation of both lobbies in comparison to when only one lobby is formed, the probability of implementation of the policy being advocated by that lobby group falls.
Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates
Procedia PDF Downloads 333
619 The Role of Knowledge Management in Innovation: Spanish Evidence
Authors: María Jesús Luengo-Valderrey, Mónica Moso-Díez
Abstract:
In the knowledge-based economy, innovation is considered essential in order to achieve survival and growth in organizations. On the other hand, knowledge management is currently understood as one of the keys to the innovation process. Both factors are generally regarded as generators of competitive advantage in organizations. Specifically, activities on R&D&I and those that generate internal knowledge have a positive influence on innovation results, and whether this effect is similar across firms is what we aim to quantify in this paper. We focus on the impact that the proportion of knowledge workers, R&D&I investment, and the amounts destined for ICTs and training for innovation have on the variation of tangible and intangible returns for the high- and medium-technology sector in Spain. To do this, we have performed an empirical analysis on the results of questionnaires about innovation in enterprises in Spain, collected by the National Statistics Institute. First, using cluster methodology, the behavior of these enterprises regarding knowledge management is identified. Then, using SEM methodology, we study, for each cluster, the cause-effect relationships among constructs defined through variables, establishing their type and quantification. The cluster analysis results in four groups, in which clusters 1 and 3 present the best performance in innovation, with differentiating nuances between them, while clusters 2 and 4 obtained divergent results from a similar innovative effort. However, the results of the SEM analysis for each cluster show that, in all cases, knowledge workers are those that affect innovation performance most, regardless of the level of investment, and that there is a strong correlation between knowledge workers and investment in knowledge generation. The main finding is that Spanish high- and medium-technology companies improve their innovation performance by investing in internal knowledge generation measures, especially in terms of R&D activities, and underinvest in external ones. This, and the strong correlation between knowledge workers and the set of activities that promote knowledge generation, should be taken into account by managers of companies when making decisions about their investments for innovation, since they are key for improving their opportunities in the global market.
Keywords: high and medium technology sector, innovation, knowledge management, Spanish companies
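The two-stage analysis described above can be illustrated with a minimal Python sketch: firms are first grouped with k-means on their knowledge-management indicators, and a simple per-cluster regression then stands in for the SEM stage. The variable names and data are hypothetical, and a real structural equation model would be fitted with dedicated SEM software rather than ordinary regression.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200  # hypothetical number of firms

# Hypothetical knowledge-management indicators per firm (all normalized to [0, 1]).
X = np.column_stack([
    rng.uniform(0, 1, n),    # share of knowledge workers
    rng.uniform(0, 1, n),    # R&D&I investment
    rng.uniform(0, 1, n),    # ICT spending
    rng.uniform(0, 1, n),    # training for innovation
])
# Hypothetical innovation performance, driven mostly by knowledge workers.
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=n)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Per-cluster regression as a simplified stand-in for the SEM cause-effect stage.
for c in range(4):
    mask = clusters == c
    coefs = LinearRegression().fit(X[mask], y[mask]).coef_
    print(f"cluster {c}: n={mask.sum()}, effect of knowledge workers={coefs[0]:.2f}")
```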
Procedia PDF Downloads 237
618 Redox-Mediated Supramolecular Radical Gel
Authors: Sonam Chorol, Sharvan Kumar, Pritam Mukhopadhyay
Abstract:
In biology, supramolecular systems require the use of chemical fuels to stay in sustained nonequilibrium steady states, termed dissipative self-assembly, in contrast to synthetic self-assembly. Biomimicking these natural dynamic systems, some studies have demonstrated artificial self-assembly under nonequilibrium conditions utilizing various forms of energy (fuel), such as chemical, redox, and pH. Naphthalene diimides (NDIs) are well-known organic molecules in supramolecular architectures with high electron affinity and have applications in controlled electron transfer (ET) reactions. Herein, we report the endergonic ET from tetraphenylborate to the highly electron-deficient phosphonium NDI²+ dication to generate the NDI•+ radical. The formation of radicals was confirmed by UV-Vis-NIR absorption spectroscopy. Electron-donor and electron-acceptor energy levels were calculated from experimental electrochemistry and theoretical DFT analysis. The HOMO of the electron donor lies below the LUMO of the electron acceptor. This indicates that the electron transfer is endergonic (ΔE°ET = negative). The endergonic ET from NaBPh₄ to the NDI²+ dication was achieved thermodynamically through the formation of a coupled biphenyl product, confirmed by GC-MS analysis. An NDI molecule bearing octyl phosphonium groups at the core and H-bond-forming imide moieties at the axial positions forms a gel. The rheological properties of purified radical ion NDI•+ gels were evaluated. Atomic force microscopy studies reveal the formation of large branching-type networks with a maximum height of 70-80 nm. The endergonic ET from NaBPh₄ to the NDI²+ dication was used to design an assembly and disassembly redox reaction cycle using a reducing agent (NaBPh₄) and an oxidizing agent (Br₂) as chemical fuels. A part of the NaBPh₄ is used to drive assembly, while a fraction of the NaBPh₄ is dissipated by forming a useful product. The system goes back to the disassembled NDI²+ dication state with the addition of Br₂. We think bioinspired dissipative self-assembly is the best approach to developing future lifelike materials with autonomous behavior.
Keywords: ionic gel, redox cycle, self-assembly, useful product
Procedia PDF Downloads 85
617 Climate Change Adaptation: Methodologies and Tools to Define Resilience Scenarios for Existing Buildings in Mediterranean Urban Areas
Authors: Francesca Nicolosi, Teresa Cosola
Abstract:
Climate changes in Mediterranean areas, such as the increase of average seasonal temperatures, the urban heat island phenomenon, the intensification of solar radiation and extreme weather threats, cause disruptive events, so climate adaptation has become a pressing issue. Due to the strategic role that the built heritage holds in terms of environmental impact and energy waste, and due to its potential, it is necessary to assess the vulnerability and the adaptive capacity of existing buildings to climate change in order to define different mitigation scenarios. The aim of this research work is to define an optimized and integrated methodology for the assessment of resilience levels and adaptation scenarios for existing buildings in Mediterranean urban areas. Moreover, the study of resilience indicators allows us to define building environmental and energy performance in order to identify design and technological solutions for the improvement of the building and the potential of its urban area. The methodology identifies different phases step by step, starting from the detailed study of the characteristic elements of the urban system: climatic, natural, human, typological and functional components are analyzed in terms of their critical factors and their potential. Through the identification of the main perturbing factors and the degree of vulnerability of the system to the risks linked to climate change, it is possible to define mitigation and adaptation scenarios. These can differ according to the typological, functional and constructive features of the analyzed system, be divided into categories of intervention, and be characterized by different analysis levels (from the single building to the urban area). The use of software simulations allows information to be obtained on the overall behavior of the building and the urban system, predictive models to be generated for medium- and long-term environmental and energy retrofit, and a comparative study of the identified mitigation scenarios to be made. The methodology is validated on a case study.
Keywords: climate impact mitigation, energy efficiency, existing building heritage, resilience
Procedia PDF Downloads 240
616 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime
Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell
Abstract:
A wealth of research has been generated examining the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses account for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community-level crime and related ills, such as health problems. However, research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While the use of call data may be more representative of the actual level and distribution of crime than some other official measures (e.g. arrest data), call data still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between crimes that occur and crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots that does not rely on official data. In doing so, we highlight the potential utility of neighborhood insiders to identify and understand crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police officers, ex-offenders, and residents of a high-crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls for service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine CFS data. A closer examination of the space around this corner store and a qualitative analysis of narrative data reveal important clues as to why this store may indeed be a hot spot but not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.
Keywords: crime, narrative, video, neighborhood
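As a rough illustration of how participant-identified crime mentions can be compared with official calls for service, the sketch below estimates a simple kernel density surface for each point set and reports where each peaks. The coordinates are invented, and the authors' actual spatial-video and geo-narrative workflow is considerably richer than this.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical geocoded points (x, y in arbitrary local coordinates).
mentions = rng.normal(loc=[2.0, 3.0], scale=0.3, size=(60, 2))   # narrative mentions near one location
cfs = rng.normal(loc=[4.0, 1.0], scale=0.5, size=(200, 2))       # calls for service centred elsewhere

grid = np.mgrid[0:6:121j, 0:6:121j]
coords = np.vstack([grid[0].ravel(), grid[1].ravel()])

for name, pts in [("narrative mentions", mentions), ("calls for service", cfs)]:
    kde = gaussian_kde(pts.T)          # gaussian_kde expects shape (n_dims, n_points)
    density = kde(coords).reshape(grid[0].shape)
    peak = np.unravel_index(np.argmax(density), density.shape)
    print(f"{name}: density peaks near x={grid[0][peak]:.1f}, y={grid[1][peak]:.1f}")
```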
Procedia PDF Downloads 238
615 An Ancient Rule for Constructing Dodecagonal Quasi-Periodic Formations
Authors: Rima A. Ajlouni
Abstract:
The discovery of quasi-periodic structures in material science is revealing an exciting new class of symmetries, which has never been explored before. Due to their unique structural and visual properties, these symmetries are drawing interest from many scientific and design disciplines. Especially in art and architecture, these symmetries can provide a rich source of geometry for exploring new patterns, forms, systems, and structures. However, the structural systems of these complicated symmetries are still posing a perplexing challenge. While much of their local order has been explored, the global governing system is still unresolved. Understanding their unique global long-range order is essential to their generation and application. The recent discovery of dodecagonal quasi-periodic patterns in historical Islamic architecture is generating renewed interest in understanding the mathematical principles of traditional Islamic geometry. Astonishingly, many centuries before its description in modern science, ancient artists, by using the most primitive tools (a compass and a straight edge), were able to construct patterns with quasi-periodic formations. These ancient patterns can be found all over the ancient Islamic world, many of which exhibit formations with 5-, 8-, 10- and 12-fold quasi-periodic symmetries. Based on the examination of these historical patterns and derived from the generating principles of Islamic geometry, a global multi-level structural model is presented that is able to describe the global long-range order of dodecagonal quasi-periodic formations in Islamic architecture. Furthermore, this method is used to construct new quasi-periodic tiling systems as well as to generate their deflation and inflation rules. This method can be used as a general guiding principle for constructing infinite patches of dodecagon-based quasi-periodic formations, without the need for local strategies (tiling, matching, grid, substitution, etc.) or complicated mathematics, providing an easy tool for scientists, mathematicians, teachers, designers and artists to generate and study a wide range of dodecagonal quasi-periodic formations.
Keywords: dodecagonal, Islamic architecture, long-range order, quasi-periodic
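A common numerical way to visualize a dodecagonal quasi-periodic formation, quite different from the compass-and-straightedge rule described above but useful as a quick illustration of 12-fold long-range order, is to superpose twelve plane waves whose wavevectors point to the vertices of a regular dodecagon. The short Python sketch below does exactly that; it is not the author's construction method.

```python
import numpy as np

# Superpose 12 plane waves with wavevectors on a regular dodecagon.
# The resulting intensity field has 12-fold orientational order but no translational period.
n_waves = 12
angles = np.arange(n_waves) * 2.0 * np.pi / n_waves
k_vectors = np.stack([np.cos(angles), np.sin(angles)], axis=1)

x = np.linspace(-30.0, 30.0, 601)
X, Y = np.meshgrid(x, x)
field = np.zeros_like(X)
for kx, ky in k_vectors:
    field += np.cos(kx * X + ky * Y)

# Bright spots of the field trace out a dodecagonal quasi-periodic motif.
print("field range:", round(float(field.min()), 2), "to", round(float(field.max()), 2))
# To inspect it visually (optional):
# import matplotlib.pyplot as plt; plt.imshow(field, cmap="gray"); plt.show()
```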
Procedia PDF Downloads 402
614 Epidemiological Correlates of Adherence to Anti-Hypertensive Treatment in Primary Health Care Setting of Ludhiana, Punjab
Authors: Sangeeta Girdhar, Amanat Grewal, Nahush Bansal
Abstract:
Introduction: There is an increasing burden of hypertension in India. The morbidity and mortality arising from complications are mainly due to non-adherence to medication, unhealthy dietary habits, and lack of physical activity. Non-adherence is a well-recognised factor contributing to inadequate control of high blood pressure. Adherence to pharmacotherapy for hypertension varies from 43% to 88%. Non-adherence is influenced by various socio-demographic factors, and understanding these factors is useful in managing non-adherence. Therefore, the study was planned to determine adherence among hypertensives and the factors associated with non-adherence to treatment. Methodology: A cross-sectional study was conducted at the Urban Health Training Centre of Dayanand Medical College and Hospital, Ludhiana. Patients attending the OPD over a period of 3 months were included in the study. Prior ethical approval was obtained, and informed consent was taken from the subjects. A predesigned semi-structured questionnaire was administered, which covered the socio-demographic profile, treatment-seeking behaviour, adherence to antihypertensive medication, and lifestyle factors (intake of alcohol, smoking, consumption of junk food, high salt intake) contributing to the development of the disease. Reasons for non-adherence to therapy were also explored. Data were entered into Excel, and SPSS version 26 was used for analysis. Results: A total of 186 individuals were interviewed. Of these, 113 females (60.8%) and 73 males (39.2%) participated in the study. The mean age of participants was 60.9 ± 10.7 years. Adherence to anti-hypertensive treatment was found in 68.3% of the participants. It was observed that adherence was higher in literate individuals as compared to illiterate ones (p-value = 0.78). Adherence was lower among smokers (33.3%) and alcohol consumers (53.8%) as compared to non-users (69.4% and 70.6%, respectively). The predominant reasons for skipping medications were discontinuing medication when feeling well, forgetfulness and unawareness. Conclusion: There is a need to generate awareness regarding the importance of adherence to therapy among patients. Intensive health education and counselling of patients is the need of the hour.
Keywords: hypertension, anti-hypertensive, adherence, counselling
Procedia PDF Downloads 90
613 Smart Signature - Medical Communication without Barrier
Authors: Chia-Ying Lin
Abstract:
This paper explains how to enhance doctor-patient and nurse-patient communication through multiple intelligent signing methods and a user-centered approach. It is hoped that through the implementation of the "electronic consent", the problems faced by paper consent forms can be solved: storage, resource utilization, convenience, correctness of information, integrated management, statistical analysis and other related issues, allowing better use and allocation of resources to provide better medical quality. First, the medical records department was invited to assist in taking inventory of the paper consent forms in the hospital: organising, classifying, merging, coding, and setting them up. Second, the electronic consent configuration files were planned: the form number, consent form group, fields and templates, and the corresponding doctor's order code were defined. Next, four rapid methods of generating electronic consent were established: according to the doctor's order, according to the medical behavior, according to the schedule, and manual generation of the consent form. Finally, for system promotion and adjustment, an "electronic consent promotion team" was formed to drive improvement, following five major processes: planning, development, testing, release, and feedback; clinical units were invited to raise the difficulties faced during the rollout, and improvements were made to address these problems. The electronic signature rate of the whole hospital increased from 4% in January 2022 to 79% in November 2022. The saved resources can be used more effectively, including reduced paper usage (a smaller carbon footprint), lower ink cartridge costs, re-planned use of the space previously devoted to paper medical records, and human resources freed up to provide better services. Through the introduction of information technology, the main spirit of "lean management" is implemented. Transforming and reengineering the process to eliminate unnecessary waste is also the highest purpose of this project.
Keywords: smart signature, electronic consent, electronic medical records, user-centered, doctor-patient communication, nurse-patient communication
Procedia PDF Downloads 126
612 Technologies of Factory Farming: An Exploration of Ongoing Confrontations with Farm Animal Sanctuaries
Authors: Chetna Khandelwal
Abstract:
This research aims to study the contentions that Farm Animal Sanctuaries pose to human-animal relationships in modernity, which have developed as a result of the globalisation of the meat industry and advancements in technology. The sociological history of human-animal relationships in farming is contextualised in order to set a foundation for the follow-up examination of challenges to existing human-(farm)animal relationships by Farm Animal Sanctuaries. The methodology was influenced by relativism, and the method involved three semi-structured small-group interviews conducted at sanctuary locations. The sample was chosen through purposive sampling and varied by location and size of the sanctuary. The data collected were transcribed and qualitatively coded to generate themes. Findings revealed that sanctuary contentions to the human-animal relationships established by factory farming could be divided into four broad categories: revealing the horrors of factory farming (involving uncovering power relations in agribusiness); transforming relationships with animals (including letting them emotionally heal in accordance with their individual personalities and treating them as partial pets); educating the public regarding welfare conditions in factory farms as well as animal sentience through practical experience or positive imagery of farm animals; and addressing retaliation by agribusiness in the form of technologies or discursive strategies. Hence, this research concludes that the human-animal relationship in current times has been characterised by (ideological and physical) distance from farm animals, commodification due to the increased pursuit of profit over welfare, and exploitation using technological advancements, creating unequal power dynamics that rid animals of any agency. Challenges to this relationship can be influenced by local populations around the sanctuary but are not strongly dependent upon its size. This research can benefit from further academic exploration into farm animal sanctuaries and their role in feminist animal rights activism to enrich the ongoing fight against intensive farming.
Keywords: animal rights, factory farming, farm animal sanctuaries, human-animal relationships
Procedia PDF Downloads 137
611 Geological Engineering Mapping Approach to Know Factor of Safety Distribution and Its Implication to Landslide Potential at Muria Mountain, Kudus, Central Java Province, Indonesia
Authors: Sony Hartono, Azka Decana, Vilia Yohana, Annisa Luthfianihuda, Yuni Faizah, Tati Andriani, Dewi Kania, Fachri Zulfiqar, Sugiar Yusup, Arman Nugraha
Abstract:
Landslides are a geological hazard that is quite common in some areas of Indonesia and have an adverse impact on the surrounding public. Due to their high frequency in Indonesia and the extensive damage they cause, landslides deserve specific attention. A landslide occurs when a soil or rock unit on a slope is no longer in a stable, ideal state, so that the resisting strength of the soil or rock is exceeded by the forces acting on the slope. Based on this fact, the authors carried out geological engineering mapping at Muria Mountain, Kudus, Central Java Province, which is known as an agricultural and religious tourism area. This geological engineering mapping was performed to determine the landslide potential at Muria Mountain. Slope stability is expressed by a number called the "factor of safety", which describes how prone a slope is to failure. Slope stability varies depending on the physical and mechanical characteristics of the soil and the slope conditions. Testing of the physical and mechanical characteristics of the soil was conducted in the geotechnical laboratory. The characteristics of the soil must remain the same from sampling through laboratory testing; to meet that requirement, the authors used the "undisturbed sample" method, which guarantees that samples are not disturbed by environmental influences. Laboratory tests on the physical and mechanical properties of the soil yielded the characteristics of the soil on each slope, which were then entered into geological information software that generates factor of safety values and visualizes the slopes in the research area. As a result of the study, a ground movement distribution map was obtained, together with its implications for areas of landslide potential.
Keywords: factor of safety, geological engineering mapping, landslides, slope stability, soil
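For readers unfamiliar with the factor of safety, the sketch below evaluates the classical infinite-slope expression FS = [c' + (γ·z·cos²β - u)·tanφ'] / (γ·z·sinβ·cosβ) for a few illustrative slope angles. The parameter values are hypothetical, and the GIS software used in the study may apply a different, more detailed stability model.

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Infinite-slope factor of safety.
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: unit weight (kN/m^3), z: depth of slip surface (m),
    beta_deg: slope angle (deg), u: pore water pressure (kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Hypothetical soil parameters for three slope angles; FS < 1 indicates likely failure.
for beta in (15, 30, 45):
    fs = factor_of_safety(c=10.0, phi_deg=28.0, gamma=18.0, z=3.0, beta_deg=beta)
    print(f"slope angle {beta} deg: FS = {fs:.2f}")
```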
Procedia PDF Downloads 419
610 Effects of Nano-Coating on the Mechanical Behavior of Nanoporous Metals
Authors: Yunus Onur Yildiz, Mesut Kirca
Abstract:
In this study, the mechanical properties of a nanoporous metal coated with a different metallic material are studied through a new atomistic modelling technique and molecular dynamics (MD) simulations. This new atomistic modelling technique is based on the Voronoi tessellation method for the purpose of geometric representation of the ligaments. With the proposed technique, atomistic models of nanoporous metals which have randomly oriented ligaments with non-uniform mass distribution along the ligament axis can be generated, enabling researchers to control both ligament length and diameter. Furthermore, by utilizing this technique, atomistic models of coated nanoporous materials can be obtained numerically for further mechanical or thermal characterization. In general, this study consists of two stages. At the first stage, we use algorithms developed for generating the atomic coordinates of the coated nanoporous material. In this regard, the coordinates of randomly distributed points are determined in a controlled way to be employed in the establishment of the Voronoi tessellation, which results in randomly oriented and intersecting line segments. Then, the line segment representation of the Voronoi tessellation is transformed into an atomic structure by a special process. This special process includes the generation of a non-uniform volumetric core region in which atoms can be generated based on a specific crystal structure. As an extension, this technique can be used for coating nanoporous structures by creating another volumetric region, encapsulating the core region, in which atoms of the coating material are generated. The ultimate goal of the study at this stage is to generate atomic coordinates that can be employed in MD simulations of randomly organized coated nanoporous structures. At the second stage of the study, the mechanical behavior of the coated nanoporous models is investigated by examining deformation mechanisms through MD simulations. In this way, the effect of coating on the mechanical behavior of the selected material couple is investigated.
Keywords: atomistic modelling, molecular dynamics, nanoporous metals, Voronoi tessellation
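The geometric starting point of such a generator, random seed points whose Voronoi tessellation supplies randomly oriented ligament axes, can be sketched in a few lines of Python. This is only an illustration of the idea, not the authors' actual algorithm: the box size, seed count and filtering are arbitrary, and the step of populating crystal-structure atoms around each segment (and a coating shell around them) is only indicated in a comment.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 20.0, size=(60, 3))   # random seed points in a 20 x 20 x 20 box
vor = Voronoi(seeds)

# Collect unique finite Voronoi edges: consecutive vertex pairs around each ridge polygon.
edges = set()
for ridge in vor.ridge_vertices:
    if -1 in ridge:              # skip ridges extending to infinity
        continue
    for a, b in zip(ridge, ridge[1:] + ridge[:1]):
        edges.add(tuple(sorted((a, b))))

def inside(p, lo=0.0, hi=20.0):
    return bool(np.all((p >= lo) & (p <= hi)))

# Keep only edges inside the box; these line segments stand in for ligament axes.
ligaments = [(vor.vertices[a], vor.vertices[b])
             for a, b in edges
             if inside(vor.vertices[a]) and inside(vor.vertices[b])]

# In a full generator, lattice atoms of the core metal would be placed within a
# (possibly varying) radius around each segment, and coating atoms in a larger shell.
for p0, p1 in ligaments[:3]:
    print("ligament axis:", np.round(p0, 2), "->", np.round(p1, 2),
          "length =", round(float(np.linalg.norm(p1 - p0)), 2))
```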
Procedia PDF Downloads 277
609 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing ever larger, and medical data mining has a great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, so as to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential to facilitate the management of a patient with a specific disease. Therefore, health interventions or changes in lifestyle can be undertaken based on these models to improve the health conditions of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
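The model-comparison step can be illustrated with a small scikit-learn sketch that trains a decision tree and a neural network (multilayer perceptron) on synthetic tabular data and reports sensitivity and specificity. The data are generated for illustration only; they are not the study's 350-record hospital database, and the study itself used SPSS and Clementine rather than scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for risk-factor records (e.g., hypertension, DLP, smoking, diabetes, blood group).
X, y = make_classification(n_samples=350, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    sensitivity = tp / (tp + fn)   # ability to detect true myocardial infarction cases
    specificity = tn / (tn + fp)
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```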
Procedia PDF Downloads 429
608 Electrochemical Inactivation of Toxic Cyanobacteria and Degradation of Cyanotoxins
Authors: Belal Bakheet, John Beardall, Xiwang Zhang, David McCarthy
Abstract:
The potential risks associated with toxic cyanobacteria have raised growing environmental and public health concerns, leading to an increasing effort in researching ways to bring about their removal from water, together with the destruction of their associated cyanotoxins. A variety of toxins are synthesized by cyanobacteria, including hepatotoxins, neurotoxins, and cytotoxins, which can cause a range of symptoms in humans from skin irritation to serious liver and nerve damage. Therefore, drinking water treatment processes should ensure the consumers' safety by removing both cyanobacterial cells and cyanotoxins from the water. Cyanobacterial cells and cyanotoxins present challenges to conventional water treatment systems; their accumulation within drinking water treatment plants has been reported, leading to plant shutdowns. Thus, innovative and effective water purification systems to tackle cyanobacterial pollution are required. In recent years there has been increasing attention to the electrochemical oxidation process as a feasible alternative disinfection method, which is able to generate in situ a variety of oxidants that achieve synergistic effects in the water disinfection process and toxin degradation. By utilizing only electric current, the electrochemical process can, through electrolysis, produce reactive oxygen species such as hydroxyl radicals from the water, or other oxidants such as chlorine from chloride ions present in the water. From an extensive physiological and morphological investigation of cyanobacterial cells during electrolysis, our results show that these oxidants have a significant impact on cell inactivation, simultaneously with cyanotoxin removal, without the need for chemical addition. Our research aimed to optimize existing electrochemical oxidation systems and develop new systems to treat water containing toxic cyanobacteria and cyanotoxins. The research covers a detailed mechanistic study of oxidant production and cell inactivation during treatment under environmental conditions. Overall, our study suggests that the electrochemical treatment process is an effective method for the removal of toxic cyanobacteria and cyanotoxins.
Keywords: toxic cyanobacteria, cyanotoxins, electrochemical process, oxidants
Procedia PDF Downloads 240
607 Effector and Memory Immune Responses Induced by Total Extracts of Naegleria fowleri Co-Administered with Cholera Toxin
Authors: Q. B. Maria de la Luz Ortega Juárez, Saúl Rojas Hernández, Itzel Berenice Rodríguez Mera, María Maricela Carrasco Yépez, Mara Gutierrez Sánchez
Abstract:
Naegleria fowleri is a free-living amoeba found mainly in temperate freshwater and is the etiologic agent of primary amebic meningoencephalitis (PAM), a fatal acute disease with a mortality rate greater than 95%. At present, there are no treatments available for PAM, and the development of effective vaccines that generate long-term immunological memory allowing protection against PAM would be of great importance. The objective of this work was to analyze the effector and memory immune response in BALB/c mice immunized with a total extract of N. fowleri co-administered with cholera toxin. In this study, BALB/c mice were immunized four times intranasally with the total extract (ET) of N. fowleri adjuvanted with cholera toxin (CT), with or without a booster at three months, and were challenged or not with the lethal dose of N. fowleri; survival and the humoral, effector and memory responses were determined by ELISA and flow cytometry techniques. The results obtained showed that mice immunized with a booster had 60% protection, compared to 20% protection in the group without a booster. Evaluating the humoral response, it was found that both IgG and IgA levels were higher in sera than in nasal washes in both treatments. In the cellular response, an increase in the percentage of positive cells was found for effector T lymphocytes in the nasal passages (NP) in the boosted group and in the nasopharynx-associated lymphoid tissue (NALT) in the non-boosted group, and for effector B lymphocytes in both treatments. For memory cells, T lymphocytes increased in the NP and NALT with the booster and in the cervical lymph nodes (CLN) without it, while memory B lymphocytes increased in the NP, CLN and NALT in the boosted treatment and in the NALT in the non-boosted treatment. Therefore, the effector and memory immune responses play a fundamental role in protection against N. fowleri and in the development of vaccine candidates.
Keywords: immune response, immunological memory, Naegleria fowleri, primary amebic meningoencephalitis
Procedia PDF Downloads 78
606 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on the users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been witnessed to be the road to developing innovative applications, interactive systems, scientific software, healthcare applications, and even to re-thinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on the Data Science program states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability". However, this definition hides the user behind the machine who works on data preparation, algorithm selection and model interpretation. Thus, the Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.
Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 81
605 Using Multiomic Plasma Profiling From Liquid Biopsies to Identify Potential Signatures for Disease Diagnostics in Late-Stage Non-small Cell Lung Cancer (NSCLC) in Trinidad and Tobago
Authors: Nicole Ramlachan, Samuel Mark West
Abstract:
Lung cancer is the leading cause of cancer-associated deaths in North America, the vast majority being non-small cell lung cancer (NSCLC), which has a five-year survival rate of only 24%. Non-invasive discovery of biomarkers associated with early diagnosis of NSCLC can enable precision oncology efforts using liquid biopsy-based multiomics profiling of plasma. Although tissue biopsies are currently the gold standard for tumor profiling, this method presents many limitations, since they are invasive, risky, and sometimes hard to obtain, as well as giving only a limited tumor profile. Blood-based tests provide a less-invasive, more robust approach to interrogate both tumor- and non-tumor-derived signals. We intend to examine 30 stage III-IV NSCLC patients pre-surgery and collect plasma samples. Cell-free DNA (cfDNA) will be extracted from plasma, and next-generation sequencing (NGS) performed. Through the analysis of tumor-specific alterations, including single nucleotide variants (SNVs), insertions, deletions, copy number variations (CNVs), and methylation alterations, we intend to identify tumor-derived DNA (ctDNA) among the total pool of cfDNA. This would generate data to be used as an accurate form of cancer genotyping for diagnostic purposes. Using liquid biopsies offers opportunities to improve the surveillance of cancer patients during treatment and would supplement current diagnosis and tumor profiling strategies previously not readily available in Trinidad and Tobago. It would be useful and advantageous to use this approach in diagnosis and tumour profiling as well as to monitor cancer patients, providing early information regarding disease evolution and treatment efficacy and allowing treatment strategies to be reoriented in time, thereby improving clinical oncology outcomes.
Keywords: genomics, multiomics, clinical genetics, genotyping, oncology, diagnostics
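As a highly simplified illustration of separating candidate tumor-derived variants from the bulk cfDNA signal, the sketch below computes variant allele frequencies (VAF) from read counts and flags low-frequency variants sitting above a background error threshold. The variant records, thresholds, and the germline/somatic logic are hypothetical and far cruder than a real ctDNA pipeline with error suppression and matched germline filtering.

```python
# Hypothetical variant records: (identifier, alt-supporting reads, total depth).
variants = [
    ("variant_1", 12, 2400),     # low-frequency candidate (possible ctDNA)
    ("variant_2", 1180, 2350),   # ~50% VAF, treated here as germline heterozygous
    ("variant_3", 3, 2600),      # below assumed error floor
]

ERROR_FLOOR = 0.002          # assumed background sequencing-error rate
GERMLINE_BAND = (0.4, 0.6)   # VAFs in this band (or near 1.0) treated as germline here

for name, alt, depth in variants:
    vaf = alt / depth
    if GERMLINE_BAND[0] <= vaf <= GERMLINE_BAND[1] or vaf > 0.9:
        label = "likely germline"
    elif vaf >= ERROR_FLOOR:
        label = "candidate somatic / ctDNA"
    else:
        label = "below error floor"
    print(f"{name}: VAF={vaf:.4f} -> {label}")
```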
Procedia PDF Downloads 161
604 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
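At its core, a constraint-based prediction of this kind amounts to a linear program: maximize a demand flux (e.g., ATP) subject to steady-state mass balance S·v = 0 and flux bounds. The toy Python sketch below shows the idea on a made-up three-metabolite network in which capping the "oxidative phosphorylation" reaction shifts flux toward the "glycolysis" branch; it is in no way the curated human reconstruction or the patient-specific models used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network, reactions: R1 uptake -> A, R2 A -> B + 2 ATP ("glycolysis"),
# R3 B -> 3 ATP ("oxidative phosphorylation"), R4 ATP demand, R5 B secretion.
# Rows: internal metabolites A, B, ATP (steady state requires S @ v = 0).
S = np.array([
    [1, -1,  0,  0,  0],   # A
    [0,  1, -1,  0, -1],   # B
    [0,  2,  3, -1,  0],   # ATP
])

def max_atp(oxphos_cap):
    bounds = [(0, 10),            # substrate uptake limited to 10 units
              (0, 1000),
              (0, oxphos_cap),    # capacity of the oxidative branch
              (0, 1000),
              (0, 1000)]
    # linprog minimizes, so maximize the ATP demand flux v4 by minimizing -v4.
    res = linprog(c=[0, 0, 0, -1, 0], A_eq=S, b_eq=np.zeros(3),
                  bounds=bounds, method="highs")
    return res.x

for cap in (10, 5, 0):
    v = max_atp(cap)
    print(f"oxphos cap {cap}: glycolysis={v[1]:.1f}, oxphos={v[2]:.1f}, ATP flux={v[3]:.1f}")
```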
Procedia PDF Downloads 294
603 Food Poisoning (Salmonellosis) as a Public Health Problem Through Consuming the Meat and Eggs of the Carrier Birds
Authors: M. Younus, M. Athar Khan, Asif Adrees
Abstract:
The present research endeavour was made to investigate the public health impact of salmonellosis from consuming the meat and eggs of carrier birds, and to determine the prevalence of Salmonella enteritidis and Salmonella typhimurium in poultry feed, poultry meat, and poultry eggs and their role in the chain of transmission of salmonellae to human beings, causing food poisoning. The ultimate objective was to generate data to improve the quality of poultry products and human health awareness. Salmonellosis is one of the most widespread food-borne zoonoses in all the continents of the world. The etiological agents Salmonella enteritidis and Salmonella typhimurium not only produce the disease but also leave birds in the convalescent phase (after recovery from the disease) as carriers for an indefinite period of time. The carrier state was not only the source of spread of the disease within the poultry but also caused typhoid fever in humans. The chain of transmission starts from poultry feed, moves to poultry meat, and ultimately reaches humans as dead-end hosts. In this experiment, a total of 200 samples of human stool and blood were collected randomly (100 samples of human stool and 100 samples of human blood) from 100 patients suspected of food poisoning at different hospitals of the Lahore area, for the identification of Salmonella enteritidis and Salmonella typhimurium through the PCR method, in order to assess the public health impact of salmonellosis through consuming the meat and eggs of carrier birds. On average, 14 and 10 stool samples were found positive for Salmonella enteritidis and Salmonella typhimurium, respectively, among each group of 25 suspected food poisoning patients per hospital. Similarly, on average, 5% and 6% of blood samples were found positive among the 25 patients of each hospital, respectively. There was a significant difference (P < 0.05) in the seropositivity of stool and blood samples of suspected food poisoning patients as far as Salmonella enteritidis and Salmonella typhimurium were concerned. However, there was no significant difference (P > 0.05) between the hospitals.
Keywords: salmonella, zoonosis, food, transmission, eggs
Procedia PDF Downloads 665
602 Characterization of Transcription Factors Involved in Early Defense Response during Interaction of Oil Palm Elaeis guineensis Jacq. with Ganoderma boninense
Authors: Sakeh N. Mohd, Bahari M. N. Abdul, Abdullah S. N. Akmar
Abstract:
Oil palm production generates high export earnings for many countries, especially in the Southeast Asian region. Infection of oil palm by the necrotrophic fungus Ganoderma boninense results in basal stem rot, which compromises oil palm production, leading to significant economic loss. To date, there are no reliable disease treatments, nor has a promising resistant oil palm variety been cultivated to eradicate the disease. Thus, understanding the molecular mechanisms underlying early interactions of oil palm with Ganoderma boninense may be vital to promote preventive or control measures for the disease. In the present study, four-month-old oil palm seedlings were infected via artificial inoculation of Ganoderma boninense on rubber wood blocks. Roots of six biological replicates of treated and untreated oil palm seedlings were harvested at 0, 3, 7 and 11 days post inoculation. Next-generation sequencing was performed to generate high-throughput RNA-Seq data and identify differentially expressed genes (DEGs) during early oil palm-Ganoderma boninense interaction. Based on de novo transcriptome assembly, a total of 427,122,605 paired-end clean reads were assembled into 30,654 unigenes. DEG analysis revealed upregulation of 173 transcription factors in Ganoderma boninense-treated oil palm seedlings. Sixty-one transcription factors were categorized as DEGs according to stringent cut-off values: a log2 ratio (expression in treated relative to untreated oil palm seedlings) ≥ |1.0| (corresponding to 2-fold or more upregulation) and a P-value ≤ 0.01. Transcription factors responding to biotic stress will be distinguished from those responding to abiotic stress using reverse transcription polymerase chain reaction, and transcription factors unique to biotic stress will be verified using real-time polymerase chain reaction. The findings will help researchers to pinpoint defense response mechanisms specific to Ganoderma boninense.
Keywords: Ganoderma boninense, necrotrophic, next-generation sequencing, transcription factors
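The stated cut-offs translate directly into a simple filtering step. The Python sketch below applies them to a hypothetical table of expression results; the gene names, values, and column names are made up for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical differential-expression results for a few transcription factors.
degs = pd.DataFrame({
    "gene": ["EgWRKY1", "EgMYB2", "EgNAC5", "EgERF3"],
    "log2_treated_vs_untreated": [2.4, 0.6, 1.8, 0.9],
    "p_value": [0.001, 0.04, 0.005, 0.02],
})

# Cut-offs from the study: log2 ratio >= 1.0 (2-fold or more up-regulation) and P <= 0.01.
selected = degs[(degs["log2_treated_vs_untreated"] >= 1.0) & (degs["p_value"] <= 0.01)]
print(selected)   # only EgWRKY1 and EgNAC5 pass both cut-offs in this toy table
```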
Procedia PDF Downloads 266
601 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and advance fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improved analysis for many chemical and biological applications; for example, they can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface is formed between them. If a direct current electric field is applied, the interface may deform, i.e., it may become unstable, rupture, and form micro-droplets. First, the effect of the thickness ratio, total flow rate, and viscosity ratio of the silicone oil and ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize is investigated. Then the droplet sizes are measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to rupture and form droplets by hitting the wall of the channel is analyzed. It is observed that an increase in the viscosity or the thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e., a higher voltage is needed, while the total flow rate has no effect on it. However, it is observed that an increase in the total flow rate results in a shortening of the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases down to 0.1 μL with an increase in the applied voltage, the viscosity ratio or the total flow rate, or with a decrease in the thickness ratio. In addition to these observations, two empirical models are established: one for determining the critical electric number, i.e., the dimensionless voltage, and one for the droplet size, together with a third model, combining both, for determining the droplet size at the critical voltage.
Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
Procedia PDF Downloads 176
600 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land-use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants and participants' biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated 'doses' from produce consumption records), predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning biomonitoring, and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model-predicted concentrations with measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated into contaminated land exposure models. Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
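At its simplest, a PBPK model is a system of mass-balance ODEs over body compartments. The Python sketch below shows a deliberately minimal one-element, three-compartment version (gut, blood, cumulative urine) solved with scipy; the rate constants and dose schedule are invented for illustration and bear no relation to the MATLAB models or parameter values used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal toy "PBPK" model: amount in gut, blood, and cumulative urine (arbitrary units).
KA = 0.8    # absorption rate constant from gut to blood (1/day), hypothetical
KE = 0.2    # elimination rate constant from blood to urine (1/day), hypothetical

def rhs(t, y):
    gut, blood, urine = y
    intake = 1.0 if (t % 7.0) < 1.0 else 0.0   # hypothetical weekly ingestion pulse
    return [intake - KA * gut,
            KA * gut - KE * blood,
            KE * blood]

sol = solve_ivp(rhs, t_span=(0.0, 84.0), y0=[0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 84.0, 85), max_step=0.25)

# The blood trace could then be compared with measured biomarker concentrations.
print("amount in blood at day 84:", round(float(sol.y[1, -1]), 3))
print("cumulative amount in urine at day 84:", round(float(sol.y[2, -1]), 3))
```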
Procedia PDF Downloads 319599 The Role of Group Dynamics in Creativity: A Study Case from Italy
Authors: Sofya Komarova, Frashia Ndungu, Alessia Gavazzoli, Roberta Mineo
Abstract:
Modern society requires people to be flexible and to develop innovative solutions to unexpected situations. Creativity refers to the “interaction among aptitude, process, and the environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context”. It allows humans to produce novel ideas, generate new solutions, and express themselves uniquely. Only a few scientific studies have examined the influence of group dynamics on individual creativity. Gaps remain in the research on creative thinking, for instance around the observation that collaborative effort frequently results in the enhanced production of new information and knowledge. It is therefore critical to evaluate creativity within social settings. The study aimed to explore the group dynamics of young adults in small-group settings and the influence of these dynamics on their creativity. It included 30 participants aged 20 to 25 who were attending university after completing a bachelor's degree. The participants were divided into groups of three, in both gender-homogeneous and gender-heterogeneous groups. The groups' creative task was tied to the Lego mosaic created for the Scintillae laboratory at the Reggio Children Foundation. Group dynamics were operationalized as patterns of behavior classified into three major categories: 1) Social Interactions, 2) Play, and 3) Distraction. Data were collected through audio and video recording and observation. The qualitative data were converted into quantitative data using an observational coding system and then analyzed, revealing correlations between behaviors using median points and averages. For each participant and group, the percentages of the observed behavior signals were computed. The findings revealed a link between social interaction, creative thinking, and creative activities. They also revealed that the more intense the social interaction, the lower the amount of creativity demonstrated. This study bridges the research gap between group dynamics and creativity, and the approach calls for further research on the relationship between creativity and social interaction.Keywords: group dynamics, creative thinking, creative action, social interactions, group play
Procedia PDF Downloads 127598 Application of Seismic Refraction Method in Geotechnical Study
Authors: Abdalla Mohamed M. Musbahi
Abstract:
The study area lies in the Al-Falah area on the Airport Road in Tripoli, in Zone (16), where a multi-floor residential and commercial complex is planned; this zone was divided into seven subzones. In each subzone, orthogonal profiles were collected using the seismic refraction method. The overall aim of this project is to investigate the applicability of the seismic refraction method, a commonly used traditional geophysical technique for determining the depth to bedrock, the competence of the bedrock, the depth to the water table, or the depth to other seismic velocity boundaries. The purpose of the work is to make engineers and decision makers recognize the importance of planning and executing a pre-investigation program that includes geophysics and, in particular, the seismic refraction method. This aim is achieved by evaluating the seismic refraction method at different scales: determining the depth and velocity of the base layer (bedrock) and calculating the elastic properties of each layer in the region. Orthogonal profiles were carried out in every subzone of Zone (16). In the seismic refraction layout, the geophones are placed along a straight imaginary line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used to generate the P and S waves. The first and last shot points are placed about 5 meters from the geophones, and the middle shot point is placed between the 12th and 13th geophones; from the time-distance curves, the P- and S-wave velocities were calculated and the layer thicknesses were estimated for up to three layers. Any change in the physical properties of the medium (shear modulus, bulk modulus, density) changes the velocity of the waves passing through it, because a change in the properties of the rocks changes the medium parameters: density (ρ), bulk modulus (κ), and shear modulus (μ). The velocity of waves travelling through rocks is therefore closely related to these parameters, and the parameters can be estimated from the primary and secondary wave velocities (P-wave, S-wave).Keywords: application of seismic, geotechnical study, physical properties, seismic refraction
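The standard elasticity relations behind this statement (general seismology results, quoted for completeness rather than taken from the paper) connect the P- and S-wave velocities to these parameters:

$$V_p = \sqrt{\frac{\kappa + \tfrac{4}{3}\mu}{\rho}}, \qquad V_s = \sqrt{\frac{\mu}{\rho}},$$

so that, once $V_p$ and $V_s$ are known from the time-distance curves and a density is measured or assumed for each layer, the elastic moduli follow as

$$\mu = \rho V_s^{2}, \qquad \kappa = \rho\left(V_p^{2} - \tfrac{4}{3} V_s^{2}\right).$$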
Procedia PDF Downloads 491597 Performance Evaluation of Production Schedules Based on Process Mining
Authors: Kwan Hee Han
Abstract:
The external environment of enterprises is changing rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity while carefully considering many interacting constraints. At present, many computerized software solutions are used in enterprises to generate a realistic production schedule and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, supporting process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of a production scheduling software system with process mining techniques, since software systems generate event logs for further uses such as security investigation, auditing, and debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of generated production schedules, manufacturing enterprises can improve the quality of their production schedules.Keywords: data mining, event log, process mining, production scheduling
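A minimal sketch of how one of the evaluation criteria named above, workstation utilization (whose extremes hint at bottleneck workstations), could be derived from a scheduling event log; the column names and example data are assumptions for illustration, not the schema of any particular scheduling system.

import pandas as pd

def workstation_utilization(log: pd.DataFrame) -> pd.Series:
    """Busy time of each workstation divided by the observed scheduling horizon."""
    log = log.copy()
    log["start"] = pd.to_datetime(log["start"])
    log["end"] = pd.to_datetime(log["end"])
    horizon = (log["end"].max() - log["start"].min()).total_seconds()
    busy = (log["end"] - log["start"]).dt.total_seconds()
    return (busy.groupby(log["workstation"]).sum() / horizon).sort_values(ascending=False)

events = pd.DataFrame({
    "case_id":     ["order1", "order1", "order2", "order2"],
    "workstation": ["milling", "assembly", "milling", "assembly"],
    "start": ["2024-01-01 08:00", "2024-01-01 10:00",
              "2024-01-01 09:00", "2024-01-01 12:00"],
    "end":   ["2024-01-01 10:00", "2024-01-01 12:00",
              "2024-01-01 11:30", "2024-01-01 13:00"],
})
print(workstation_utilization(events))   # persistently high values flag candidate bottlenecks

Conformance checking and route-pattern discovery would, in the same spirit, be run on the full event log with a dedicated process mining toolkit rather than hand-rolled code.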
Procedia PDF Downloads 279596 Microbial Fuel Cells and Their Applications in Electricity Generating and Wastewater Treatment
Authors: Shima Fasahat
Abstract:
This is an experimental study of microbial fuel cells for electricity generation and wastewater treatment. Finding new, clean, and sustainable ways of supplying energy has become very important, and many researchers around the world are therefore studying new and sustainable energy sources. There are different ways to produce such energy, for example solar cells, wind turbines, geothermal energy, and fuel cells. Fuel cells come in different types, one of which is the microbial fuel cell (MFC). In this research, an MFC was built in order to study how it can be used for electricity generation and wastewater treatment. The microbial fuel cell used in this research is a reactor with two tanks containing a catalyst solution. The chemical reaction in microbial fuel cells is a redox reaction. The MFC in this research is a two-chamber cell: the anode chamber is anaerobic (an anaerobic baffled reactor, ABR) and the other chamber is the cathode chamber. The anode chamber contains stabilized sludge, which is the source of the microorganisms that carry out the redox reaction; the main microorganisms here are Propionibacterium and Clostridium. The electrodes of the anode chamber are graphite plates. The cathode chamber consists of graphite plate electrodes and catalysts such as O2, KMnO4, and C6N6FeK4. The membrane separating the chambers is Nafion 117; the reason for choosing this membrane is explained in the full paper. The main goal of this research is to generate electricity and treat wastewater. It was found that using electron acceptor compounds such as O2, KMnO4, and C6N6FeK4 speeds up electron transfer, so that a higher current is obtained in less time. The best compounds for this purpose were found to be those containing iron in their chemical formula. It is also important to pay attention to the amount of nutrients entering the bacterial chamber; adding excess nutrients can in some cases reverse the result. Using the ABR, the chemical oxygen demand decreases day by day until it reaches a stable value.Keywords: anaerobic baffled reactor, bioenergy, electrode, energy efficient, microbial fuel cell, renewable chemicals, sustainable
Procedia PDF Downloads 227595 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China
Abstract:
“National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important initiative in putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation, and security system construction. Hubei province, as the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has evolved over a long period, influenced by multiple internal and external driving forces, and it is not yet clear what the main causes of its formation are or what the effective ways of optimizing it might be. By analyzing land-use data from 2016, this paper reveals the present status of the territory in Hubei. Combined with economic and social data and construction information, the driving forces behind the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate distinctly. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate agglomeration effects; transportation infrastructure construction, which leads to axial expansion; and major provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established based on development intensity control, out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern could be established. These findings provide a basis for Hubei territorial planning and a reference for future territorial planning in other provinces.Keywords: driving forces, Hubei, optimizing strategies, spatial pattern, territory
Procedia PDF Downloads 105